
Benefits of gigabit Ethernet for backup infrastructure?





I manage a relatively small network of around 10 servers, including our backup server running Retrospect on Mac OS X 10.3.6. We perform nightly backups of all our servers to three LaCie FireWire hard drives and an Ultra-SCSI AIT tape drive. With the exception of the backup server, all of our servers run either Windows 2000 or Debian Linux (the latter using the unofficial Retrospect client). All of the servers are connected to the same switch, so network throughput should not be an issue.


We are considering upgrading all of our servers to gigabit Ethernet, but I would like to know whether we would see any real performance gain. As it stands, the redundant backups to four different backup sets mean that a complete backup cycle takes almost eight hours every day. I'm not sure whether this is down to the speed at which Retrospect writes and compares its backups, or whether network throughput could also be a factor.
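One way to sanity-check whether the wire is even capable of being your bottleneck: work out the ceiling on how much data a 100 Mb/s link can move in your eight-hour window, and compare that to your actual nightly total. A rough sketch (the 80% utilization figure is my own assumption for well-behaved TCP over Fast Ethernet, and the numbers are illustrative, not yours):

```python
# Rough ceiling on what a Fast Ethernet link can move in an
# eight-hour backup window. EFFICIENCY is an assumed ~80%
# effective utilization for TCP on a switched 100 Mb/s segment.
LINK_MBPS = 100       # link speed in megabits per second
EFFICIENCY = 0.8      # assumed fraction of line rate actually achieved
WINDOW_HOURS = 8      # length of the backup window

bytes_per_sec = LINK_MBPS * 1e6 * EFFICIENCY / 8
gb_per_window = bytes_per_sec * WINDOW_HOURS * 3600 / 1e9

print(f"~{gb_per_window:.0f} GB movable in {WINDOW_HOURS} h at {LINK_MBPS} Mb/s")
# -> ~288 GB movable in 8 h at 100 Mb/s
```

If your nightly total across all four backup sets is well under that figure, the network is not what's eating the eight hours; Retrospect's scanning, compare pass, and tape/FireWire write speeds are the more likely suspects.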


In any event, can someone point me to some good resources on the topic, or otherwise provide me with some recommendations?


Thanks in advance.


I'll give you an anecdotal response.


I had a (nearly) free opportunity to upgrade my home network from 100M to 1G a while ago. I swapped out the central switch for a 10M/100M/1000M box and swapped out the NICs in several PCs for 10M/100M/1000M also. So my PCs are connected via 1G links (while the link to my cable modem is still 100M and things are only 3.5M/256K after that...).


I've noticed almost no improvement in performance or behavior for day-to-day stuff, and I still usually see <30Mb sustained transfers on the local network. I also had a chance to do a pretty good apples-to-apples comparison with a particular backup that was recycled just before and just after the upgrade, with a similar amount of data. Before the upgrade it took Retrospect about an hour to do the full backup (including the snapshot). After the upgrade it took about 45min for the same operation. So although this could be considered a performance jump, it really doesn't matter much to me (if a backup runs unattended overnight and nobody sees it, does it matter whether it takes one hour or three?). Of course this might scale in your case (e.g. an eight-hour backup becomes a six-hour backup?), but I don't think it will be anywhere near 10x faster.
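For what it's worth, you can squeeze a bit more out of that anecdote. If the only thing that changed was the wire speed (a simplifying assumption), then the network-bound portion of the backup shrank by roughly 10x, and the 60-to-45-minute change tells you how the time splits between the network and everything else:

```python
# Back-of-the-envelope split of backup time into a network-bound
# part and fixed overhead, from the before/after timings above.
# Model: t_before = overhead + net, t_after = overhead + net/speedup.
t_before = 60.0   # minutes for the full backup at 100 Mb/s
t_after = 45.0    # minutes for the same backup at 1 Gb/s
speedup = 10.0    # raw link-speed ratio, assumed fully realized

net = (t_before - t_after) * speedup / (speedup - 1)
overhead = t_before - net

print(f"network-bound: ~{net:.0f} min, other overhead: ~{overhead:.0f} min")
# -> network-bound: ~17 min, other overhead: ~43 min
```

In other words, only about a quarter of my hour was ever on the wire; the rest was scanning, snapshot building, and disk time, which gigabit does nothing for. That's why the win was 25% and not 10x.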


So my current take is that unless you have very fast processors and very fast disk systems and/or use a backup system with very low overhead (e.g. don't run Retro snapshots...), the jump to 1G from 100M is not a big or cost-effective win.



