Backup performance



What factors determine backup client performance, besides network and disk speed?


I ask because I am currently backing up many different clients (Linux, OS X, NetWare) using v7 and I get wildly varying transfer rates. One client on a particular volume will back up at 750 MB/min and another at 50 MB/min. It can also take an extremely long time to scan volumes before Retrospect even begins copying files. For example, I can back up an entire NetWare server (90 GB, Dell, RAID 5) before my OS X server even finishes scanning a 280 GB (Xserve RAID, RAID 5) volume. At an average rate of about 200 MB/min, the OS X server takes nearly 24 hours to back up.


These machines are all on the same Gigabit switch, all running at 1000/full duplex. I am backing up to large SATA disks on a Windows 2000 server running Retrospect v7, at night when the servers and network are mostly idle.


Things seem slowest when there are many small files; small files appear to kill performance. Is there anything I can do to tweak performance, particularly on OS X clients? Not all OS X clients and volumes are slow: some are fast (though usually not as fast as the one NetWare server).






You are absolutely right: small files kill performance, and they are likely the biggest bottleneck in your situation. Reading the inode and metadata for every file forces the disk heads to seek back and forth, which slows reads considerably. A large file count also makes the pre-backup scan take much longer, since every file must be enumerated and matched before copying starts.
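The per-file overhead is easy to see outside of Retrospect. Below is a minimal, hypothetical benchmark (not part of any backup tool) that reads the same total payload once as 1,000 small files and once as a single file; on most systems the many-file pass is noticeably slower even though the byte count is identical:

```python
# Hypothetical benchmark: same total bytes, many small files vs. one
# large file, to illustrate the per-file open/metadata overhead that
# slows backups of volumes with huge file counts.
import os
import tempfile
import time

def write_files(dirpath, count, size):
    # Create `count` files of `size` bytes each in `dirpath`.
    for i in range(count):
        with open(os.path.join(dirpath, f"f{i}.dat"), "wb") as f:
            f.write(b"x" * size)

def read_all(dirpath):
    # Read every file in `dirpath`; return total bytes read.
    total = 0
    for name in os.listdir(dirpath):
        with open(os.path.join(dirpath, name), "rb") as f:
            total += len(f.read())
    return total

with tempfile.TemporaryDirectory() as small, tempfile.TemporaryDirectory() as large:
    write_files(small, 1000, 1024)       # 1000 files x 1 KB
    write_files(large, 1, 1024 * 1000)   # 1 file  x ~1 MB

    t0 = time.perf_counter()
    small_bytes = read_all(small)
    t_small = time.perf_counter() - t0

    t0 = time.perf_counter()
    large_bytes = read_all(large)
    t_large = time.perf_counter() - t0

    print(f"small files: {small_bytes} bytes in {t_small:.4f}s")
    print(f"large file:  {large_bytes} bytes in {t_large:.4f}s")
```

The gap is usually much larger on spinning disks with cold caches, which is the case during a real backup scan.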


Your options are limited. The only practical solution is to narrow the scope of the backup by using subvolumes in Retrospect.
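To decide where to define subvolumes, it helps to know which directories hold the most files. A small hypothetical helper (the function name and root path are illustrative, not Retrospect functionality) that ranks directories by file count:

```python
# Hypothetical helper: rank directories under a root by the number of
# files they contain, to find candidates worth splitting into separate
# Retrospect subvolumes.
import os
from collections import Counter

def file_counts(root):
    # Walk the tree and count regular files per directory.
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        counts[dirpath] += len(filenames)
    return counts

# Example usage (path is illustrative):
#   for path, n in file_counts("/Volumes/Data").most_common(10):
#       print(n, path)
```

Directories near the top of the list are the ones dragging down scan and copy times; splitting them into their own subvolumes lets you schedule them separately or exclude churn you don't need.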






