
Large SLOW backup set, try #2,




Since I never got a response to this one, I thought I'd try again.


My client is backing up a number of OS X and Windows NT/2000 clients to a file backup set on a FireWire hard disk, using Retrospect 5 on a G4 OS X Server 10.2.6 machine with 1.5 GB of memory. His business generates tons of small files, so the backup set winds up with more than 1.2 million files. The backup itself runs reasonably well, and everything gets backed up at night, but a restore is so slow as to be nearly unusable. In particular, it's nearly impossible to navigate the directory tree, with delays of minutes between clicking an entry and seeing something happen in the GUI.


I presume that this is related to the number of files in the backup; is that a reasonable conclusion? If so, is the right approach to split the backup across multiple backup sets?









Yes, a backup set containing over a million files will restore more slowly than a set with 500,000 files. Generating a browser window for a volume with over a million files can be very slow. Splitting the data between sets will keep the size of each set more manageable.


Additionally, backing up subvolumes, rather than the root level of the drive, will speed up a restore. The snapshot browser window will then contain only the contents of the subvolume rather than the entire volume.


A search restore (Search for Files and Folders) can be much quicker than browsing a snapshot of the volume. This method is most useful when searching for particular files, rather than entire directories.



This topic is now archived and is closed to further replies.
