robertdana
Posted March 20, 2006

I just had the unfortunate experience of having to recover my Retrospect *backup server*. This took *much longer* than expected because of the need to recatalog the backup sets to perform the restore. This is frustrating because there was a perfectly usable catalog file sitting on the backup, but it couldn't be recovered without doing a recatalog in the first place.

As part of my standard backup practice, I'm going to start doing two things to avoid this pain again:

1. Keep backups of the backup server itself in a separate backup set from other machines.
2. Make backups of the Retrospect catalog files in a separate job to a separate backup set.

But this is cumbersome to configure and schedule, and there are several things that could be done "by default" to make this process go a lot faster by avoiding a full recataloging of the backup set:

1. Include a special link to the catalog files in the backup set so that they could be automatically located and recovered by Retrospect without a complete recataloging of the set, allowing other files to then be located and recovered without a recatalog.
2. Change the structure of the backup sets so that files for different machines are somehow "partitioned" and can be recataloged or located without scanning through the entire backup set containing all files for all machines. This would enable a feature like a "recatalog filter" so that files not belonging to the target machine could be skipped quickly during the recatalog process (a rough sketch of the idea follows below).

-Robert
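To make idea #2 concrete, here is a minimal sketch in Python. This is *not* Retrospect's actual on-media format; the `Segment` and `FileRecord` structures and the machine names are hypothetical, just to show how tagging whole segments by source machine would let a rebuild skip other machines' data outright:

```python
# Hypothetical partitioned backup-set layout (NOT Retrospect's real format).
from dataclasses import dataclass, field

@dataclass
class FileRecord:
    path: str     # path of the backed-up file on the source machine
    offset: int   # byte offset of the file's data on the media
    length: int   # size of the file's data in bytes

@dataclass
class Segment:
    machine: str                                # source machine for this segment
    files: list = field(default_factory=list)   # FileRecords in this segment

def recatalog(segments, target_machine):
    """Rebuild a catalog for one machine only.

    Because each segment is tagged with its source machine, segments
    belonging to other machines are skipped without a per-file scan --
    the "recatalog filter".
    """
    catalog = {}
    for seg in segments:
        if seg.machine != target_machine:
            continue  # skip the whole segment: no need to read its contents
        for rec in seg.files:
            catalog[rec.path] = (rec.offset, rec.length)
    return catalog

# Example: rebuild only the backup server's catalog entries.
segs = [
    Segment("backup-server", [FileRecord("/catalog/setA.cat", 0, 4096)]),
    Segment("workstation-1", [FileRecord("/home/alice/doc.txt", 4096, 512)]),
]
print(recatalog(segs, "backup-server"))
```

The point is that the cost of a rebuild would scale with the target machine's data rather than with the whole backup set.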
Richy_Boy
Posted April 16, 2012

We've talked about this before on the old forum, and a logical solution would be to have a relational database structure rather than a flat catalog file. This would help with the rebuild and repair of catalogs...
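For illustration, a relational catalog could look something like the sketch below (Python with the standard-library sqlite3 module; the table and column names are made up for this example, not anything Retrospect actually uses):

```python
# Rough sketch of a relational catalog instead of a flat catalog file.
import sqlite3

conn = sqlite3.connect("catalog.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS machines (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS files (
    id         INTEGER PRIMARY KEY,
    machine_id INTEGER NOT NULL REFERENCES machines(id),
    path       TEXT NOT NULL,
    offset     INTEGER NOT NULL,   -- where the file's data sits on the media
    length     INTEGER NOT NULL    -- size of the file's data in bytes
);
CREATE INDEX IF NOT EXISTS idx_files_machine ON files(machine_id);
""")

# Rebuild or repair then becomes a per-machine query instead of a
# sequential scan of one monolithic catalog file:
rows = conn.execute(
    "SELECT path, offset, length FROM files "
    "JOIN machines ON machines.id = files.machine_id "
    "WHERE machines.name = ?",
    ("backup-server",),
).fetchall()
conn.close()
```

With per-machine indexing like this, a damaged catalog could be repaired by re-scanning only the affected rows rather than regenerating the entire file.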
Archived
This topic is now archived and is closed to further replies.