Richy_Boy Posted August 7, 2009 (edited)

Is it possible to beef up the catalog function a bit? It seems to be one of the major problems with Retrospect for me. For example, removing a single snapshot (an old server, say) from a backup set hammers the backup server's catalog drive at 100% for several minutes, which not only wastes my time but also grinds all the automated backup jobs to a halt.

Also, simple backups spend an awful amount of time scanning, building snapshots, etc., when the actual backing up takes very little time. On a proactive backup jumping between clients, it spends WAY more time doing the bookkeeping than copying the data, which seems bizarre!?

Would it not be possible to implement some clever relational database model to handle duplication and to remove client machines without having to rebuild a static catalog? It seems odd in this day and age that huge single files are being opened, modified and saved again when more efficient processes could be used. That would transform Retrospect's performance... and I suspect would make the catalogs a touch more robust at the same time!

Rich

Edited August 7, 2009 by Guest
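To illustrate the kind of relational model I mean (a purely hypothetical schema, nothing to do with Retrospect's actual catalog format): if snapshots and their file entries lived in indexed tables, removing a decommissioned client would be a couple of DELETE statements touching only the matching rows, rather than rewriting a huge monolithic catalog file.

```python
import sqlite3

# Hypothetical catalog schema -- illustrative only, not Retrospect's format.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE snapshots (
        id       INTEGER PRIMARY KEY,
        client   TEXT NOT NULL,
        taken_at TEXT NOT NULL            -- ISO-8601 date
    );
    CREATE TABLE files (
        snapshot_id INTEGER NOT NULL REFERENCES snapshots(id),
        path        TEXT NOT NULL,
        size        INTEGER NOT NULL
    );
    CREATE INDEX idx_files_snapshot ON files(snapshot_id);
""")

# Two example snapshots: one from a retired server, one current.
con.execute("INSERT INTO snapshots VALUES (1, 'old-server', '2008-01-01')")
con.execute("INSERT INTO snapshots VALUES (2, 'mail-server', '2009-08-01')")
con.executemany("INSERT INTO files VALUES (?, ?, ?)",
                [(1, '/etc/passwd', 1024), (2, '/var/mail/rich', 2048)])

# Removing the old server deletes only its rows -- no full-catalog rewrite.
con.execute("DELETE FROM files WHERE snapshot_id = 1")
con.execute("DELETE FROM snapshots WHERE id = 1")
con.commit()

remaining = con.execute("SELECT client FROM snapshots").fetchall()
```

With the index on `snapshot_id`, the deletes are cheap even with millions of file entries, which is exactly the property the current rewrite-the-whole-file approach lacks.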
Richy_Boy Posted August 7, 2009 (Author)

Just a thought (whilst I'm STILL removing snapshots): as well as being able to multi-select snapshots, it would be great to have a drop-down to choose the age at which snapshots should be groomed off the backup set, i.e. 1 month, 6 months, 1 year, 5 years, or never. That would make removing old snapshots from a backup set a pretty automated process.

Rich
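The selection logic behind such a drop-down is trivial, which is rather the point. A rough sketch with made-up snapshot records (not Retrospect code; the function name and data are mine):

```python
from datetime import datetime, timedelta

# Hypothetical (client, date-taken) records, for illustration only.
snapshots = [
    ("old-server",  datetime(2004, 3, 1)),
    ("file-server", datetime(2009, 2, 15)),
    ("mail-server", datetime(2009, 8, 1)),
]

def snapshots_to_groom(snapshots, keep_for, now=None):
    """Return the snapshots older than the chosen retention period."""
    now = now or datetime.now()
    cutoff = now - keep_for
    return [(client, taken) for client, taken in snapshots if taken < cutoff]

# Drop-down set to "1 year", evaluated as of 7 August 2009:
old = snapshots_to_groom(snapshots, timedelta(days=365),
                         now=datetime(2009, 8, 7))
```

Everything the feature needs beyond this is already in the product; the grooming itself is the existing remove-snapshot operation, just driven by an age threshold instead of manual selection.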