Cygnis

"Transfer Backup Set" results in very large catalog files...


Hi all,

 

We are using Retrospect Single Server v7.7.620 on Windows 2003 SBS.

 

We have 'cloned' two large disk backup sets (one 1.4TB, one 1.2TB) to two new disk backup sets, using the "Transfer Backup Set" operation.

 

The resulting new backup sets have MUCH larger catalog files than the originals. One has gone from 750 MB to 1.7 GB, while the other has gone from 3.2 GB to 12.8 GB!
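For scale, those figures work out to growth factors of roughly 2.3x and 4x. A quick check (sizes are the ones reported above, converted to MB):

```python
# Sanity check on the reported catalog growth (sizes from the post, in MB).
originals_mb = [750, 3.2 * 1024]               # original catalog sizes
after_transfer_mb = [1.7 * 1024, 12.8 * 1024]  # sizes after Transfer Backup Set

for before, after in zip(originals_mb, after_transfer_mb):
    print(f"{before:.0f} MB -> {after:.0f} MB ({after / before:.1f}x)")
```

So the larger set's catalog didn't just grow with the data (which stayed the same size); it quadrupled, which is what points at compression being lost rather than extra content.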

 

Catalog file compression is enabled in all sets. The new sets have the same grooming settings, the same number of sessions/snapshots, and the same backup data size on disk as the old sets... it's just the catalog files that are much larger.

 

Details that may be relevant:

- Both the original sets and both the new sets were created in version 7.7 - no other Retrospect versions have been involved.

- I got a couple of "Backup set format inconsistency" errors during the Transfer operations.

 

I've tried grooming one, but the catalog file size didn't change. Would rebuilding the catalog file help? (If I were to rebuild, would the snapshots all be retained?)

 

Any other ideas, and/or suggestions on what might be the cause of this? Thanks!

 

P.S. I have also submitted a support request, and will share my findings here if/when I receive a reply.


Retrospect Support suggested a catalog Rebuild. I did this for the smaller of the two sets, and it initially resulted in a Catalog file of the same size. However, the "Compress catalog" setting in the Backup Set properties had reverted to "Off"... when I set it back to "On" and ran another backup, the catalog compressed properly again, so all is well for that set now.

 

(Prior to the rebuild, turning the "Compress catalog" option on/off had no effect... the catalog file simply wasn't compressing.)

 

Unfortunately, when I tried rebuilding the bigger set, it failed with what appear to be memory allocation errors (e.g. TMemory::mhalloc: VirtualAlloc(246.5 M, MEM_RESERVE) failed, error 8). The scheduled weekly Groom of this backup set then ran afterwards, and wiped out all but the one Snapshot that had survived the botched Rebuild. :(
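For what it's worth, Win32 error 8 is ERROR_NOT_ENOUGH_MEMORY, and a MEM_RESERVE failure at only ~246 MB usually points at virtual address-space exhaustion or fragmentation rather than low physical RAM. A rough sketch of the arithmetic (assuming Retrospect 7.7 runs as a 32-bit process; the free-space figures below are purely hypothetical, for illustration):

```python
# Why a 246.5 MB VirtualAlloc(MEM_RESERVE) can fail with error 8
# (ERROR_NOT_ENOUGH_MEMORY) even when plenty of memory is free:
# the reservation needs one *contiguous* free region of address space.
user_space_mb = 2 * 1024         # default 2 GB user address space, 32-bit Windows
requested_mb = 246.5             # the failing reservation from the log
total_free_mb = 900              # hypothetical: lots of free space in total...
largest_contiguous_gap_mb = 180  # ...but fragmented into gaps below the request

can_reserve = largest_contiguous_gap_mb >= requested_mb
print(f"total free {total_free_mb} MB, largest gap {largest_contiguous_gap_mb} MB"
      f" -> reservation {'succeeds' if can_reserve else 'fails'}")
```

If that's what is happening, a rebuild of a 12.8 GB catalog simply runs the process out of contiguous address space partway through.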

 

Thankfully, I still have the original source backup set online, so I've created another new one by transferring selected Snapshots instead of the entire backup set. This one's catalog is not compressing either! Will try rebuilding it and post back with my results.


 

I'm happy to report that catalog compression is now working for this new set, after a catalog rebuild.

 

I'm curious: why do you want to compress the catalog? We never do.

 

To keep it compact. :) We back up our catalogs (via Duplication to other disks and including them in tape backups of the server), and at large sizes, keeping them backed up becomes quite slow and space-consuming.

 

Having said that, I may start excluding the catalogs from some of our scheduled backups, as keeping them backed up probably won't help us in some of the anticipated disaster recovery scenarios. If I do this, compression will probably become unnecessary.

 

Is there a performance benefit to leaving them uncompressed?


Is there a performance benefit to leaving them uncompressed?

Dunno. Is there one from compressing them? :P

 

I would weigh the pros and cons a bit. I'm not sure how much you actually gain storage-wise by compressing the catalogs. This feature has been in Retrospect for ages and probably isn't really needed anymore, since storage is so cheap today. Also, the program defaults to uncompressed, maybe for a reason? Having said that, I would expect such a feature to be unproblematic; if it turns out not to be, better to avoid it where possible.


I'm pretty sure, from memory, that the catalog file is about half the size when compressed. This might not mean much for a small backup set with large files, but for a huge backup set with millions of small files, the catalog can get pretty huge.

 

We use SSDs for the OS and catalog files, so space is at a premium; hence, we compress.
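As a rough illustration of why catalog-like data compresses well (this uses synthetic, repetitive file-metadata records with Python's zlib, not Retrospect's actual catalog format, so the exact ratio is only indicative):

```python
import zlib

# Illustrative only: synthetic "catalog-like" records, i.e. many near-identical
# file-metadata lines. Such repetitive data typically compresses to well under
# half its original size with a general-purpose compressor.
records = b"".join(
    b"path=C:/Users/dev/project/src/file%06d.obj;size=1024;mtime=2010-01-01\n" % i
    for i in range(10000)
)
compressed = zlib.compress(records, 6)
ratio = len(compressed) / len(records)
print(f"compressed to {ratio:.0%} of original size")
```

Real catalogs contain less-redundant data (hashes, varied paths), so "about half" for a real-world catalog sounds plausible.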


Okay, they can get largish, but we have plenty of space. :lol: BTW Richy, does the SSD help speed-wise when building the catalog file? We seem to lose a lot of time on that (our devs use millions of very small files, so that's the culprit for us).

