Maser Posted January 29, 2004

When I do full backups of my computers, there's a 4-10 minute lag while the computer is scanned (which nothing can be done about) before the backup starts. But there's also a 4-5 minute lag between when the backup seems complete and when the catalog file finishes compressing. So my question: is there anything wrong with toggling catalog compression off, then on, then off again, etc., for the same storage sets? Ideally, I'm trying to improve overall performance by *not* compressing the catalog during part of my initial backup, then turning compression on, then turning it off again for another part, and so on. Would that eventually corrupt the catalog, or is this OK to do?
CallMeDave Posted January 29, 2004

The primary reason for using compression in 5.0 was to avoid the 2 GB limit on catalog size. With 6.0 breaking that limit, you could just do without compression altogether (drive prices keep falling, and falling, and falling...)

Dave
Maser (Author) Posted January 29, 2004

Except I'm sticking with 5.1, so the question still stands -- and, technically, it would still apply under 6.0. My current *compressed* catalog files for my OS X client backups (100 clients, with storage sets kept for 2 months) are 1 GB in size. I'd hate to think how large they would be *uncompressed*.
This topic is now archived and is closed to further replies.