
Disk Fragmentation and Processor Pegging



I have two questions:

 

1.) Is there a recommended block allocation size for Retrospect to use when backing up to disk that might help reduce fragmentation? We've backed up about 2.9 TB to a SCSI-attached ATA RAID array, and the RAID is 95% fragmented. Short of running a scheduled defragmentation utility, which we plan to do, I'm wondering if there is an optimal Retrospect configuration to help minimize this.

 

2.) Why does Retrospect, specifically MultiServer 6.5, hit the processor so hard during backup operations? The system we're using runs a 2.4 GHz Xeon with 1 GB of RAM. While backups run, we've noticed that at 4 streams (of 8 total) the Windows Task Manager reports the processor at anywhere between 98-100% utilization. Needless to say, Retrospect crawls while trying to do anything else (read the operations log, view the device manager, etc.). I would certainly expect heavy disk and network I/O, but I'm very surprised to see such extreme use of the processor.


Hi

 

There aren't any settings in Retrospect that can help with fragmentation. Chances are the multiple backup streams are what's making it so bad.

 

Are you using compression in your backups? That burns a lot of CPU - even more so when you are running multiple streams.

 

Thanks

Nate


Hey Nate,

 

No, we're not currently using compression in Retrospect. We were considering it, but after reading your reply I'm beginning to think it's not a good idea. Would it make any difference, from a Retrospect perspective, to add a second processor? That is, is Retrospect multiprocessor-aware? Also, is there a way, in Retrospect rather than Windows, to prioritize processor use during backups so that administrative access is still possible?

 

On the subject of disk fragmentation, should I be concerned about the integrity of the data we're backing up to such a fragmented disk? Needless to say, I don't want to defeat the purpose of securing the data in the first place by being unable to restore it to its original state. What is Dantz's position?

 

As always, thanks much.

 

-Michael


Hi

 

Unfortunately a second processor won't help Retrospect; it currently won't take advantage of two CPUs. However, a second chip will certainly let you run other things while Retrospect is busy on the first one.

 

Retrospect does not provide any process priority management features.
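
One workaround is to lower Retrospect's priority from the Windows side instead. As a rough sketch (I'm assuming the executable is named Retrospect.exe and guessing at the default install path - check Task Manager for the real process name and adjust accordingly):

    rem Launch at below-normal priority (Windows 2000 and later)
    start /belownormal "" "C:\Program Files\Retrospect\Retrospect.exe"

    rem Or lower the priority of the already-running process (Windows XP/2003)
    wmic process where name="Retrospect.exe" call setpriority 16384

Windows will then favor your console work over the backup whenever the two compete for the CPU.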

 

Fragmentation will not corrupt or damage your data. It may make reading it a little bit slower but you don't need to worry about it. I would leave the disk as is.

 

Thanks

Nate


  • 2 weeks later...

High CPU usage also comes from Proactive backups. They (4 of them, 83 clients each) routinely run my CPU at 100% while polling for sources. Unfortunately there's nothing you can do but hope that the next version will be programmed better. As far as compression goes, if you have a dual processor and Windows 2000 or later, you could enable NTFS compression instead of using Retrospect's for better performance.
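
For what it's worth, here's a sketch of turning NTFS compression on from the command line (assuming the backup files live in D:\Backups - substitute your own path):

    rem Compress the backup folder and everything already in it
    compact /c /s:D:\Backups /i

    rem Report per-file compression ratios afterwards
    compact /s:D:\Backups

New files Retrospect writes into a compressed folder inherit the compression attribute automatically.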

 

I ran some tests in the past and found that the standard 4 KB cluster size is optimal for file backups. (I no longer use file backups, however, as they were getting corrupted whenever the hard drive ran out of disk space.)
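
If anyone wants to check or reproduce that, the cluster size is chosen at format time (a sketch assuming the backup volume is E: - formatting erases the volume, so only do this on an empty disk):

    rem Format with NTFS and an explicit 4 KB allocation unit size
    format E: /FS:NTFS /A:4096

    rem Check an existing volume's cluster size (XP and later; look for "Bytes Per Cluster")
    fsutil fsinfo ntfsinfo E: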

 


Mikee


  • 3 months later...

Retrospect Multiserver 6.0 has been running much more slowly lately. I tried running the Windows XP Pro Disk Defragmenter because the disk shows heavy fragmentation, but the utility complains it is unable to defragment the .bkf files, even though the 130 GB hard drive is only half full. Is there a better, more reliable, industrial-strength defragmenter program I could try?

 

---Pam


