
Awful, Poor Performance with Retrospect for Windows 7.7.620 Multi-Server



I replied in a related thread, but thought I'd open a new one. Here is my post from that thread; if anyone has any suggestions or ideas, I'd love to hear them. At present I'm thinking my issues could be due to Server 2008 (32-bit) and Retrospect 7.7.620, but honestly I don't know. I've never seen such poor performance.

 

Oh - and the system I compare below that runs 7.7.620 Multi-Server (System B) is new - only about 4 months old.

 

****************************************************************************

 

I see this as well with my backups, and I plan on calling support to see if they have any suggestions. I've been using Retrospect for Windows since version 7.5 - more than five years of usage now - under a number of configurations. For comparison's sake I'll list out the last configuration, which ran 7.5 and was then upgraded to 7.7 because of Exchange 2010. We'll call this System A.

 

System A Specifications:

Single multi-core Xeon processor with 8 GB of RAM, running Windows Server 2003 R2

1 Gb fiber network adapter to a core HP 9308 switch

SCSI connector to a 16-drive external array. Each drive is 1 TB SATA / 7,200 rpm; RAID 5 gives 15 TB of data storage (using round numbers), and the RAID is all hardware based.

SCSI connector to a Sony StorStation LIB-AIT4 drive/library.

Performance: I could run 7 simultaneous jobs from clients both connected directly and not connected directly to the core 9308 switch, and I would get a mean average of at least 70 MB/min across all jobs. When one job finished, the performance of the remaining jobs would bump up.

Clients I was backing up: file servers, transactional data servers, SharePoint, and Exchange. The OSes were a mixture of Server 2003 R2 and Server 2008 (the Exchange 2010 system).

 

System B Specifications (the current system, at my new job):

Single multi-core Xeon processor with 8 GB of RAM, running Server 2008 SP2 (32-bit)

6 internal 3 TB SATA / 7,200 rpm drives - RAID 5 = 15 TB (round numbers again) - slung off a 3ware SATA RAID card that can handle 3 Gbps throughput.

Dual 1 Gb Intel NICs, teamed for 2 Gb of throughput, going into an HP 5308xl switch (I'm configuring a 9308 presently)

Running Retrospect 7.7.620

SCSI connector to a Sony StorStation LIB-AIT3 drive/library

 

It doesn't matter if I'm running 1 job or 5; this is what happens. The "copying files" portion of the job starts, and I can see the performance throttle up to 120 MB/min, then 260 MB/min, then up to 600 MB/min - all the while you can see the files being copied just flying by. Then the job seems to hang (but it hasn't) for a long period of time, say 10 to 15 plus minutes, and then the performance drops down to 11.5 or 12.9 or 13.6 MB/min and stays that way for the rest of the job.

The mean average for the jobs comes out at 12.6 MB/min, and again it doesn't matter if only 1 job is running or 5 simultaneously - they all act the same way. The part where the job seems to hang, I believe, is when the throughput is massively throttling down.
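
Just to put that drop in perspective, here's a quick back-of-the-envelope calculation (plain Python, nothing Retrospect-specific; the 100 GB size is just an example, not one of my actual jobs):

```python
# Rough backup-duration estimates at the throughput figures mentioned above.
# Assumes 1 GB = 1024 MB and a steady transfer rate -- example numbers only.

def backup_hours(size_gb: float, rate_mb_per_min: float) -> float:
    """Estimated hours to move size_gb of data at rate_mb_per_min."""
    return (size_gb * 1024) / rate_mb_per_min / 60

for rate in (600.0, 70.0, 12.6):
    print(f"100 GB at {rate:>5} MB/min ~= {backup_hours(100, rate):6.1f} hours")

# 100 GB at 600 MB/min:  ~2.8 hours
# 100 GB at  70 MB/min:  ~24.4 hours
# 100 GB at 12.6 MB/min: ~135.4 hours
```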

 

Restores: From the 15 TB data volume on the Retrospect server to a directory called "restore" on the 500 GB boot drive (I never restore directly to the "original" location), they are like freaking greased lightning.

 

Backups to tape from the disk backup sets are also like freaking greased lightning.

 

Clients I'm backing up: Right now, all Server 2003 boxes that are old and whose data volumes are heavily fragmented. I know this can be a source of poor performance, and I'm in the process of building new servers to replace them. They are file servers, plus one transactional data server. Oh - and all clients have 1 Gb network connections.

 

Also - I'm backing up a network "client": a 1 TB mirrored NAS, a Seagate Black Armor NAS-220 to be precise. This unit does not have the Retrospect client installed, but Retrospect can see it and copy the files. The NAS-220 is only 2 to 3 months old, meaning it's new, not fragmented, and it also has a 1 Gb NIC. Throughput on the NAS-220 backup job is slightly different: first the performance throttles up to around 500 MB/min, then throttles down to around 11.9 MB/min, then back up, and then back down again.

 

For the life of me I do not understand why I have such poor performance. I'm going to be checking my switch (again), and I read something about Windows Server 2008 and shadow copies (I'll have to find and re-read the post), but the high performance of restores on the Retrospect server from the data volume to the boot volume indicates to me that the hardware within the server is up to snuff. Same with the backups to tape from the disk backup sets.

 

Does anyone have any thoughts or ideas as to why I, as well, am having such poor performance, even when running only 1 single backup job? With only 1 backup job running, the thing should be screaming ............... :angry:

 

Again, any thoughts, suggestions, or ideas would be greatly appreciated. I may even copy this and start a new thread ......

 

Thanks ....

 

G.


Check the write cache setting on the RAID on system B.

 

After a RAID-5 failure on our site (two disks failed before the first one had been rebuilt), we had to configure the RAID from scratch. Write performance was good for a minute or two and then dropped. Read performance was good. Turning on the write cache solved the problem.

 

The recommendation for our (old) RAID is to have write cache turned off since we don't have battery backup, but we had to dismiss that recommendation to get reasonable performance.

 

Hope this helps.



 

Thank you for the suggestion. It's interesting, because that is essentially what's happening here: write performance is good for a minute or two, and then degrades horribly. I'll check my RAID settings, change the write cache, and report back after some tests.
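
For anyone else who needs to do the same check, this is roughly what I'll be running - a minimal sketch assuming a 3ware controller managed with 3ware's tw_cli utility, where /c0 and /u0 are placeholder controller/unit IDs (yours may differ, and the exact cache syntax is worth confirming against tw_cli's own help before running it):

```python
# Minimal sketch: inspect a 3ware RAID unit and enable its write cache via tw_cli.
# Assumes tw_cli is installed and on the PATH; /c0 and /u0 are placeholder IDs.
import subprocess

def tw(args):
    """Run a tw_cli command and print whatever it reports."""
    result = subprocess.run(["tw_cli", *args], capture_output=True, text=True, check=True)
    print(result.stdout)

tw(["/c0/u0", "show"])              # current unit status, including the cache setting
tw(["/c0/u0", "set", "cache=on"])   # turn the unit's write cache on
tw(["/c0/u0", "show"])              # confirm the change took effect
```

As noted above, the usual caveat applies: with the write cache on and no battery backup on the controller, a power failure mid-write can cost you data.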

 

Again, thanks for the info! :)

 

G.


@ Lennart Thelander ...... you, sir, get 5 gold stars! :D I checked the RAID controller, and write caching was in fact disabled. I enabled it, and I'm presently doing a test backup job (40,000-plus files, 142 GB in total size), and the backup job is screaming. The same test with the write cache disabled was at 48 hours and counting and hadn't finished yet.

With the write cache enabled, the estimated time to back up all 142 GB is 3 hours, 17 minutes.

 

At this moment the job has been running for 25 minutes (including scanning) and the throughput is still way up (over 620 MB/min), whereas before it would have peaked around that for a minute or two before cycling down to 11.9 MB/min.
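
Those two numbers line up, too. A quick sanity check - again just arithmetic in Python, assuming 1 GB = 1024 MB:

```python
# What average throughput does a 3 h 17 min estimate for 142 GB imply?
size_mb = 142 * 1024          # 142 GB expressed in MB
minutes = 3 * 60 + 17         # 3 hours 17 minutes
print(size_mb / minutes)      # ~738 MB/min, in line with the 620+ MB/min I'm seeing
print(size_mb / 12.6 / 60)    # ~192 hours at the old 12.6 MB/min rate -- about 8 days
```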

 

Thank you so much for your input ........ :)

 

Take care,

 

G.

