Ramon88

"Idle/loading/preparing" takes (too) much time.


I'm not the first to ask this, I think, but I'm seeing ridiculously long "idle/loading/preparing" times with some of our Windows clients. Retrospect takes forever to match and snapshot these clients, and thus wastes a lot of precious backup time; I can hardly keep the backup scripts within the off-hours. Of course I could take the workaround route and split everything up, giving each machine its own backup script, for example. But let's be honest, that's a bit impractical. Retrospect should do that kind of thing for me!

 

The 'slow' clients are either Vista or XP based (the Retrospect server and client software are both the most recent versions). Most of them are developer machines that hold a lot of (very) small files, on the order of 350,000 to 650,000 per client. These clients each take between 45 and 120 minutes of idle/loading/preparing; the backup itself finishes in mere seconds.

 

I've asked our developers to reduce the number of files they work with, but that isn't practical for all of them. And quite frankly, why is this a problem for Retrospect anyway? I seriously question the efficiency of the Retrospect client.

 

Can we expect a client update resolving this issue? Or do we have to wait for 8.0 for this to improve? Or do I need to accept that this behaviour will remain the norm for some time to come? If so, I'd like to know, so I can start splitting up the backup scripts. That's something I'd rather not do, but if it's the only workaround...


Hi Robin,

 

The server here is version 7.6.123; the clients are 7.6.106.

By the way, these clients are all developer machines running MS Visual Studio 2008 and SVN for version control, which greatly increases the number of small files.

 

One client has 636,000 files to process. It takes just shy of two hours preparing for the actual backup (which, being incremental, takes less than three minutes of course). This client has its files located on a VelociRaptor drive... so processing runs at around 5,600 files per minute.

 

On the same client, a backup of its user directory of 50,000 files takes less than 5 minutes, everything included. Processing there is about 11,700 files per minute.
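For reference, here is the back-of-the-envelope arithmetic behind those figures (the timings are approximate readings from the log, so treat the ms-per-file numbers as rough):

```python
# Back-of-the-envelope throughput from the (approximate) timings above.
dev_files, dev_minutes = 636_000, 113    # ~1 h 53 min of preparing
usr_files, usr_minutes = 50_000, 4.3     # user directory, all included

for label, files, minutes in [("dev volume ", dev_files, dev_minutes),
                              ("user folder", usr_files, usr_minutes)]:
    print(f"{label}: {files / minutes:,.0f} files/min, "
          f"{minutes * 60_000 / files:.1f} ms per file")
```

So the 'slow' volume is spending roughly twice as long per file as the user directory, on top of having over ten times as many files.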

 

(Very) small files seem to take a big performance hit. This is probably a fairly specific problem for clients in a developer role.

 

Is there a way to speed up the client's processing performance?

 

For what it's worth, we back up using Media Verification (MD5 digests), and the storage target is a Disk Backup Set using Retrospect's standard grooming policy.
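As a generic illustration of why many tiny files hurt (this is not Retrospect's code, just a sketch of per-file overhead): digesting the same total amount of data as thousands of small files is noticeably slower than digesting it as one large file, because of the per-file open/read/digest cost:

```python
# Generic demonstration (not Retrospect's internals): hashing the same
# amount of data as thousands of small files versus one large file shows
# the per-file open/read/digest overhead that many tiny files add.
import hashlib
import os
import tempfile
import time

TOTAL = 50 * 1024 * 1024            # 50 MB of data either way
COUNT = 5_000                       # 5,000 small files of ~10 KB each
CHUNK = os.urandom(TOTAL // COUNT)

with tempfile.TemporaryDirectory() as d:
    small = [os.path.join(d, f"f{i}") for i in range(COUNT)]
    for path in small:
        with open(path, "wb") as f:
            f.write(CHUNK)
    big = os.path.join(d, "big")
    with open(big, "wb") as f:
        for _ in range(COUNT):
            f.write(CHUNK)

    t0 = time.perf_counter()
    for path in small:
        with open(path, "rb") as f:
            hashlib.md5(f.read()).digest()
    t_small = time.perf_counter() - t0

    t0 = time.perf_counter()
    h = hashlib.md5()
    with open(big, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    h.digest()
    t_big = time.perf_counter() - t0

print(f"{COUNT:,} small files: {t_small:.2f} s; one large file: {t_big:.2f} s")
```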


The more files you have, the longer it will take to build the snapshot.

 

For a Server OS, turn off the script option to copy NTFS File permissions from servers.

 

How much RAM do you have? How much free space do you have on the C: disk of the backup computer?


Hi Robin,

 

The clients are a mixed bunch of Vista Business (x64) and Windows XP Pro (x86).

 

(We also have 2003 and 2008 servers holding these kinds of developer files; they take their good time as well, but I'll see if we can incorporate your NTFS permissions tip.)

 

The clients all have at least 2 GB of RAM and dual-core CPUs. Our hard drive policy is to fit a larger drive once usage passes 75% of the drive's or partition's capacity; in these cases neither even passes 50%. Users are logged out while the backup takes place.

How much free space do you have on the C: disk of the BACKUP COMPUTER (backup server)?

 

I didn't see the answer to that specific question.

 

The more small files you have, the longer the process will take.


Ah sorry, I misunderstood.

 

This particular backup server runs its OS (XP Pro) from a 16 GB SSD module. It's a lean installation with only Retrospect on top of the OS. The machine is built around a dual-core Intel T7200. Swapping is disabled (better for the SSD). It boots very quickly and doesn't write much to its C: disk. Catalogs are stored on one of the storage hard disks. The installation, including the hibernation file, takes about 5 GB of storage. Its only reason for being is Retrospect; it's not used for anything else.

 

For efficiency it has only 1 GB of RAM. Do you think expanding that will make a difference? As for swapping: so far it hasn't needed it, but we could easily use one of the storage disks for that. However, that would keep the disk spinning.

 

This machine was tailor-made with 'green' computing in mind. It's a bit of an oddball, but so far it has run without too many issues. Monitoring CPU and RAM suggests everything is within its capabilities. The problem seems to be more of a client-side thing. Could the client be made more 'small files friendly'?


Retrospect works best with at least 2 GB of RAM. We also require between 5 and 10 GB of free space on the C: disk for each execution unit you are running.


Thanks, I'll add another module and keep you posted.

 

Can you explain why Retrospect needs 5-10 GB of space on C: for each execution unit? Last time I checked, I didn't see Retrospect filling up the drive during execution. Does it write its own temp files, and are they kept in balance with the available space on the C: drive?


Retrospect uses the default Windows temp directory and a custom directory to cache temp files during backup operations.
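For anyone who wants to verify this on their own backup server, a quick generic check (assuming Python is available; nothing here is Retrospect-specific) of where the default temp directory lives and how much free space C: has:

```python
# Quick check (generic Python, nothing Retrospect-specific): where the
# default temp directory lives and how much free space C: has.
import shutil
import tempfile

print("temp directory:", tempfile.gettempdir())   # honours %TMP%/%TEMP%
usage = shutil.disk_usage("C:\\")
print(f"C: free: {usage.free / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB")
```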


I'm currently monitoring a backup run (two execution units were active at the start; now only one remains, the 'slow' one).

 

Initial conclusions:

- Free RAM is not the issue in this case: the Commit Charge peak reads 583,604 K versus a limit of 940,412 K.

- Free space on disk C: hardly changed, maybe ±200 MB.

- During matching with a 'slow' client, CPU usage is around 50-55%. However, this is a dual-core CPU, and a single execution unit appears to use only a single core for this work. Network utilization in this phase is ±0%.

 

So my conclusion is: matching speed is limited by the server's CPU?

Am I seeing this right? I honestly did not expect that...
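One way to double-check the single-core theory, assuming Python and the third-party psutil package are available on the backup server (Task Manager's per-CPU graphs show the same thing):

```python
# Sketch with the third-party psutil package: sample per-core CPU load
# during a matching run to see whether one core is pegged while the
# other idles.
import psutil

for _ in range(10):                         # ~30 seconds of samples
    per_core = psutil.cpu_percent(interval=3, percpu=True)
    print("  ".join(f"core{i}: {p:5.1f}%" for i, p in enumerate(per_core)))
```

On a dual-core machine, one core near 100% while the other idles would line up with the ~50-55% overall reading.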


Yes, scanning and matching are very CPU-heavy processes; the faster the backup computer, the faster this should go.


Well... we were planning on having this server replaced at the end of the year. Might be something we need to do a little bit sooner.

 

And I didn't expect this to be so computationally intensive at all, so I guess I was looking in the wrong direction.

 

Might be something the Retrospect engineers can improve upon as well. You never know with version 8 on the horizon? ;)

