How I sped up Retrospect backups


My Retrospect server (console and engine) is installed on a single Mac, which sits on our network's 1 dot subnet. Most of my sources reside on our 2, 3, and 4 dot subnets, so from each of those sources it takes two network hops to reach the Retrospect server. Yesterday I added a static (persistent) route to each source that lets it reach the server in one hop, and network backup speed improved dramatically. Retrospect seems much happier this way. The difference between one hop and two is only a couple of milliseconds, but it sure seemed to improve performance. The examples below show backup speed before and after the route was added:

 

Before → After (MB per minute)

299 → 869

325 → 396

255 → 700

436 → 1205
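For anyone wanting to try the same thing: on Mac OS X the one-off form is the BSD `route add` command, run on each client. This is only a sketch; the addresses below are made-up placeholders (the poster's actual subnets aren't given), and a route added this way does not survive a reboot, so you'd need a startup item or launchd job to make it truly persistent.

```python
# Sketch only: build the `route add` command a client would run to reach the
# Retrospect server directly, skipping the intermediate hop.
# Both addresses are hypothetical placeholders, not values from this thread.
server_ip = "192.168.1.10"   # assumed address of the Retrospect server (1 dot subnet)
gateway = "192.168.2.1"      # assumed router directly connected to both subnets

cmd = "sudo route -n add -host {} {}".format(server_ip, gateway)
print(cmd)  # run on each client; wrap in a startup item to persist across reboots
```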

 

Thoughts?


*Really*?

 

What kind of clients are these, and what version of the client software are you running? And what are the specs of your *engine* computer (CPU, RAM, OS, etc.)?

 

I back up a number of 10.6.2 and Windows 7 clients -- only the "Users" folder on each of them -- to a Mac mini running 10.6.2 (which is set for automatic Ethernet networking, but shows "1000BaseT, full-duplex, Standard (1500)" as the settings). All my clients are on the same gigabit network.

 

And for the past few weeks (looking at my activities list), the *fastest* incremental client backup is 263.7 MB/min, from a favorite folder on my 10.6.2-running Xserve on the same switch (one hop).

 

 

I would be *very* curious to know what your setup is where you are seeing these speeds...

 

 

 


All of my clients are at minimum G5s with at least 2 GB of RAM, running Leopard 10.5.8. They are all on the latest client version, 6.3.028.

 

The console/engine Mac is a 2.8 GHz Intel Core 2 Duo iMac with 4 GB of RAM. It is also running Leopard 10.5.8.

 

I am also only backing up the "Users" folder, excluding cache files. I don't define each client's Users folder as a favorite; instead I select it using a rule and exclude what I don't want.


So, you scan the *entire* Macintosh HD and then only back up the Users folder by selecting it in a rule?

 

Can you elaborate on your client setup/scripts in this case?

 

(My engine computer is a 2 GHz Core 2 Duo mini -- also with 4 GB RAM. I can't believe the extra 0.8 GHz is what is making the difference here...)

 

And the *G5 clients* are giving you over 400 MB/min, too?

 

 

And what are you backing up *to*? External FW 800 disk? eSATA connection? USB 3.0 device? ;-)


Yes, I scan the entire HD and have the rule select the Users folder. I suppose setting the Users folder as a favorite would make the scanning phase faster...

 

The script is scheduled to run daily... nothing special. I'm backing up to a FW800 disk connected to the console/engine iMac.

 

G5 clients are well above 400 MB/min.

 

 


Are you doing *anything* to the default network settings of the clients?

 

I'm mystified as to how you are getting such speeds when none of the rest of us are.

 

People who are backing up AFP shares are getting faster speeds than those backing up with the clients, but *nobody* has posted 1 GB/min speeds like you have -- even with software compression turned *off* (which I'm assuming you *must* be doing, right?)

 

 

Can you post a log from the activity that jumped to this:

 

436/1205

 

I'd really like to see how much data was being backed up.

 

 


Correct -- I am not using software compression.

 

Here are a couple of log entries showing speeds from today's backups.

 

3/25/2010 12:09:34 AM: Connected to Xxxxx Xxxxx

* Resolved container Xxxxx Xxxxx to 1 volumes:

Xxxxx Xxxxx on Xxxxx Xxxxx

- 3/25/2010 12:09:34 AM: Copying Xxxxx Xxxxx on Xxxxx Xxxxx

3/25/2010 12:19:08 AM: Snapshot stored, 122.6 MB

3/25/2010 12:19:17 AM: Execution completed successfully

Completed: 160 files, 2.1 GB

Performance: 1069.0 MB/minute

Duration: 00:09:42 (00:07:46 idle/loading/preparing)

 

3/24/2010 10:52:35 PM: Connected to Xxxxx Xxxxx

* Resolved container Xxxxx Xxxxx to 1 volumes:

Xxxxx Xxxxx on Xxxxx Xxxxx

- 3/24/2010 10:52:34 PM: Copying Xxxxx Xxxxx on Xxxxx Xxxxx

3/24/2010 11:00:46 PM: Snapshot stored, 130.6 MB

3/24/2010 11:00:56 PM: Execution completed successfully

Completed: 356 files, 3.5 GB

Performance: 1503.5 MB/minute

Duration: 00:08:21 (00:06:01 idle/loading/preparing)
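A note on reading these logs: the reported "Performance" figure appears to be computed over the active copy time (the duration minus the idle/loading/preparing portion), not the whole run. This is an inference from the two excerpts above, and the printed sizes are rounded, so the arithmetic only matches approximately:

```python
# Recompute the per-run MB/min from the two log excerpts above, assuming the
# rate is taken over active (non-idle) time. Sizes are the rounded GB values
# Retrospect printed, so results land within a few percent of the reported
# 1069.0 and 1503.5 MB/min figures.

def active_rate(mb_copied, duration_s, idle_s):
    """MB per minute over the non-idle portion of the run."""
    active_min = (duration_s - idle_s) / 60.0
    return mb_copied / active_min

run1 = active_rate(2.1 * 1024, 9 * 60 + 42, 7 * 60 + 46)  # ~1112, reported 1069.0
run2 = active_rate(3.5 * 1024, 8 * 60 + 21, 6 * 60 + 1)   # ~1536, reported 1503.5
print(round(run1, 1), round(run2, 1))
```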


I posted a few months back about my speeds being over 1 GB/min on my server systems. This continues to be true when doing a large backup. When you do an incremental backup of only a couple hundred megabytes, the transfer doesn't really get a chance to "spin up" to the maximum transfer rate.

 

My previous post about performance

http://forums.dantz.com/showtopic.php?tid/33012/post/135868/hl//fromsearch/1/#135868


So you aren't doing compression -- I'm sure that's what makes my speed different from yours (at some level).

 

Out of curiosity (not that you *have* to do this) -- would you be willing to turn software compression on for a day or so to see how much of a hit that really is?

 

 

As for the "spin up" -- it's true that the majority of my incremental backups are small, but even the "solo" backups I run in the middle of the night, which have a small number of large files (think "outlook.ost" and "archive.pst"), rarely get above 200 MB/min.

 

 

I wonder how much of a hit compression really *is* on a 2 GHz Core 2 Duo Mac. I would think it shouldn't be *that* much, but maybe it's something I should test...

 


Oh, and for those speeds --

 

It's only taking you about 10 seconds to store the snapshot -- for me, it takes about 4 minutes from when the log says "Snapshot stored" to "Execution completed successfully".

 

 

I actually just watched a backup here -- the client backed up 214 MB of incremental files. When the *file* backup was done, it was running at about 300 MB/min. Backups (even with compression) actually seem to go fast.

 

However, when the execution had actually fully completed (meaning writing snapshots, compressing the catalog file, etc.), the *final* speed for the activity was 39.7 MB/min.
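Working backwards from those figures shows just how much the post-copy work dominates here. Using only the numbers from the backup described above (214 MB copied, roughly 300 MB/min during the copy phase, 39.7 MB/min final):

```python
# Estimate how long this activity spent on snapshot/catalog overhead versus
# actually copying files. All three inputs come from the post above; the
# 300 MB/min copy rate is an eyeballed figure, so the split is approximate.

mb_copied = 214.0
copy_rate = 300.0    # MB/min observed during the file-copy phase
final_rate = 39.7    # MB/min over the whole activity

copy_min = mb_copied / copy_rate    # ~0.71 min spent copying files
total_min = mb_copied / final_rate  # ~5.39 min for the whole activity
overhead = total_min - copy_min     # ~4.68 min of snapshot/catalog work
print(round(copy_min, 2), round(total_min, 2), round(overhead, 2))
```

In other words, under a minute of actual copying and well over four minutes of closing-out work -- which is why the final activity rate collapses.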

 

 

Are you compressing your catalog files? For these really fast backups, how large is the media set/catalog file?

 

Are your media sets configured for grooming?

