
Real world Groom results -- for those interested


Maser


So, I wondered how long this would take -- now I know, so I'm posting my results of a groom to give people an idea how long a "real world" groom might take.

 

I have a disk media set that backs up my Macintosh clients (28 of them) daily. This set has been running since late May.

 

Prior to the groom, the set looked like this:

 

1729 backups

4,106,877 files

684.9G

(catalog file -- compressed -- 11.5G)

 

I set the groom to keep 60 backups (this is not the "automatic" setting). I know not all my clients have 60 backups, but the small number of them that *are* backed up daily had about 90 backups each.

 

POST GROOM (plus one proactive backup that ran before I got the stats):

 

1705 backups

4,024,476 files

669.6G (so over 15G was groomed)

 

Time to Run: slightly over 45 hours (yes, 45 -- 1 day, 21 hours, 21 minutes).

 

 

The computer running the engine? A 2GHz Core 2 Duo Mac mini with 4GB RAM running 10.5.8, with over 80GB of free hard disk space on the internal drive.

 

YMMV. Two weeks ago I groomed a different media set -- about 490 backups and slightly over 1 million files, but larger on disk (over 730G) -- and that groom took about 20 hours to run.

 

 

So, in my testing with large sets, grooming speed seems to depend more on the number of backups and the number of files it has to examine than on the set's size on disk.

 

I have a 3rd set (currently 383 backups, about 500G, but only 62,123 files) that I have yet to run a groom on (EMC knows why). Once I do, I'll update this thread with those results for further comparison. If I'm right, because there are so many fewer *files* in this set, it should take a lot less time to run.

 

 

What I don't know yet (and I'd like to hear somebody with experience chime in on this): since I continually want to "groom 60" on a set that's going to have so many backups and so many files...

 

 

Maybe I'm better off *not* putting all my clients in one set (forgoing "matching") and just putting each client in its own set -- specifically to be able to run grooms faster.

 

Thoughts from anybody who might have gone that way?

 

 

 


Oh, I have one other thing to add to all the above that I think is a bug (which I've submitted to EMC).

 

*During* the large groom, I opened the console often to see how things were progressing.

 

In doing so, I noticed that my other proactive scripts did not seem to be running.

 

When I looked closer at "Scripts", I found that for my other two scripts (which should have been running), both the Sources *and* the Media Sets had been removed from the scripts!

 

I was able to manually put things back into the scripts (while the groom was still running) and then those two proactive scripts started working again.

 

 

Others have reported this bug, where scripts seem to lose values. I don't necessarily think it's groom-related, but it may be some kind of memory leak or something...


Thanks for the info. I, too, am finding grooming seems to scale with backups and files, rather than size.

 

My largest backup set has 30,000 files in it, and is nearly 5.5TB in size (it's an Xsan volume with a comparatively small number of very large files). It has roughly 100 backups in it - a groom takes ~20 hours.

 

This is on an Xserve with two dual-core Xeons and 2GB RAM.

 

I also notice a similar thing with scripts losing their target media sets. For me it happens randomly. Everything will be fine for weeks, then blam - every other day scripts will be failing to run because their backup set isn't available.

 

It also seems to happen if I restart the Retrospect Engine for some reason (it hangs, crashes, or I have to restart the SAN).


I switched from a single massive media set to multiple sets. I haven't done any side by side comparisons, but things seem to run better this way.

 

In my current setup, I have 21 media sets across 3 physical disks with 36 sources and 37 scripts.

 

Each network source has its own proactive script. Where the machines are grouped together, they share a media set as their common target. Each media set is limited to 500GB and set to groom to 10 backups.

 

I have one Copy Media Set script that copies the data from the media sets off-site to a NAS. It has been working well for a few months and when things do go wrong (scripts losing their contents, media sets becoming corrupted) the modularity keeps one error from taking down the whole setup.


Maybe I'm better off *not* putting all my clients in one set (forgoing "matching") and just putting each client in its own set -- specifically to be able to run grooms faster.

 

Thoughts from anybody who might have gone that way?

 

I'm new to Retro 8 for Mac, but on my Windows 7.6 system I grouped my 82 clients into about 10 sets. I still get the benefits of matching, grooming was then *possible* on the Windows machine, and I could run more concurrent backups. The other big win was that when a catalog file got corrupted (this happened about once a week), I could just restore that one catalog from backup, and fewer clients would need to redo their latest backups.

 

I'm just moving to Retro8 now, but I'm setting it up similarly. Mostly I'm looking for concurrent execution and the ability to keep the catalog sizes in check.


Does anybody have results for the Retrospect Groom setting with multiple sets? My issues are with file size and duration of the groom cycle.

 

My results:

Manual groom of a set with 1+ million files and about 700GB...

+ Executing Grooming at 8/9/2009 9:03 PM

Grooming Media Set RS8_Daily Backup Set 4...

Groomed 122.0 GB from Media Set RS8_Daily Backup Set 4.

8/10/2009 12:04:53 PM: Execution completed successfully

Duration: 15:01:47 (00:06:04 idle/loading/preparing)


One last "real world" groom.

 

I have another set - 516.9G -- 403 backups -- but only 63,644 files.

 

Similar size and number of backups to my set that took 20 hours to groom -- but *way fewer* files.

 

I did another "groom 60" on this set. It removed 50G and took 47 minutes.

 

 

So, to me, groom speed is related more specifically to the *number* of files in the set and not so much to its size or number of backups.
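
For what it's worth, a quick back-of-the-envelope check with the numbers already posted in this thread points the same way (this is just illustrative arithmetic on the figures above, nothing Retrospect reports):

```python
# Rough files-per-hour rates for the three grooms described in this thread.
# File counts and durations are copied from the posts above; Set 2's file
# count ("slightly over 1 million") is approximate.
grooms = {
    "Set 1 (684.9G)": (4_106_877, 45 + 21 / 60),  # ~4.1M files, 45h 21m
    "Set 2 (730G+)":  (1_000_000, 20.0),          # ~1M files, ~20 hours
    "Set 3 (516.9G)": (63_644, 47 / 60),          # 63,644 files, 47 minutes
}

for name, (files, hours) in grooms.items():
    print(f"{name}: ~{files / hours:,.0f} files/hour")

# All three land in the same rough ballpark (tens of thousands of files per hour),
# even though the run times range from 47 minutes to 45+ hours -- which is what you
# would expect if groom time tracks file count rather than gigabytes.
```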

 

 

Hope these results were informative to some!


Could people check to see if Retrospect 8 when grooming is using more than one processor core?

 

So far I have never seen Retrospect 8 use more than the equivalent of one core, even when trying options like encryption and software compression. This is a huge waste of an 8-core Mac Pro; I might as well have used a Mac mini. :(

 

On my über Mac Pro the CPU utilisation remains so low that you can barely see it on the graph, even when Retrospect 8 is supposedly working flat out. (This is not a good thing.) :(


I could certainly check that the next time I do a groom.

 

What's the best way to determine this?

 

Have Activity Monitor running at the same time Retrospect is doing its business. If the Retrospect process is only showing a maximum of about 100% CPU utilisation, it is not using more than the equivalent of one CPU core; 200% would be two full cores.
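
If you would rather sample it from the command line than watch the Activity Monitor window, a quick sketch like this works too -- it just shells out to `ps`; the "retroengine" process-name match is an assumption, so substitute whatever name Activity Monitor shows for the engine on your machine:

```python
import subprocess
import time

def sample_engine_cpu(name="retroengine", samples=6, interval=10):
    """Print the CPU% of any process whose name contains `name`, a few times over.

    On a multi-core Mac, a sustained value above ~100% means the process is
    using more than the equivalent of one core.
    """
    for _ in range(samples):
        out = subprocess.run(["ps", "-A", "-o", "%cpu=,comm="],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            cpu, _, comm = line.strip().partition(" ")
            if name.lower() in comm.lower():
                print(f"{comm.strip()}: {cpu}% CPU")
        time.sleep(interval)

sample_engine_cpu()
```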

 

As (so far) Retrospect seems completely incapable of using more than one CPU core, my Mac Pro is effectively reduced to a 'single core' 2.26GHz machine, rather than the 8-core machine it actually is.

 

I have mentioned this in another thread, and I pointed out myself that some operations cannot, by their nature, be shared amongst multiple CPU cores -- but plenty could be. This urgently needs fixing, as we are not going to get a '10GHz' CPU any time soon. Such a major design handicap might have been forgivable, or at least understandable, in the old Retrospect 6, but in the supposedly all-new Retrospect 8 -- with its completely new engine that is supposed to do multiple things at the same time and be a lot faster [hah!] -- it is inexcusable.


Here's a screenshot of Activity Monitor while a groom is running, during the "matching files" stage.

 

The CPU for "retroengine" has gone as high as 148% while I've been watching, but normally it bounces between 100% and 130%.

 

Thanks, that does appear to show it using more than one CPU core. For what it's worth, I discovered today, after yesterday's backup finished (later than normal), that it had done some grooming for the first time ever. I had not expected it to groom until next week, so I did not check the CPU activity; I will check it next time.

 

Rather than the horror stories of 40+ hours I have seen here, it fortunately only took an extra 10 hours (approximately). What was unexpected was that it seemed to do the grooming only after the backup had finished. I would have expected it to groom first and then back up, so as to free up space for the new backup first. The order it (apparently) uses does not seem logical, since now it will not have enough space for the next backup despite grooming.

 


40 hours isn't necessarily a "horror story". If grooming is based on the number of *files* (and actual "backups") in a media set, I suspect that the more files you have, the longer it will take to groom things.

 

And it depends on how you have grooming set. I suspect the "automatic" settings are probably faster to work with than the "specific number" settings, too...

 

Grooming automatically kicks in when some specific "free space" criteria are met. It may be that you had enough free space before you started your backup, but then did not after the backup was finished?

 

 

I copied something from a post way back when about when automatic grooming will run:

 

CASE 1: Catalog is on the same disk as the .rdb file.

 

Grooming will be launched when one of the following conditions is true.

 

1. free space < (the size of catalog file * 5)

 

2. free space < 1G

 

CASE 2: catalog and rdb files are on different disks

 

Grooming will be launched when the free space is about 10M.

 

The algorithm is complex and depends on free space, capacity, the location of the catalog file, etc.; the above is an approximate description.
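
Just to make that concrete, here's how I read those two cases as a minimal sketch -- the thresholds come straight from the description above, but the function and argument names are mine, not anything inside Retrospect:

```python
import shutil

GB = 1024 ** 3
MB = 1024 ** 2

def should_auto_groom(member_path, catalog_size_bytes, catalog_on_same_disk):
    """Approximation of the auto-groom trigger described above (not Retrospect's actual code)."""
    free = shutil.disk_usage(member_path).free  # free space on the disk holding the .rdb files
    if catalog_on_same_disk:
        # CASE 1: groom when free space falls below 5x the catalog size, or below 1G
        return free < catalog_size_bytes * 5 or free < 1 * GB
    # CASE 2: catalog on a different disk -- groom only when free space is down to ~10M
    return free < 10 * MB
```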

 

 


Anything longer than 24 hours is a horror story. In my case I have about 4.6 million files in the backup and about 6 incremental backups for about 12 sources.

 

I am currently using the auto grooming option.

 

It is now apparently doing another groom, but as always in my case the Retro Engine stubbornly refuses to use more than about 100% CPU, i.e. apparently only one core.


When I perform a groom using Retrospect for Windows on a large media set, it can easily take between 24 and 48 hours. The Mac version will work the same way.

Is that media set used by your backup scripts? I assume not, since R8 is chewing on it for two days. In which case, why bother grooming it?

 


Grooming with Retrospect's default policy can take quite a long time when you have a 'deep' backup set.

 

Depending on how far back you want to go, I find it more efficient to use the "Groom to remove backups older than" option. Say you use alternating A & B backup sets; you can set it, for example, to 10. Grooming then occurs more often but shouldn't take as long anymore. You can now go 20 days back with a 'resolution' of 2 days, and you can always recycle one of the two sets while still having the other as a recent backup.
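
A small sketch of the arithmetic behind that suggestion, assuming "10" works out to each set keeping its last 10 backups and the A & B sets alternating daily (that's my reading, not an official formula):

```python
def coverage_days(num_sets=2, keep_per_set=10, backups_per_day=1):
    """How far back you can restore with round-robin sets that each keep `keep_per_set` backups."""
    days_between_hits_per_set = num_sets / backups_per_day  # each set receives a backup every N days
    return keep_per_set * days_between_hits_per_set

# Two alternating sets, each keeping 10 backups, one backup per day overall:
print(coverage_days())  # -> 20.0 days back, at a 2-day 'resolution' within each set
```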

 

Setting Retrospect up to accommodate your needs means weighing a lot of parameters. How 'deep' do I need to back up? How long do I need to retain my data? How much storage do I have available? And also: how big is my available time frame? There will always be a weakest link in the chain. It's your job to "optimise the best compromise" -- that's not something Retrospect can do for you. You need to figure it all out yourself and set the program up accordingly.

 

If you find your requirements are not achievable within the available backup time frame, maybe you need to consider faster storage or even a second backup server.

 

