
Realistic Time for 'Copy Media Set'



Hi, we're running a 'Copy Media Set' script to copy our entire current media set onto an external 6TB drive to store offsite as a backup-backup. But it seems to be taking unusually long, and we wanted to see what to expect.

 

Our Media Set comprises 5 hard drives taking up almost 6TB of total space, and the catalog file is 123GB. We have Retrospect 13 on a Mac Mini Server. All hard drives are good-quality 7200rpm drives connected with a FireWire 800 dock/interface. While we're running the Copy Media Set script, both the source and destination HDs are daisy-chained with identical FW800 docks.

 

In the Copy Media Set, we did NOT set encryption or software compression, as we thought this would tax the processor too much.

 

:: The first drive in the set was 1TB. This took about 17hrs to complete.

 

:: But our second drive in the set, which is 2TB, is at 76hrs & counting. It still has 500GB to go!

 

- Is this unusually long for just copying files over, or can it take this long depending upon hardware?!?

 

It is still churning away and hasn't frozen up, which Retrospect can often do for us. So I don't want to touch it after this long, unless there's something I can adjust between this hard drive and the next.

 

In Retrospect, I see the status repeatedly rotating between 'Copying' & 'Updating Catalog File'. It doesn't seem to copy many files at a time from what I can tell. When the 'Performance' shows anything, most of the time it is: 102.4KB/m. If this is 102 Kilobytes per minute, we're in trouble!
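For scale, here is a quick back-of-the-envelope sketch (in Python, and assuming the readout really were a literal 102.4 kilobytes per minute, which it almost certainly isn't):

```python
# How long a 2TB drive would take at a literal 102.4 KB per minute.
# (Assumes decimal TB for the drive and binary KB for the readout; at this
# scale the exact convention makes no practical difference.)
drive_bytes = 2 * 10**12                 # 2TB drive
rate_bytes_per_minute = 102.4 * 1024     # a literal 102.4 KB/min

minutes = drive_bytes / rate_bytes_per_minute
years = minutes / (60 * 24 * 365)
print(f"{minutes:,.0f} minutes, i.e. about {years:.0f} years")
# -> 19,073,486 minutes, i.e. about 36 years
```

Since the run is clearly moving far more data than that, the readout presumably means something else (perhaps megabytes per minute), though that is only a guess.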

 

- Any insight here on how to A) proceed, and B) modify anything so it's not so painful in the future, would be welcome.


If what Lennart Thelander says in the first paragraph of the post immediately above is correct, then jethro should have followed the instruction fragment "with the option Match Source Media Set to destination Media Set unchecked ..." in step C2) of this post.  But, although that linked-to post doesn't say so, he should also have had the option "Don’t add duplicate files to the Media Set" unchecked (IMHO the fact that this option defaults to checked is either a careless holdover from Backup scripts—where it forms "the other key component of Retrospect’s Smart Incremental backups"—or a stupid attempt to save space on the destination).

 

And if those two things don't speed up Copy Media Set, then the whole idea of creating an off-site copy of an existing on-site Media Set—whether for "seeding"/upload to the cloud or for physical off-site storage—using Copy Media Set is impractical!

 

BTW, jethro, did you either manage to repair or to bypass the bad disk member of your on-site Media Set, which you discussed in this thread?

 

P.S.: Inserted the second sentence, about leaving the option "Don’t add duplicate files to the Media Set" unchecked, into the first paragraph.  Based on my having set up a test, which has now finished running, that option (unlike "Match Source Media Set to destination Media Set") defaults to checked.

 

P.P.S.: jethro, you should wait and read my post below this, which I have updated now that my test is finished.  Based on the results, I think you should kill your current Copy Media Set script and restart it from scratch, with the setup per my first paragraph.


Being a "guy at home"—with the setup described in this post—and responsible only to myself, I can run tests.  So I ran one this evening, doing a Copy Media Set of my 235GB "Media Set White"—recreated from a Recycle backup of 6 drives today—into a brand-new "Media Set Violet". I only have spare space available on two USB3 drives, one of which is "G-Drive Blue"—last backed up to a week ago—and the other of which is "G-Drive White"—which already contains the "Media Set White" the test is copying from (I put "G-Drive Red" into my bank safe-deposit box this morning, and I wasn't prepared to dig out my non-existent diamond drill and nitroglycerine just to facilitate this test).  I ran out of spare space on "G-Drive Blue", and—after spending 10 minutes in another room before I noticed the Add Member message—had to put the second member of "Media Set Violet" onto the spare space on "G-Drive White".

 

So we have to allow for some inefficiency, given that the second half of the Copy Media Set run was copying onto the same drive it was copying from.  Nevertheless, the Copy Media Set run took 2.5 hours—not counting the Add Member wait time—to copy 255GB.  At that rate the Copy Media Set for jethro's first drive should have taken under 10 hours, not 17 hours.  Of course my "backup server" is an inherited 2010 Mac Pro (5,1) with the cheapest available 4-core processor, which is still more powerful than jethro's "backup server"—although it's probably only using one core because I'm a destitute old fogey with Retrospect 12.5 Desktop Edition.  Still, if I were jethro, I'd kill the Copy Media Set run he has going and start again—with both the options "Match Source Media Set to destination Media Set" and "Don’t add duplicate files to the Media Set" unchecked.
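For reference, a minimal sketch of the throughput arithmetic behind that comparison, using the figures quoted above (255GB in 2.5 hours, and a nominal 1000GB for the first drive):

```python
# Throughput of the test run above, and what a 1TB member "should" take at
# that rate (figures as quoted in this thread).
test_gb, test_hours = 255, 2.5
rate_gb_per_hour = test_gb / test_hours          # ~102 GB/hour

first_member_gb = 1000                           # jethro's 1TB first drive
print(f"test rate: {rate_gb_per_hour:.0f} GB/hour")
print(f"1TB at that rate: {first_member_gb / rate_gb_per_hour:.1f} hours (observed: about 17 hours)")
```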

 

P.S.: Slightly revised fourth sentence in second paragraph; if jethro has the Mac mini Server (Mid 2011) that is item #11 on this page, he has just as many cores as my 2010 Mac Pro (5,1)—even though his processor is only 2.0GHz vs. my 2.6GHz.


This afternoon I've just rerun the test I ran last night, only doing the Copy Media Set to a single FireWire 800 drive destination—which I remembered this morning had available space—instead of onto destination members on two separate USB3 drives (one of which was also the source drive).  The time is about the same: 2.7 hours for 261GB—6GB more than yesterday, because I ran my usual "Sun.-Fri. Backup" No Media Action script for my MacBook Pro onto "Media Set White" early this morning.  The "Files remaining" (whatever that is) reduce the total actually copied by about 20GB for each run.

 

So why did my Copy Media Set runs get approximately 100GB/hour, whereas jethro's run got approximately 60GB/hour for his first drive and is getting approximately 20GB/hour for his second drive?  The three variables that differ between my runs and jethro's runs are:

1) I had "Don’t add duplicate files to the Media Set" unchecked, while he probably has it checked.

2) My "backup server" machine has an intrinsically faster CPU than jethro's "backup server" machine, although both of our machines probably have a 4-core CPU.

3) My Copy Media Set runs copied less data than jethro's runs on his first and second volumes did, although the speed drop-off seems less than linear for his first volume and more than linear for his second volume—and of course the question remains why the amount of data copied should cause any speed drop-off at all.
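Spelling out where those approximate rates come from (a quick sketch; the sizes and times are the ones reported in this thread, with jethro's second drive taken as roughly 1.5TB copied so far):

```python
# Approximate copy rates quoted above, derived from the reported sizes and times.
runs = {
    "my FireWire 800 rerun":     (261, 2.7),   # GB copied, hours
    "jethro's drive 1 (1TB)":    (1000, 17),
    "jethro's drive 2 (so far)": (1500, 76),   # ~2TB drive with ~500GB still to go
}
for name, (gb, hours) in runs.items():
    print(f"{name}: {gb / hours:.0f} GB/hour")
# -> roughly 97, 59, and 20 GB/hour respectively
```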

 

I'm going to make a phone call to Mayoff, to see if he can provide a judgement as to the comparative importance of the three variables.

 

P.S.: Added third variable that differs—the amount of data copied—as a final sentence in the second paragraph.


As soon as Retrospect Inc. opened this morning, I phoned Mayoff.  He suggests that jethro phone Support, (888) 376-1078 or (925) 476-1030.  He says that he's seen situations in which slowdowns like this turn out to be hardware or overtaxed-resources problems.

 

Good luck, jethro, and please post to this thread what Support tells you.


Prompted by what Mayoff said on the phone this morning, I started to think about overtaxed resources other than CPU speed—namely RAM.  So early this afternoon I reran the same test as in post #5 in this thread, but with Activity Monitor also running.  A year ago, when I first started benchmarking the "backup server" on my Mac Pro for my thread on Ars Technica, I added a Real Memory column to Activity Monitor because I—being an old fuddy-duddy—do not completely believe in the reality of paging (it's virtual memory, after all).  So, in between folding socks out of the dryer in the bedroom where my Mac Pro sits, I looked at the Real Memory used during the Copy Media Set run.

 

I noticed that the biggest user of Real Memory was the RetrospectInstantScan process, not the RetrospectEngine process.  And the Real Memory used by RetrospectInstantScan went up from about 720MB to about 820MB.  It didn't keep climbing from there, however, as I thought it might.  Still, 820MB is a fair amount of RAM by any standard.  It didn't make any difference on my Mac Pro "backup server", because I upped its RAM from 3GB to 7GB when I inherited it in late spring 2015.  But it might make a difference on jethro's Mac mini Server, which only comes with 4GB RAM if it's the model I linked to from the P.S. of post #4.

 

Therefore I recommend that jethro, and anyone else running a Copy Media Set script, first go to System Preferences->Retrospect and turn off Instant Scan for the duration of the script run.  I also recommend that jethro make an investment in additional RAM; my additional 4GB cost me US$43 from—IIRC—Other World Computing.

 

P.S.: Look at this thread.  Problem sound familiar?  Note that the link for item #11 on the Low End Mac page (linked to in the P.S. of post #4 in this thread) goes to a Low End Mac page which says, in its next-to-last paragraph, that OWC has found you can upgrade that model to either 8GB or 16GB—and which links to the OWC page.


Thanks for the responses. It's a lot to read through right now, so I may have to go back through it when I get a bit more time.

 

But I did check Activity Monitor, and surprisingly, Retrospect wasn't maxing out the CPU (Core i7, 2GHz). It mostly ranged from 25-65%. We see Retrospect freeze up and max out the CPU when just doing normal tasks or even opening the program (it takes 5-6 minutes when starting just to be usable). So we're scared to pause or try to stop the backup, as I'd be surprised if doing so didn't completely freeze Retrospect. And RAM isn't an issue; we have 16GB in our server (Retrospect is using under 1GB).

 

At this point, we're on to the 3rd HD of 5, which is 1TB, and it's running just as slow. We're at about 400GB after 24hrs. So it looks like it wasn't due to a faulty drive (drive #2 was the one we thought we'd have to have repaired).

 

As I'm heading out for the Holidays tomorrow, and have a ton to get done before then, we'll have to just let this process finish out (hopefully by the end of the week), and look into doing it differently in 2017 when we are going to start a completely new set to alleviate some of the issues with our 5-year-old 6TB set.

 

When we get our new Media Set going at the beginning of the year, we are going to do a weekly Copy Media Set for an offsite HD (not the same one we're running now). So I'll have to look more into exactly what "Match Source Media Set to destination Media Set" and "Don’t add duplicate files to the Media Set" do to determine if they can be safely left off without losing the flexibility that's important to us. We'll just need the new Copy Media Set script to be able to complete in less than a work day, as it will be brought in just for that, then taken back home.

 

Thanks for the help!


....

But I did check Activity Monitor, and surprisingly, Retrospect wasn't maxing out the CPU (Core i7, 2GHz). It mostly ranged from 25-65%. We see Retrospect freeze up and max out the CPU when just doing normal tasks or even opening the program (it takes 5-6 minutes when starting just to be usable). So we're scared to pause or try to stop the backup, as I'd be surprised if doing so didn't completely freeze Retrospect. And RAM isn't an issue; we have 16GB in our server (Retrospect is using under 1GB).

 

At this point, we're on to the 3rd HD of 5, which is 1TB, and it's running just as slow. We're at about 400GB after 24hrs. So it looks like it wasn't due to a faulty drive (drive #2 was the one we thought we'd have to have repaired).

 

....

 

When we get our new Media Set going at the beginning of the year, we are going to do a weekly Copy Media Set for an offsite HD (not the same one we're running now). So I'll have to look more into exactly what "Match Source Media Set to destination Media Set" and "Don’t add duplicate files to the Media Set" do to determine if they can be safely left off without losing the flexibility that's important to us. We'll just need the new Copy Media Set script to be able to complete in less than a work day, as it will be brought in just for that, then taken back home.

 

Thanks for the help!

 

 

I see from Wikipedia and his first quoted paragraph that jethro must have the Mid-2011 MacMini(5,3).  Therefore he has as many CPU cores and twice as much RAM as my Mac Pro "backup server", which runs Copy Media Set much faster than jethro's machine does.  Until now I had forgotten about jethro's original post in the 2 March thread, which reported he was having problems opening Retrospect and keeping it running.  From jethro's first quoted paragraph he is still having those problems, so I am wondering, along with Mayoff, whether jethro has a hardware/software issue not specifically related to Copy Media Set.

 

For instance, jethro has double the official maximum RAM for the MacMini(5,3).  Unless jethro just happened to pick up a bargain on a previously-custom-built machine with quadruple the amount of RAM that he should normally need for Retrospect, I am wondering what other software he is running on his MacMini(5,3).  Could it be that he is running Mac OS X Server?  If so, he must be aware that—according to the ever-helpful DovidBenAvraham's WP article—he should be running a Retrospect Edition that supports OS X Server, not merely the Desktop Edition.  Is that why jethro seemed, and still seems, reluctant to contact Retrospect Support about his other Retrospect problems (insert appropriate smiley here)—even last March, when he was still entitled to free telephone support for Retrospect Mac 13?  Or, considering hardware only, is jethro just avoiding the possibility that his machine has a flaky internal hard disk drive—which would cost some money to replace (insert appropriate smiley here)?

 

Before I saw jethro's quoted post #8 in this thread, I did have the additional idea that jethro's presumed leaving "Don’t add duplicate files to the Media Set" checked might be causing cumulative RAM-grabbing by the RetrospectEngine process while his Copy Media Set script is running.  So this afternoon I reran the same test as in post #7 in this thread, except with that option checked.  Intermittently looking at Real Memory (which was almost as unexciting as watching paint dry) in Activity Monitor, I noticed sudden temporary increases during Updating Catalog File phases.  Each temporary Real Memory increase was greater than the increase after the previous Backup was copied, so it is possible that—since jethro's Copy Media Set run must be copying hundreds of Backups of the same Source—his temporary increases might eventually be exceeding the available Real Memory and causing what used to be known as "virtual memory thrashing".

 

I was considering converting the posts I have made in this thread into a Support Case.  However I no longer feel I can do that until jethro posts further particulars on what version of OS X and other software he is running on his Mac mini Server, his Edition of Retrospect Mac 13, and whether he has tested his internal hard disk drive. 

 

In any case, even if jethro fixes his Copy Media Set problem so that it runs as fast as mine does, his proposed weekly run cannot finish within a single working day unless he starts a new Media Set every 8-10 months (depending on how long his workday is).  Do the arithmetic: 6TB accumulated over 5 years is about 1200GB per year, and at 100GB/hour that is about 12 hours of copying per year of accumulated data.  IMHO jethro should instead follow the procedure I suggested in this post, which uses Copy Backup on a weekly basis after an initial Copy Media Set.
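Spelling that arithmetic out as a rough sketch, using the round numbers above (all of them approximations):

```python
# How long a weekly full Copy Media Set would take as a Media Set ages, using
# round numbers: ~6TB accumulated over ~5 years, copied at ~100 GB/hour.
set_size_gb = 6000
years_accumulated = 5
copy_rate_gb_per_hour = 100

gb_per_year = set_size_gb / years_accumulated                  # ~1200 GB/year
hours_per_year_of_data = gb_per_year / copy_rate_gb_per_hour   # ~12 hours

print(f"{gb_per_year:.0f} GB/year -> about {hours_per_year_of_data:.0f} hours of copying per year of accumulated data")
# So a Media Set only 8-10 months old already needs roughly a full working day to copy.
```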

 

P.S.: The test described in my third paragraph ran in roughly the same amount of time as my previous tests, allowing for the additional No Media Action run of my "Sun.-Fri. Backup" script to "Media Set White" that I had made before I ran the latest test.  So leaving "Don’t add duplicate files to the Media Set" checked didn't seem to increase the running time.  OTOH "Don’t add duplicate files to the Media Set" was checked during all three daily runs of the "Sun.-Fri. Backup" script to "Media Set White", so I wouldn't expect there to be any duplicate files in the Source of the Copy Media Set.

 

P.P.S.: Revised last paragraph; jethro's last quoted paragraph says he's going to start a new Media Set at the beginning of 2017.


Hi,

Just a quick follow up. I'll have to jump back on this after the break at the beginning of the year.

 

But we do have a Mac mini Server for which we purchased additional RAM (16GB, high quality). It is running Mac OS X Server 10.9.5. And we just purchased an upgrade to Retrospect 13.5 Server Edition with 10 client licenses. Everything is completely legit.

 

We only have the typical server software running along with Retrospect, nothing else. Surprisingly, the CPU is really not being taxed; I watched it for a bit. And concerning RAM, Retrospect & RetrospectEngine are at 500MB each, and RetrospectInstantScan is at 300MB. Apart from that, it's only system resources. And we're not hitting the swap drive, so we're really OK there too; we're not even at 30% of RAM resources.

 

Concerning long startup times, it always hangs on a message like 'Syncing catalogs' or something. Takes a very long time.

 

Lastly, we are now on to drive 4 of 5, which is a 2TB drive. BUT, I took the overall size of our media set from the Media Sets section in Retrospect, which stated that it was about 5.6TB over 5 members. IN REALITY, it's going to be well over 6TB, probably about 7TB, when completed. Wondering why Retrospect's calculation didn't match reality?!? We're now going to have to get another destination drive to finish this thing out.

 

And our weekly offsite backups SHOULD be only what's new from week to week, right?!? Only the initial backup will be huge.

 

I'll check-in here after the New Year. Thanks for the help!


Thanks for the hardware and software particulars, jethro.  I apologize for implying that you might have penny-pinching tendencies.  I had a hunch that you didn't buy such a fancy Mac mini Server just to run Retrospect, so I would be interested in knowing what "the typical server software" is—aside from OS X Server itself.  That software may be what is causing your Retrospect slowness and crashing.

 

In regard to that, I strongly urge you to contact Retrospect Inc. Tech Support.  First phone one of the numbers I listed in post #6 in this thread.  However, don't stop there; IME A. (if he still works there) isn't the brightest bulb in the chandelier.  Insist on speaking to Mayoff (if necessary, phone again and dial x806 to get him), and file a Support Case by going to www.Retrospect.com and clicking the pane with the telephone icon in it at the upper right.

 

As far as the "long startup times" is concerned, "syncing catalogs" (which presumably you see at the upper left of the Retrospect main window) is shown while the Retrospect Console is obtaining information from the particular Retrospect Engine(s) you are running.  If you don't understand that, DovidBenAvraham had a brief but pithy explanation here in the "Retrospect Macintosh 8" section, starting with the last sentence in the "Powerful new engine" indented item-bulleted paragraph.  Upon booting my Mac Pro "backup server", my Startup Items Retrospect Console app's "syncing catalogs" takes at most a second with my single RetrospectEngine process (I have Retrospect Mac 12.5 Desktop Edition running under OS X 10.10.5); if it is hanging for several minutes on your "backup server", you definitely have a problem that should be discussed with Retrospect Inc. Tech Support.

 

As far as 5.6TB turning out to be over 6TB, you are almost certainly the victim of a decades-old traditional difference in measurement units between programmers and storage-drive-makers.  I ran afoul of it over a year ago, and put an explanation into this post.  For example Retrospect says my three USB3 G-Tech G-Drive Slims each have a capacity of 465GB, but HGST marketed them as 500GB drives.  When I multiply 1024 by itself 3 successive times to get 1TB binary and then multiply that product by 5.6, I get nearly 6.2TB decimal using my handy-dandy Apple Calculator app.  What we have here is what I have termed a "conceptual bug", and I don't think we have a prayer of getting Retrospect Inc.—a spin-off founded by programmers—to fix it.  "Tradition, tradition ..." (about a year ago I took my now-ex-girlfriend, at her request, to a staging of "Fiddler On the Roof").
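To make the unit mismatch concrete, here is a small sketch; the 5.6TB figure is the one jethro quoted from Retrospect, and the 465GB/500GB pair is the G-Drive Slim example above (that Retrospect uses 1024-based units behind decimal-looking labels is an inference from those numbers, not something documented here):

```python
# Binary ("programmer") vs decimal ("drive-maker") units.
TIB = 1024**4   # bytes in a binary terabyte (TiB), which is what Retrospect appears to report
TB = 1000**4    # bytes in a decimal terabyte (TB), which is what drive makers advertise

reported = 5.6                                  # Retrospect's figure for jethro's Media Set
print(f"{reported} TiB = {reported * TIB / TB:.2f} TB decimal")       # ~6.16 TB

# The same effect on a "500GB" drive:
print(f"500 GB decimal = {500 * 1000**3 / 1024**3:.0f} GiB binary")   # ~466 (Retrospect shows 465)
```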

