
Staged Backup strategies and advice



Hi there. I'm hoping to implement a "staged backup" strategy at my workplace, backing up to Disk Sets and transferring those to LTO-6 Tape Sets. Until now I've been running Proactive backups directly to multiple tape sets, but it's too complicated and inefficient.
Currently we have 12 clients:

5 are typical "Office" machines (PCs plus a MacBook). A few gigabytes per day.

4 are "Production" workstations (media post production). Huge files. A single video shoot can bring in 500GB of new files to back up.

3 are “Remote” servers (email and web sites). Several gigabytes to start, but smaller data every day after. I don’t care about saving versions of these files, it’s only for disaster recovery.

We average about 200GB of new data per week, but that can vary greatly depending on jobs.

Here’s my plan:
- Create a new Disk Set for "Office"
- Create a new Disk Set for "Production"
- Create a new Disk Set for "Remote"
- These would back up daily to a hard drive on our server
- Set "Grooming" on each set to keep the last 20 backups. That should allow enough slack for holidays, vacations, etc. interfering with the schedule.
- On Mondays, run “Transfer Backup Sets” of “Office” and “Production” Disk Sets to an Off Site Tape Set. When finished, store the tape Off Site until next Monday.
- On Wednesdays, run “Transfer Backup Sets” of “Office” and “Production” Disk Sets to an On Site Tape Set. This will allow for future restoring of files older than what’s left on the hard drives after grooming, without needing the Off Site tapes brought in (sometimes we have to restore projects fairly quickly, plus it doesn’t hurt to have two sets!).
- On another day during the week, run a "Transfer Backup Sets" of the "Remote" Disk Set to a separate Off Site Tape Set. I don't think I want the "Remote" set backed up with the others; we'll likely never need these, and it would slow down searches and restores of our media files. This could even be done monthly, depending on how much risk we're willing to accept.
- Then at some point when it’s appropriate (6 months? 3 months?) I should be able to start a new On/Off Tape Set, update the scripts to point to the new Tape Sets and off we go?
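For sizing the grooming window in the plan above, a quick back-of-the-envelope calculation helps. The numbers below are illustrative guesses loosely based on the figures in this thread (about 200GB of new data per week), not measurements:

```python
# Rough capacity estimate for a groomed disk set as described above.
# The per-backup and initial-full figures are hypothetical examples;
# adjust them to your own measured data rates.

def required_headroom_gb(avg_new_gb_per_backup, kept_backups, initial_full_gb):
    """Disk space a groomed set needs: the initial full backup plus the
    incremental data retained across the kept snapshots."""
    return initial_full_gb + avg_new_gb_per_backup * kept_backups

# Hypothetical "Production" set: a 2 TB initial full, ~30 GB of new media
# per daily backup, grooming set to keep the last 20 backups.
print(required_headroom_gb(30, 20, 2000))  # 2600 (GB)
```

Worth noting: if a single shoot drops 500GB in one day, that spike stays on disk for the whole 20-backup retention window, so it pays to size the destination drive generously.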

How important is it to not have the Disk Sets backing up while running the Transfer Sets operation? Does Retrospect wait for them to finish? Or does it freak out and corrupt the data or backup sets? Should I set it to only run certain backups at certain times? Most of our computers are on overnight. Any other downsides I should know about?

Thanks for any replies.


1 hour ago, backy said:

How important is it to not have the Disk Sets backing up while running the Transfer Sets operation? Does Retrospect wait for them to finish?

I think Retrospect will wait for the transfer to finish.

1 hour ago, backy said:

I should be able to start a new On/Off Tape Set, update the scripts to point to the new Tape Sets and off we go?

You don't even have to update the scripts. Just use the "New Backup Set" backup (or transfer) as outlined here, and Retrospect will update the script for you: 

https://www.retrospect.com/en/documentation/user_guide/win/fundamentals#backup-actions


Lennart_T,

Unfortunately Retrospect won't wait for the Transfer to finish before running the Backup.  And that has the unfortunate consequence discussed beginning with the second substantial paragraph of this OP in a January 2017 thread.  Since that post is phrased in terms of the Retrospect Mac terminology, here's another post that backy can use for translation to Retrospect Windows terminology.

My Support Case giving a product suggestion for overcoming the consequence was ignored.  Therefore I'd suggest that backy use your  "New Backup Set" suggestion, even though that would create a complication.


32 minutes ago, DavidHertzberg said:

Unfortunately Retrospect won't wait for the Transfer to finish before running the Backup.

That's bad.

One workaround would be to schedule both the backup and the transfer to the same execution unit. Then the backup will have to wait for the transfer to finish (or vice versa).
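In other words, an execution unit behaves like a single-worker queue: two jobs assigned to the same unit can never overlap. A minimal sketch of that scheduling idea (this models the concept only; it is not Retrospect code, and the job names are made up):

```python
import queue
import threading

# One "execution unit" modeled as a single worker pulling jobs off a queue.
# Jobs assigned to the same unit run strictly one at a time, so a backup
# queued behind a transfer waits for the transfer to finish.

execution_unit = queue.Queue()
log = []

def worker():
    while True:
        job = execution_unit.get()
        if job is None:  # sentinel: shut the worker down
            break
        log.append(f"start {job}")
        log.append(f"finish {job}")
        execution_unit.task_done()

t = threading.Thread(target=worker)
t.start()
execution_unit.put("transfer Office -> tape")
execution_unit.put("backup Office -> disk")  # waits for the transfer
execution_unit.put(None)
t.join()
print(log)
```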


Thanks for the replies! I'm learning a lot. Also I forgot to mention I'm using Retrospect Multiserver 17.5.2.103, on Windows Server 2012 R2, in case that helps. The production computers are mostly Macs.

 

51 minutes ago, Lennart_T said:

You don't even have to update the scripts. Just use the "New Backup Set" backup (or transfer) as outlined here, and Retrospect will update the script for you: 

https://www.retrospect.com/en/documentation/user_guide/win/fundamentals#backup-actions

Thanks. I've read that section now, and I see the option "New Backup Set" when I edit the Transfer Backup Sets script in Retrospect. Cool.

 

4 minutes ago, Lennart_T said:

One workaround would be to schedule both the backup and the transfer to the same execution unit. Then the backup will have to wait for the transfer to finish (or vice versa).

Understood. Or I guess I could schedule the Proactive backups to not run on the days when I'm running the Backup Set Transfers, but I feel like maybe that brings its own problems.

I'm also learning now about "Storage Groups":

https://www.retrospect.com/en/support/kb/storage_groups

So instead of creating separate Disk Sets for "Office" and "Production" and "Remote", I could make a Storage Group and back them all up at the same time. Is that correct? Because that sounds like a massive time saver, depending on the speed of the network and the target drive, of course.


backy,

I was going to make the same suggestion as Lennart_T yesterday afternoon in an additional paragraph in this preceding post—but I had to leave for a dental cleaning appointment.  The screenshot at the top of page 176 in the Retrospect Windows 16 User's Guide (I'm referring to that because the Retrospect 17 User's Guides have been subject to the attentions of the StorCentric Slasher—e.g. in the last paragraph of that linked-to-post) shows where to specify the Execution Unit for a Backup script.  The screenshot on page 210 shows the same thing for a Transfer Backup Sets script.

However you can't set the Execution Unit in a Proactive script that uses a Storage Group as a destination.  That's because—as briefly explained in the first three sentences of the last paragraph of this post in another thread—a Storage Group is a magnificent kludge (IMHO) for enabling interleaved backups of different machine-drive Sources using a single Proactive script, rather than forcing the administrator to create a separate Proactive script for each machine Source; the enabling is done by using the multi-threading capability (expressed as Execution Units) of the Retrospect "backup server" Engine.

There are two tradeoffs, however.  The first is that, when the Knowledge Base article uses the term "volume", it means volume on a particular Source machine.  If your 12 Source machines have only one volume each, they would just fit within the limit of 15 Execution Units your "backup server" could—given around 20GB RAM—run simultaneously.  But the Proactive script will create a separate Backup Set component of the Storage Group for each machine-volume combination; I've tested this on Retrospect Mac—doing so because the KB article seemed unclear.
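To put rough numbers on the RAM figure above (the ~20GB and 15-unit figures are this thread's own estimates, not official Retrospect requirements):

```python
# Ballpark per-unit memory cost implied by the figures above:
# ~20 GB of RAM supporting about 15 simultaneous Execution Units.
# These are this thread's rough numbers, not official requirements.

ram_gb = 20
max_units = 15
per_unit_gb = ram_gb / max_units
print(round(per_unit_gb, 2))  # roughly 1.33 GB per unit

# By the same estimate, a 16 GB server (like the one described
# down-thread) would support about 12 units.
print(round(16 / per_unit_gb))  # about 12
```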

The second tradeoff is that all the initial Members of a Storage Group's component Backup Sets must fit on a single Destination drive.  At least—using Retrospect Windows—the KB article says you can designate an individual one of those component Backup Sets as the Source for a Transfer script.  (As the KB article also says, you can't do that designation using Retrospect Mac—IMHO because the StorCentric acquisition in June 2019 prevented the engineers from fully completing the Retrospect Mac GUI for Storage Groups.  But I've tested using a Rule—the Retrospect Mac name for a Selector—for restricting a Transfer to a component.)  Unless you can add additional Members to an individual Backup Set component of a Storage Group (I couldn't test this, because I have to work within the inadequate limits of the Retrospect Mac GUI), you'll have to—after successfully running all your Transfer Backup script(s)—run a Backup script with the Recycle Media Action—specifying the No Files Selector—in order to re-initialize the component Backup Sets of your Storage Group before any initial Member of a component Backup Set exceeds its space on the Storage Group's designated initial Member drive.

My personal suggestion is that you abandon the idea of using a Storage Group as a Proactive script Destination, and instead create individual scripts with individual Backup Sets as Destinations for at least each of your "Remote" Sources.  It'll be more work to set up, but give you fewer long-run problems.

 


11 hours ago, DavidHertzberg said:

My personal suggestion is that you abandon the idea of using a Storage Group as a Proactive script Destination, and instead create individual scripts with individual Backup Sets as Destinations for at least each of your "Remote" Sources.  It'll be more work to set up, but give you fewer long-run problems.

Hi David! I can't thank you enough for that amazing post. You've clearly put in the time learning the ins and outs of this application. I'm headed to the dentist myself this Friday. How random.  😬

That all makes sense. I'll avoid Storage Groups for this and use individual backup sets, then transfer those to tape sets.

My goals of this new backup plan are to use our LTO tapes more efficiently (by backing up to hard disk first), and to simplify the process overall, so I'm not the only one in the company that knows how it all works (despite the job security). I'm hoping to test it out next week when I'm back in the office, then implement it once I'm comfortable.

Thanks again.


backy,

Consider using the Data Compression (in software) option (page 357 of the Retrospect Windows 16 User's Guide) on your Transfer scripts.  That'll save tape space.  OTOH the option may slow down your Transfer scripts if you don't have a powerful "backup server" machine; the ancient HP DAT 72 tape drive that I use for backing up my (now-deceased) ex-wife's old Digital Audio G4 Mac has a hardware compression capability, but ancient Retrospect Mac 6.1 doesn't support it.

I learned about Storage Groups to fully answer other administrators' questions, starting with this March 2019 post in a thread whose OP asked about running multiple Backup jobs to the same Backup Set.  I was curious enough to run a couple of experiments on my own home installation, which is how I learned about how Storage Groups really work but also about the limitations of their current Retrospect Mac GUI.

If you liked my "amazing" post that much, you could click the "Like" heart icon at its bottom right.  The head of Retrospect Tech Support runs a contest every few days; I enjoy competing for "most liked content". Lennart_T's second post in this thread is also pretty helpful, so maybe you should "Like" that post too; competition is good.😁

P.S.: If you're going to give the Backup/Proactive script and the Transfer script for a particular Source the same Execution Unit, I wouldn't use the New Backup Set Backup Action.  I haven't used it, but it sounds like a potential complication.


9 hours ago, DavidHertzberg said:

Consider using the Data Compression (in software) option (page 357 of the Retrospect Windows 16 User's Guide) on your Transfer scripts.  That'll save tape space. 

I think tape stations always have hardware compression. I don't even think you can turn it off (in Retrospect).

So trying to turn on software compression is useless, I'm afraid.

 


Hi David. You raise a good point about compression and the speed cost. Our LTO-6 drive has hardware compression that Retrospect supports. Our server is circa 2012: a 2.4 GHz quad-core Xeon with 16GB RAM. We have two or three people using Remote Desktop into their user accounts all day. As I understand it, Retrospect automatically uses hardware compression if it exists.

I think I'll uncheck Compression while backing up to the hard drive sets to keep things backing up quickly and not hammering the server, and then let the hardware compression work when we transfer to tape...does that make sense?

I actually don't see an option to turn off hardware compression anyway, but I could have just missed it.


Just now, Lennart_T said:

I think tape stations always have hardware compression. I don't even think you can turn it off (in Retrospect).

So trying to turn on software compression is useless, I'm afraid.

 

LOL, that's what I was just wondering! We're all in sync here...


20 hours ago, Lennart_T said:

I think tape stations always have hardware compression. I don't even think you can turn it off (in Retrospect).

So trying to turn on software compression is useless, I'm afraid.

 

What Lennart_T says may "always" be true nowadays—especially for LTO "tape stations", but it wasn't true in the past.  IIRC my first DAT drive, from DAT Technologies, did not have hardware compression—which I could have used because I was at one point backing up 4 machines in my and my then-wife's home installation.  I was creating at least 2 DAT tapes from my 7-hour Saturday Recycle runs, but I couldn't use software compression because my "backup server" machine was slow.  I had hopes when I got the HP StorageWorks DAT72 drive, but it turned out Retrospect Mac 6.1 didn't support its hardware compression feature.

backy, make sure for your Transfer scripts that you don't click the More Choices button shown in the dialog on page 213 of the Retrospect Windows 16 User's Guide.  That button leads to the options shown on pages 360–361, but you want those options left at their defaults of Match source volumes to Catalog File and Don't add duplicates to Backup Set.  That will make sure newly-backed-up-to-disk files are copied to tape once—and only once—so long as their contents don't change, allowing emergency retrieval despite later grooming of your disk Backup Sets.
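The effect of those matching defaults can be modeled as a copy-once transfer: files already present in the tape set are skipped. Here files are identified by (path, modification time, size), which is a simplification for illustration, not Retrospect's actual matching algorithm:

```python
# Conceptual model of "copy once and only once": a transfer skips files
# the tape set already holds. File identity here is (path, mtime, size),
# a simplification of real catalog matching. All names are hypothetical.

def transfer(disk_set, tape_set):
    """Copy to the tape set only files not already present there;
    return the list of file keys that were actually copied."""
    copied = []
    for key, data in disk_set.items():
        if key not in tape_set:
            tape_set[key] = data
            copied.append(key)
    return copied

tape = {}
disk = {("proj/a.mov", 1000, 50): "v1", ("proj/b.mov", 1000, 70): "v1"}
print(transfer(disk, tape))            # first run: both files copied
disk[("proj/c.mov", 2000, 10)] = "v1"  # a new file arrives on disk
print(transfer(disk, tape))            # second run: only the new file
```

Because the tape set keeps its copy, a file can later be groomed out of the disk set without being lost from tape.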

 

 


backy,

After some more belated thought and one little experiment, I'd like to revise my recommendation in the last paragraph of this up-thread post.  If you want to use a Storage Group defined on a Retrospect Windows "backup server" as a Destination, you may be able to get away with it—but I'd  advise against it.

My belated thought was that you could define a Grooming policy for the Storage Group.  My experiment showed I can do this even on a Retrospect Mac 16 "backup server".   Presumably the Grooming policy is applied to each component Backup Set as it is automatically created—when a new machine-volume is added as a Source for a script whose destination is the Storage Group, but I can't confirm this because of the so-far-incomplete Storage Group GUI in Retrospect Mac.  Also presumably in Retrospect Windows you could modify the Grooming policy and the initial Member size for a particular component Backup Set, but again I can't confirm this.  Ability to do those modifications depends on the capability of using the Retrospect Windows GUI to directly access a component Backup Set; there's currently no such capability in Retrospect Mac's GUI.

The combination of these two capabilities—if they exist in Retrospect Windows—would allow you to tailor the maximum initial Member size for each component Backup Set.  This—done carefully—would enable you to ensure that the sum of all components' initial Member sizes never actually exceeds the size of the Storage Group's defined Destination disk.  Therefore, if you ran Transfer scripts frequently enough, you could make sure that all files from components had been Transferred to tape before they were groomed out of existence.  So you wouldn't have to run any Recycle scripts having the Storage Group as a Destination; you could rely on the components' Grooming policies.
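The fit condition described above is easy to sanity-check up front. The component names, member caps, and disk size below are all hypothetical placeholders:

```python
# Sanity check for the tailoring strategy above: the sum of every
# component Backup Set's maximum initial Member size must not exceed
# the Storage Group's destination disk. All figures are hypothetical.

component_member_caps_gb = {
    "office-pc1": 150, "office-pc2": 150, "office-mac": 150,
    "prod-ws1": 800, "prod-ws2": 800,
    "remote-mail": 100,
}
destination_disk_gb = 4000

total = sum(component_member_caps_gb.values())
print(total, total <= destination_disk_gb)  # 2150 True
```

If that check fails, either the per-component caps come down or the destination disk has to grow, because exceeding it is exactly what forces the Recycle workaround described up-thread.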

If you can add additional Members to a particular component Backup Set, that would provide an additional safety factor.  I can't do this either, again because the so-far-incomplete Retrospect Mac Storage Group GUI won't let me directly access a Storage Group's component Media Sets.

Of course your Transfer scripts wouldn't be copying files simultaneously backed up by your Proactive scripts (because you couldn't make them use one Execution Unit)—pending enhancement per my Support Case #54601 (case# in P.P.S.).  And it'd take substantial effort for you to explain this strategy to another employee of your company.  Undoubtedly my recommendation in that up-thread post that you not use a Storage Group is still the wise choice.

 

 

