
RMS Proactive -- Normal vs. Recycle?


awnews


I'm trying out the 30-day trial of RMS 6.5. I have a W2K Server backing up itself and another W2K Server box with scheduled backups, and am trying out the Proactive option on some clients (W98, XP, Macs).

 

When doing a standard scheduled backup, Retro allows the user to set up a Schedule to do a Normal backup (e.g. daily) and a Recycle backup (e.g. monthly). However, I don't see any such option for Proactive backups, since the Schedule option doesn't work that way. The manual refers to the "Progressive Backup" under Proactive, which I assume to be a "Normal" (incremental) backup. But how do I get Proactive to do the equivalent of a Recycle so the backup set doesn't grow without bounds?


So I came across the following paragraph in the Retrospect manual:

 

"NOTE: Proactive Backup uses only the normal backup action because recycle and new media backups are inappropriate for use with Proactive Backup scripts."

 

My immediate reaction is "this is nuts!" Would someone please explain to me how Proactive backup works such that a recycle is "inappropriate"? I'm backing up to big hard drives installed on a server. Does Proactive do "Progressive" (incremental) backups or not? If so, how does one prevent a backup set from growing forever and without bounds without some sort of Recycle procedure?

 

You will *not* find me manually deleting and recreating Proactive sets that have grown too large. Does Dantz expect me to set up a scheduled recycle backup per client to each file set (another reason to have one backup set per client), which would defeat a lot of the utility of the Proactive option?


Hi

 

There is a good reason for this, but I can't quite recall the exact logic. I think what it boils down to is that doing recycles with proactive scripts could leave you in a situation where your backups have been overwritten when you weren't expecting it, leaving you without a backup.

 

The other part of this is that Retrospect does not allow you to schedule when a set is recycled. You can tell a script to _run_ a recycle, but you can't have the set do that on its own. That, in combination with the way Proactives are scheduled, makes Proactive recycle backups impossible (at the moment, anyway).

 

Since you have a separate set and proactive script for each client you will also need a separate recycle script for each client.

 

Nate

 

 


Well, from what I can tell then, Proactive backup is useless. Without a reasonable way to recycle a backup, the backup will grow without bounds. I can't keep sizes under reasonable limits without manually intervening or setting up a scheduled recycle--*useless* in a Proactive environment since the whole reason I'm using it is that the PCs are not running at predictable times.

 

It doesn't seem that hard to establish recycle criteria--every Nth backup, after N days/weeks/months (i.e. on the next Proactive backup after that), etc. I understand that I wouldn't be able to absolutely control when a (long) recycle occurs, but it's no worse than how long it took to do the backup the first time.
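To make that concrete, here's a rough sketch (Python, purely illustrative--`recycle_due` and its parameters are my own invention, nothing in Retrospect) of the kind of per-set test I'm asking for:

```python
from datetime import datetime, timedelta

def recycle_due(backups_since_recycle: int,
                last_recycle: datetime,
                now: datetime,
                every_n: int = 30,
                max_age: timedelta = timedelta(weeks=4)) -> bool:
    """Return True when a recycle should precede the next backup.

    Triggers on either criterion: every Nth backup, or a fixed
    interval elapsed since the last recycle.
    """
    return (backups_since_recycle >= every_n
            or now - last_recycle >= max_age)

# Only 12 backups done, but the last recycle was 5 weeks ago -> due
print(recycle_due(12, datetime(2004, 1, 1), datetime(2004, 2, 6)))  # True
```

A size-based trigger (absolute, or % of the volume) would just be one more clause in the same test.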

 

Regarding "doing recycles with proactive scripts could leave you in a situation where your backups have been overwritten and you wouldn't be expecting it -leaving you without a backup," that's true for scheduled backups too (even though I "know" when the recycle is going to happen, I lose all backups at that moment). If I want a true margin of safety, I need to set up multiple backup sets for Proactive backups, as I already do with scheduled backups that get recycled occasionally, so that I don't lose all backed-up data on a single recycle.

 

So please tell me how Dantz expects users to use Proactive backups (to large drive(s) on the server) when the size of the files (File or Data) will grow without bounds?

 

You also mentioned "since you have a separate set and proactive script for each client you will also need a separate recycle script for each client" referring to my other post regarding a set per client. So I would have to set a schedule recycle per client set (a lot of work to set up and almost useless as noted above since PCs are not on at scheduleable times). But that's *far* better than the alternative, where I would need to set up a recycle for an N client set and wipe out the *entire* set of data for all grouped clients at the same moment then do *full* backups for all clients in that set. I would much rather spread out the recycles over clients and over time.

 

I think that Dantz needs to do some *serious* rethinking about some key areas of Proactive backup.


Try this:

 

Schedule proactive backup (saving to Set A) to run 24/7 except one day, say, Sunday 12-1AM.

 

Schedule a recycle job for Set A to back up nothing (it will just erase the contents) on Sunday at 12AM.

 

You might also set up a transfer or duplicate job before the recycle job to copy backup Set A to a different set.
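The idea is to carve a blackout window out of the proactive schedule and drop the recycle into it. A small illustrative sketch (Python; the function is hypothetical--in Retrospect you'd express this through the schedule UI):

```python
from datetime import datetime

def proactive_window_open(t: datetime) -> bool:
    """True when the proactive schedule is active: 24/7 except the
    Sunday 12-1AM slot reserved for the recycle job."""
    return not (t.weekday() == 6 and t.hour == 0)  # weekday() 6 == Sunday

assert proactive_window_open(datetime(2004, 1, 4, 0, 30)) is False  # Sunday 12:30AM
assert proactive_window_open(datetime(2004, 1, 4, 1, 30)) is True   # Sunday 1:30AM
```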

 

Good luck.

 


Mikee


Hi

 

The textbook answer is to create a standard script to go along with your proactive script, to handle the recycle backups only. Standard scripts take precedence over the proactive backup, so you can schedule them to overlap with no trouble. (It's not quite what you are looking for, but that is how it is designed to work.)

 

One thing to note: if you schedule a recycle backup of an empty folder, the set will be recycled even though there is nothing to back up. Using that kind of script along with proactive backup is one way to keep backup set size in check.

 

From a Retrospect perspective, the best way to go about the backup would be to create two backup sets and rotate between them. When you recycle one, you will still have the other on hand. In your case that would mean backing up all of the clients into these two sets rather than individual ones.

 

Nate

 


Thanks for the ideas.

 

On the "create a standard script to go along with your proactive script to handle the recycle backups only" idea: Correct me if I'm wrong, but unlike a normal/recycle backup where it's possible to use the same script & settings and *only* tweak the schedule area, for the Proactive & Recycle combo I'm going to have to create two *totally independent* scripts (one Proactive, one Scheduled) and point them both to the same Disk/File backup. That's a lot of work, esp. if I don't want to group the clients and instead have independent scripts.

 

Since I can't depend on the client PC being on at the scheduled time of the recycle, what happens when the recycle runs and the client isn't present--does the script run and fail, with or without doing the recycle? The "empty folder recycle" doesn't help here if the client isn't even present.

 

If the recycle doesn't purge the backup set when the client isn't accessible, I can't rely on it to run, and data will continue to accumulate without a definitive bound, with potential but unreliable recycles weeks or months apart.

 

If the scheduled recycle will purge the set regardless of whether the client is present (or if I recycle from an empty folder also on the SERVER, *not* on the client), and the recycle happens long (days, weeks?) before the next proactive backup [of unpredictable date & time], I've just created a large hole where data is deleted but a new full backup isn't run. This is drastically different from a regular recycle, where data is purged but the new full backup is immediately created in the set.

 

-------------------------------

On your other suggestion: yes, I could go to a lot of trouble to create two rotating backup sets (esp. if I group clients) and rotate a recycle between them. But I still don't like the kludge of how the whole O-Ring "textbook answer" for Proactive recycles is "intended" to operate. I had planned to create two proactive backup sets, with one (e.g.) run weekly, recycled monthly, and the other run monthly, recycled yearly plus some recycle offset, and was stunned to find this serious deficiency with the expensive Proactive option.

 

It also sounds like, to cut down on the manual config for recycles, it would make a lot more sense to create a few Proactive sets with many clients (few sets to create, few recycles to set up). But the serious problem here, as I pointed out in a previous post, is that after a recycle of the "many client set," the Proactive backup is going to have to back up a *lot* of clients all at once, generating a lot of (10Mbps) network traffic in a short period. Keeping the recycles spaced out over many backup clients and sets is far less risk and doesn't load down the server and network in a short timeframe.

 

--------------------------------

Please "suggest" to the Dantz software engineers that, to be useful, Proactive backup needs to include an embedded recycle option inside the Proactive script. I've suggested a few simple ways--every N backups, after a defined time interval (e.g. days/weeks/months), if the backup exceeds a size (absolute, % of something, etc.).

 

As far as whether I continue, this may be a show-stopper on the trial. Proactive was the differentiator that made RMS useful and attractive in this environment. However, I don't know that I can "sell" the program given this serious limitation and the need for massive configuration and hand-holding.

 


Hi

 

In your post you mentioned the following types of recycles:

Quote:

every N backups, after a defined time interval (e.g. days/weeks/months), if the backup exceeds a size (absolute, % of something, etc.).

 


 

These ideas share a fundamental problem with the "empty folder" recycle backup. With proactive backup there is no way for Retrospect to verify that the required sources are available to perform a full backup after the set is recycled. Proactive backup performs its backups one source at a time; it polls for every source in the script, but it does not stop or give an error if it can't find or back up a source. Assuming Proactive backup supported recycles, the entire set would be recycled even if only one of your sources was available for backup.
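To illustrate why (a toy sketch in Python, not Retrospect's actual code): if a set-wide recycle ran at the start of a polling pass, every source's old data would be wiped, but only the reachable sources would get fresh backups afterwards:

```python
def proactive_pass(sources, is_available, backup, recycle_set=None):
    """One polling pass over a Proactive script's sources: each source
    is tried in turn, and an unreachable one is skipped silently,
    never treated as an error."""
    if recycle_set is not None:
        recycle_set.clear()  # a set-wide recycle wipes EVERY source's data
    backed_up = []
    for src in sources:
        if is_available(src):
            backup(src, recycle_set)
            backed_up.append(src)
        # unavailable: skip quietly; the poller simply tries again later
    return backed_up

# Three clients, only one reachable: the recycle destroyed all three
# clients' old data, but only one got a fresh backup afterwards.
set_contents = {"pc1": "old", "pc2": "old", "pc3": "old"}
done = proactive_pass(
    ["pc1", "pc2", "pc3"],
    is_available=lambda c: c == "pc2",
    backup=lambda c, s: s.__setitem__(c, "fresh"),
    recycle_set=set_contents,
)
print(done, set_contents)  # ['pc2'] {'pc2': 'fresh'}
```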

 

A standard script will not recycle the set if the specified source can't be found. It can also be used to perform a recycle after a defined time interval (e.g. days/weeks/months).

 

It is true you can end up with a "hole" in your backup if you don't manage the recycles correctly. That is why Dantz recommends using standard scripts to run the recycles - unlike proactive backup, standard scripts follow a strict and predictable timeline.

 

For example:

Create a proactive script with destination Backup set A

Create a proactive script with destination Backup set B

Using identical sources for each script, specify that the sources get backed up every day. This means each machine is getting backed up twice a day, but it also means you have two full backups on hand.

 

Create one standard script with both Set A and Set B as sources.

Use an empty local folder or your standard set of clients as sources.

Schedule regular Recycle backups in an offset schedule by adjusting the start date and "weeks" value.

The end result would be that you do a recycle of one set one week, then recycle the other set the following week. Since you always have another set on hand, you never run into the "hole" you mentioned.
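The offset rotation amounts to something like this (illustrative Python; the start date is just an assumed first-recycle Sunday):

```python
from datetime import date

def set_to_recycle(today: date, start: date = date(2004, 1, 4)) -> str:
    """Alternate weekly recycles between the two sets, counting whole
    weeks since an assumed first-recycle Sunday. One full set always
    survives the other's recycle."""
    weeks_elapsed = (today - start).days // 7
    return "Set A" if weeks_elapsed % 2 == 0 else "Set B"

print(set_to_recycle(date(2004, 1, 4)))   # Set A
print(set_to_recycle(date(2004, 1, 11)))  # Set B
print(set_to_recycle(date(2004, 1, 18)))  # Set A
```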

 

You are right, of course, about the bandwidth. Once a set gets recycled, Retrospect is going to hit every machine up for a full backup. Keeping many smaller sets is indeed a good way around the problem. There are some disadvantages too - no file matching between clients, and a larger number of sets and scripts to manage.

 

How many sets were you planning to create in total? Standard scripts might be a better way to go than proactive in your case.

 

Nate


What Proactive should do is first check to see if a client is available. If it is, it runs the backup as one would expect--a normal backup or, if the "timeout" (e.g. every N backups, after N weeks, etc.) has expired, a recycle backup. A normal backup or recycle backup only happens if the client is available.
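In rough code terms, here's the per-client logic I'm proposing (a Python sketch; every name here is hypothetical, this is not how Retrospect behaves today):

```python
def proactive_backup(client, backup_set, is_on, recycle_is_due,
                     full_backup, incremental_backup):
    """Decide anything only after confirming the client is reachable,
    so a recycle can never empty a set without a full backup
    immediately refilling it."""
    if not is_on(client):
        return "skipped"                   # no error; poll again later
    if recycle_is_due(client):
        backup_set.clear()                 # recycle the set...
        full_backup(client, backup_set)    # ...and refill it at once
        return "recycled+full"
    incremental_backup(client, backup_set)
    return "incremental"

store = {"c1": ["old"]}
# Client off: nothing happens, even though a recycle is due.
r1 = proactive_backup("c1", store, is_on=lambda c: False,
                      recycle_is_due=lambda c: True,
                      full_backup=None, incremental_backup=None)
# Client on: recycle plus immediate full backup, no hole.
r2 = proactive_backup("c1", store, is_on=lambda c: True,
                      recycle_is_due=lambda c: True,
                      full_backup=lambda c, s: s.__setitem__(c, ["full"]),
                      incremental_backup=None)
print(r1, r2, store)  # skipped recycled+full {'c1': ['full']}
```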

 

I agree that there's a problem if multiple clients are in the same set, since the recycle will clobber the whole thing and then only available clients will get backed up. That's yet another reason why multi-client backup sets are a lousy idea. I'd much rather set things up for a single client per backup set to minimize recycle time, overlap, etc.

 

You seem to be missing the point on using a standard script on a proactive client (or perhaps you got it but don't appreciate the ramifications)--the client isn't there at known & expected times. That's the whole reason and beauty of the proactive option, since it scans for clients and backs them up "roughly" on a schedule (or at least queues them to the top of the list when they reappear). If I use a standard recycle (besides all the separate manual work of setting that up), it can fail, and fail, and fail... on schedule, if the client isn't there. This is *not* "strict and predictable," since the recycle may fail to run, virtually forever, if the client isn't on when the recycle(s) are scheduled.

 

I understand that I can use an empty folder from another location to force a recycle even if a client isn't present. But this just guarantees that the backup set will be destroyed regardless of the presence/absence/existence of the client. Think about your own example if the client isn't available for a week (or weeks, or a month...). The forced "empty folder" recycle clobbers Proactive Backup Set A, then the second forced "empty folder" recycle clobbers Proactive Backup Set B, leaving me with *no* backup. Neither of the forced recycles had any idea that the client wasn't on during that week, so neither could know that the recycle shouldn't have been performed and the second set shouldn't have been erased.

 

And we agree on the bandwidth issue. Based on what I'm seeing, it makes no sense whatsoever (other than saving a lot of manual setup), with the current poorly implemented Proactive script behavior, to group clients into sets. You guarantee maximum data destruction on a (scheduled) recycle with no way to guarantee that the clients will be backed up again. And you guarantee maximum network and machine usage in a very short period after a recycle, rather than spreading out the subsequent full backups.

 

I appreciate you making some suggestions, but I think it is glaringly obvious that the Proactive strategy has been weakly implemented and poorly thought out. It *does* need a viable "Proactive recycle" feature and it's not that hard to come up with one. It seems that the "we can't come up with one" is based on the idea that clients must be grouped into sets, but that's a poor backup strategy to begin with.

 

I've also been playing with setting up clients and think that has been poorly implemented too. Clients have to be "Added" one at a time (it would be much easier to "scan for all new" and have a common password by default). There should be some way to tag clients as belonging to a group so they would be handled in common ways as a group. I already suggested the idea of a parameterized script where clients could be set to generate output File/Disk backups as Client1.rpf, Client2.rbf while using the same script (much easier to set up). The current system requires much too much manual setup and config (even with script copying) in an environment with a large number of clients.

 

On the "How many sets were you planning to create in total? Standard scripts might be a better way to go than proactive in your case" question: standard scripts will *not* work in my environment. The (school) computers are not on at known and predictable times. They are off at night. They are randomly turned on (and off) during the day. They can be off for long periods of time due to absences, holidays, long vacations, etc.

 


It's also a shame that Proactive backups can *only* do Backups and not Duplicates. Although I'd prefer to do Backups, the (lack of) recycle fiasco makes it just about impossible. Doing a Duplicate would be a crude workaround (it would only run if the client is present, so no data-destroy-only recycle, and it wouldn't grow without bounds), but that doesn't seem to be an option with Proactive.


Archived

This topic is now archived and is closed to further replies.
