
jef

Members
  • Content count: 10
  • Community Reputation: 0 Neutral
  • Rank: Occasional Forum Poster
  1. You are both right. It's been like that for ages. Very annoying, especially because you get the feeling it doesn't have to be that way. I always wondered why polling the clients had to be a 'one client at a time' process.

     Concerning the standard backup jobs at night: I was actually hoping I could increase performance to the point where users can turn off their desktops at night. Save the environment and all that ;-) But standard backup jobs for the desktops are indeed an option, as long as you don't pool too many of them together in a backup set.

     Incidentally, I also noticed that the 'Retrospect defined grooming policy' only keeps 1 snapshot for the last/current day. So if you do 2 or 3 backups per day, you'll only have the last snapshot. Not really a problem, because you still have the sessions, but finer-grained adjustments would be welcome.

     Thanks for the input guys, I'll do some testing.

     BTW, I also always wondered why the client doesn't actively keep track of all files that have changed since the last backup. That would cut the scanning time (after all, the most time-consuming part of the backup) considerably. Then again, it might not be -that- easy to implement; a rough sketch of the idea follows below.

     cheers, wim
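     A rough sketch of that change-tracking idea, in Python with the watchdog library. This is just how I picture it, not anything the Retrospect client actually does, and all names and paths are made up:

         # Sketch: keep a 'dirty set' of paths touched since the last backup,
         # so a backup pass can skip the full volume scan. Illustrative only.
         import time
         from watchdog.observers import Observer
         from watchdog.events import FileSystemEventHandler

         class ChangeJournal(FileSystemEventHandler):
             """Records every file path touched since the last drain()."""
             def __init__(self):
                 self.dirty = set()

             def on_any_event(self, event):
                 if not event.is_directory:
                     self.dirty.add(event.src_path)

             def drain(self):
                 """Hand the changed paths to the backup pass and reset."""
                 changed, self.dirty = self.dirty, set()
                 return changed

         journal = ChangeJournal()
         observer = Observer()
         observer.schedule(journal, "C:/Users", recursive=True)  # made-up path
         observer.start()
         try:
             while True:
                 time.sleep(3600)              # pretend a backup runs hourly
                 for path in journal.drain():  # only what changed, no full scan
                     print("would back up:", path)
         finally:
             observer.stop()
             observer.join()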
  2. Hi Maurice, and merci for the interesting reply.

     Exactly as you said: I noticed yesterday that the execution units preference field doesn't allow more than 8. I never even thought of touching it until now, but I never expected it to only allow -fewer- execution units. And it looked so promising ;-)

     Concerning the 4 backup sets: my current experience is that the proactive scheduler is so inefficient, especially when a bunch of laptops are unavailable a lot of the time, that the server is not optimally used. I have the feeling that if you can guarantee a backup set will be 'available' when the client is checked (i.e. one per client), the utilization of the server will go up and more backups can be made during the day. The 100 scripts would indeed be a pain and ask for careful planning.

     As to recreating the backup sets: I don't know if you have experience recreating a 1 TB+ backup set, but I can assure you that in my case it can take 24 hours or more. And yes, as you said, they do tend to break once in a while.

     Perhaps the better solution is in the middle: have 5 to 10 desktop clients per backup set and 1 for each laptop user. The latter to make sure that IF a laptop is online, it won't be skipped because the backup set is busy...

     No single-instance storage on OS files? Wow, that sounds odd, I always took those for perfect candidates. After all, if you're backing up 100 XP boxes... (a toy illustration of what I mean by single-instance storage follows below).

     Again, thanks for your comments, and if you can spare some more I'd be happy to read them ;-)

     cheers, jef
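     The toy illustration, my own sketch and nothing to do with Retrospect's actual storage format: a file's content hash is its storage key, so identical OS files from 100 clients are stored exactly once.

         # Toy single-instance store: the content hash is the storage key, so
         # the same system file from 100 clients occupies space exactly once.
         import hashlib

         store = {}  # digest -> file contents

         def put(data: bytes) -> str:
             digest = hashlib.sha256(data).hexdigest()
             store.setdefault(digest, data)  # a second identical copy is a no-op
             return digest

         # The same OS file arriving from two clients dedupes to one blob.
         a = put(b"...identical contents of some XP system file...")
         b = put(b"...identical contents of some XP system file...")
         assert a == b and len(store) == 1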
  3. Hi all, wanted to run this by all of you.

     At the moment we have 100 clients and about 4 backup sets that are used with a proactive backup scheme, i.e. 100 clients mapped to all 4 disk backup sets. Because of scanning/polling/busy backup sets, I'd like to try the following and am wondering what you think about this setup. At first glance it seems to have only advantages (apart from the setup time).

     Suppose you put all backup disks in one big RAID volume and then allocate a disk backup set on this volume for EACH client. So 1 client -> 1 disk backup set.

     Advantages:
     - You can choose a reasonable size for each client and groom separately
     - Smaller/faster backup sets
     - You can have tens of clients performing a backup AT THE SAME TIME. Most of the time a backup takes is spent scanning the drive anyway, so the overhead for the server would be minimal (see the sketch after this post)
     - The way I figure it, it would even allow you to back up every client several times per day instead of once every day
     - Alternatively, you could add a second RAID volume so as not to put all your eggs in one basket
     - Or perhaps do a nightly transfer of all the latest snapshots to one big 'backup' disk backup set

     Disadvantages:
     - Overhead when adding clients
     - You lose deduplication. But if you add several disk backup sets to a pool of clients, you lose a big part of the deduplication anyway.

     So, what's the verdict? Sound like a plan?

     cheers, jef
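     To make the concurrency argument concrete, a minimal sketch assuming each client maps to its own backup set, so nothing ever blocks on a busy set. All names are hypothetical:

         # Sketch: one client -> one backup set means no shared lock, so
         # backups can run side by side.
         from concurrent.futures import ThreadPoolExecutor

         clients = [f"desktop-{i:03d}" for i in range(100)]

         def backup(client: str) -> str:
             backup_set = f"/raid/backupsets/{client}"  # dedicated set per client
             # ... scanning and copying would happen here ...
             return f"{client} -> {backup_set}"

         # Tens of clients at once; scanning dominates each job, so the load
         # on the server itself stays modest.
         with ThreadPoolExecutor(max_workers=20) as pool:
             for result in pool.map(backup, clients):
                 print(result)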
  4. This proactive scheduling problem has been around for ages. I've been pointing at it since I first tested Retrospect, and I don't have the feeling it has changed yet. A couple of remarks:

     - The first answer you get is 'proactive backup is not meant for standard clients'. Why not? If it worked as it should, it would be the best backup strategy out there. Let the scheduler worry about the scheduling.
     - How come it takes -that- long to try to locate or poll a source? A second should do, no? Allowing the admin to change the timeout for these would solve half the problem already, regardless of the other quirks in proactive backup.
     - Why is there seemingly only ONE poller? If you have different proactive scripts, different backup destinations, and multiple cores to do all the processing, there doesn't seem to be a good reason to do it all sequentially (a sketch of parallel polling follows below).

     Here's hoping for a fresh Retrospect for Windows in 2009 without proactive problems,

     cheers, jef
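     What parallel polling could look like: probe all sources concurrently with a 1-second TCP connect timeout instead of walking the list one client at a time. Port 497 is the usual Retrospect client port; everything else here is my own illustration:

         # Sketch: probe every source in parallel with a short connect
         # timeout, instead of a single poller spending a minute per client.
         import socket
         from concurrent.futures import ThreadPoolExecutor

         def reachable(host: str, port: int = 497, timeout: float = 1.0) -> bool:
             try:
                 with socket.create_connection((host, port), timeout=timeout):
                     return True
             except OSError:
                 return False

         hosts = [f"client-{i:02d}.local" for i in range(100)]  # made-up names
         with ThreadPoolExecutor(max_workers=32) as pool:
             online = [h for h, up in zip(hosts, pool.map(reachable, hosts)) if up]
         print(f"{len(online)} of {len(hosts)} sources reachable")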
  5. Quote:
       Quote: Even if you ask retrospect to only recheck every xx minutes/hours?
     Don't you mean backup interval? If not, where is the setting you mean? (I have never seen it.)

     You can change these settings in the proactive backup script options, under polling intervals.

     Quote:
       Quote: And as to the "Whenever a backup execution in the script finishes, the proactive polling starts from the top" part? Is that part of the design as well?
     I have never seen that happen, so I can't comment.

     I'm not sure, but I have the feeling it is easily reproduced.

     jef
  6. Even if you ask Retrospect to only recheck every xx minutes/hours?

     And as to the "Whenever a backup execution in the script finishes, the proactive polling starts from the top" part? Is that part of the design as well? I agree there might be some logic in the fact that -every- finished execution frees up a backup set that could potentially be used to back up the proactive client with the greatest need, but other than that...

     thanks, jef
  7. Gentlemen, it's been a long time. There doesn't seem to be a resolution for the problem yet, am I right?

     Today I figured, fair enough, let's give ordinary scripts a go, as opposed to managing everything with proactive. It's been noted that every time a proactive backup execution finishes, the polling starts from the top instead of continuing. Not good. BUT! This seems to happen with EVERY SINGLE finished execution, including the normal script ones.

     So: I run a script to back up all my 80 or so clients and put all the laptops in a proactive pool. Whenever a backup execution in the script finishes, the proactive polling starts from the top. Where is the logic?
  8. Apparently, you can change some parameters at the interface level, including the timeout for connecting to a client. The minimum timeout is 30 seconds. Less would be better, but the program doesn't allow it.

     What's the effect? The 'source' status comes up faster now, which is good. More work is being done. BUT after EVERY successful job completion the proactive script is RESET, which makes the scanning start from the top again. This is no good, because the clients that need a backup ASAP and are -available- hardly get reached. A complete traversal of the proactive job list is NEVER completed! (A sketch of the traversal behaviour I'd expect instead follows below.)

     The proactive script options allow an inter-source poll time to be set, but it doesn't seem to do what I thought it would. I assumed it would ignore a source for the stated time period when it isn't reachable. Unfortunately it doesn't.

     Is there a solution to this problem? Perhaps a way to turn off the 'start from the top when a job script has run' behaviour?

     Thanks, Jef
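     The traversal behaviour I'd expect: a rotating queue that remembers its position after each finished execution instead of resetting to the top. Pure illustration of the scheduling idea, not anything Retrospect actually exposes:

         # Sketch: a ring of sources with a persistent position. After a
         # backup finishes we continue from where we were, never from the top.
         from collections import deque

         def is_reachable(host: str) -> bool:
             # stand-in for a short-timeout probe (see the earlier polling sketch)
             return host.startswith("desktop")

         queue = deque(["laptop-a", "laptop-b", "desktop-c", "laptop-d"])

         def next_source():
             """Advance around the ring until a reachable source turns up."""
             for _ in range(len(queue)):
                 queue.rotate(-1)      # step past the source just considered
                 candidate = queue[-1]
                 if is_reachable(candidate):
                     return candidate
             return None               # one FULL traversal done, nobody online

         print(next_source())  # desktop-c
         print(next_source())  # desktop-c again, but only after trying d, a, b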
  9. At the moment, it's about 20 minutes for 20 unreachable destinations, which works out to a full minute spent trying to reach each machine that in most cases just isn't turned on.
  10. Hi all, I seem to have a couple of machines that are temporarily unavailable. According to the proactive schedule, however, they need to be backed up pretty urgently. The problem I'm having now is that these machines are at the top of the ASAP list. Retrospect tries to access them one by one, which takes a LONG time, before giving the 'source' status. This results in a long wait before reaching a machine that IS available for backup. And after that one backup it starts polling from the top of the list again, which means the same wait all over.

      Any ideas? Is there a way to adjust the timeout for trying to reach a client?

      Thanks