QUESTION: How many of you are running *multiple* proactive scripts?



I'm having a crashing bug (not fixed in build 733) that EMC knows about.

 

My setup has 3 separate disk media sets with 3 separate "proactive" scripts doing daily backups -- one script with about 30 Mac clients, one with 2 OS X servers, and one with about 8 Windows clients.

 

The catalog files for 2 of my 3 disk sets -- the Macs and the servers -- are over 2 GB each, even compressed; the Mac one is now about 4 GB.
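
In case anybody wants to compare numbers, here's a quick way to total these up -- a minimal Python sketch, assuming the catalogs live in the default /Library/Application Support/Retrospect folder and end in ".rbc" (both of those are assumptions -- adjust for your setup):

```python
#!/usr/bin/env python3
# Minimal sketch: list Retrospect catalog files and their sizes.
# ASSUMPTIONS (adjust for your setup): catalogs live in the default
# /Library/Application Support/Retrospect folder and end in ".rbc".
import os

CATALOG_DIR = "/Library/Application Support/Retrospect"

for name in sorted(os.listdir(CATALOG_DIR)):
    if name.endswith(".rbc"):
        size_gb = os.path.getsize(os.path.join(CATALOG_DIR, name)) / 1024 ** 3
        print(f"{name:40s} {size_gb:6.2f} GB")
```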

 

At random times -- with random clients -- the Retrospect engine "respawns": it just quits and automatically restarts (you can see this in the log, and a crash log is usually generated). The few times I've *observed* the crash, it has happened during the "matching" phase, while the engine is comparing files against the catalog.
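
If anybody wants to watch for this on their own server, here's a rough Python sketch that polls ps once a minute and logs the engine's PID and resident memory -- a new PID between samples means the engine respawned, and memory that climbs steadily during matching would point at a leak. I'm *assuming* the process is named "RetroEngine"; check ps -axo comm on your machine and substitute whatever yours is actually called:

```python
#!/usr/bin/env python3
# Rough sketch: poll for the Retrospect engine once a minute, logging its
# PID and resident memory. A new PID between samples means the engine
# respawned; memory climbing steadily during matching would suggest a leak.
# ASSUMPTION: the engine process is named "RetroEngine" -- verify the real
# name with `ps -axo comm` and change PROCESS_NAME to match.
import subprocess
import time

PROCESS_NAME = "RetroEngine"  # assumed process name -- adjust for your system

while True:
    out = subprocess.run(["ps", "-axo", "pid=,rss=,comm="],
                         capture_output=True, text=True).stdout
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    for line in out.splitlines():
        pid, rss_kb, comm = line.split(None, 2)
        if PROCESS_NAME.lower() in comm.lower():
            print(f"{stamp}  pid={pid}  rss={int(rss_kb) // 1024} MB", flush=True)
    time.sleep(60)
```

Redirect the output to a file (e.g. python3 watch_engine.py >> engine.log) and the PID column tells the story.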

 

Sometimes I've seen the engine respawn on the same client, but then I reboot everything (or don't!) and the client backs up correctly the next time.

 

Because of the size of my disk media sets (the two largest are over 400 GB each), I'm thinking the problem is either:

 

1) The individual catalog files are too large for the engine to be scanning *both of them* at the same time during concurrent backups -- which actually doesn't happen that often, because of when I have my servers back up. (There's a quick overlap check sketched after this list.)

 

2) The catalog files are just too large to be constantly scanned -- period -- and there's still some kind of memory leak in the program.
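
On theory #1: the concurrency angle should be easy to rule in or out from the operations log. I don't have a parser for Retrospect's actual log format, so this minimal sketch just does the overlap math over (start, end, script) times copied out of the log by hand -- the sample times below are hypothetical:

```python
#!/usr/bin/env python3
# Sketch: given (start, end, script) intervals pulled by hand from the
# operations log, report which executions overlapped. The sample data
# below is hypothetical -- substitute your own times.
from datetime import datetime

def t(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M")

executions = [  # (start, end, script) -- hypothetical examples
    (t("2009-05-01 09:00"), t("2009-05-01 09:40"), "Proactive Macs"),
    (t("2009-05-01 09:30"), t("2009-05-01 10:10"), "Proactive Servers"),
    (t("2009-05-01 11:00"), t("2009-05-01 11:20"), "Proactive Windows"),
]

for i, (s1, e1, a) in enumerate(executions):
    for s2, e2, b in executions[i + 1:]:
        if s1 < e2 and s2 < e1:  # the two intervals intersect
            print(f"overlap: {a} and {b} ({max(s1, s2)} - {min(e1, e2)})")
```

If the respawns only ever line up with overlapping executions, that would point at theory #1; if they happen with a single script running, theory #2 looks more likely.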

 

Is anybody running a similar setup? Multiple proactive scripts backing up to separate disk media sets for a large number of clients? With really big catalog files?

 

I'm curious to know if anybody is seeing anything similar. As far as I've been able to tell, I've never had a "respawn" crash while my Windows clients were backing up -- though that catalog file is only about 120 MB, an order of magnitude smaller.

 

With build 733 out now, I'm likely to start *new* disk media sets, but I'm wondering if I should just dump everything into *one* set/script, to rule out multithreaded backups as the cause of my crashes.

 

(And I've tried a clean "config80.dat" file -- re-adding all the clients and recreating my scripts, rules, etc. -- that didn't help.)

 

Thoughts from anybody else doing something similar?

 

I don't *need* 3 separate disk sets, but I thought it would be useful to be able to back up my Windows/Mac laptop users concurrently to separate sets during the day, rather than having them wait -- and to be able to groom each set more quickly, rather than grooming one *really big* set.

 

Thanks!
