
Scanning incomplete, error -108 (not enough memory)



Recently, I've been running into this error frequently when I attempt to back up a large number of clients (well, large for me: 5 or 6).

 

Since I get no useful information from the Operations Log, here is a screen shot of the latest failed backup. (I use Immediate Backup: I initiate it before I retire for the evening and check it in the morning.) I tried to use the image tags option, but it doesn't seem to work on this forum; it hangs. So here is a link to the screen shot:

 

http://www.karma-lab.com/images-pub/scanningincomplete.jpg

 

I used to have 4 GB of RAM in the backup machine. Thinking it might be a RAM issue, I recently upgraded the machine to 12 GB - that oughta be enough, huh? No change... so the amount of RAM obviously doesn't affect the situation.

 

Main machine, running Retrospect 6.1.230; driver update 6.1.15.101:

Mac Pro Quad (2 x 3.0 GHz Dual-Core Intel Xeon) - OS X 10.4.11, 12 GB RAM

 

All clients are running the Driver Update 6.2.234. Several are PPC Macs running 10.4.11; the last one in the list - the one that failed - is a MacBook Pro (Intel) running 10.5.7.

 

The backup destination is a 1 TB Iomega UltraMax external hard drive, currently with over 800 GB available.

 

I have seen references in other posts to a maximum number of files; it's unclear to me whether that is per device, per source, per the whole backup, or what exactly.

 

What can I do to rectify this situation?

 

===================

Other Question:

If I run into this situation, is there a way to deselect the last backup client in the list (the one that failed) without having to rescan all 5 other sources? If there is, I have not been able to find it.

 

If I open Preview at this point, it will eventually try to scan the final entry in the list again. If I cancel out of that and then hit the Backup button, it starts trying to scan it again. And if I open Sources and deselect the last backup client, it starts completely over.


Retrospect 6 can't use more than 2 GB of physical RAM. Your problem is trying to back up a huge number of files using an Immediate Backup. As you have discovered, Retrospect will scan all of the source volumes in an Immediate Backup before actually performing the backup.

 

The solution is to write and save a backup script. Most of us use our scripts to schedule backups to occur automatically, but there's no problem choosing to run a script manually if that's what you want to do. (The list of all the scripts you have written is conveniently available from the Run menu.) The primary advantage in your case is that with a scripted backup, Retrospect will scan, match, and back up each source in sequence, which reduces the memory demands.
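
To see why that helps with memory, here's a purely illustrative Python sketch - not Retrospect's code, just the shape of the two behaviors:

```python
# Purely illustrative -- not Retrospect's code. It contrasts the two
# behaviors described above: an Immediate Backup scans every source before
# copying anything (peak memory grows with the TOTAL file count), while a
# scripted backup handles one source at a time (peak memory grows only
# with the largest single source).

def scan(volume):
    """Stand-in for Retrospect's scan phase: builds a file list for one volume."""
    return [f"{volume}/file{i}" for i in range(1000)]

def back_up(files):
    """Stand-in for the match/copy phase."""
    pass

sources = ["ClientA", "ClientB", "ClientC", "ClientD", "ClientE"]

# Immediate Backup: all scans are held in memory before any copying starts.
all_scans = [scan(v) for v in sources]   # memory ~ sum of all file lists
for files in all_scans:
    back_up(files)

# Scripted backup: scan, match, and back up each source in sequence.
for v in sources:
    files = scan(v)                      # memory ~ one file list at a time
    back_up(files)
    del files                            # released before the next scan
```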

 

You can still run into problems if a particular source volume has too many files on it. The solution in that case is to define smaller subvolumes in Configure > Volumes and then to use the subvolumes as sources in place of the parent volume.


Main machine, running Retrospect 6.1.230; driver update 6.1.15.101:

Mac Pro Quad (2 x 3.0 GHz Dual-Core Intel Xeon) - OS X 10.4.11, 12 GB RAM

 

All clients are running the Driver Update 6.2.234. Several are PPC Macs running 10.4.11; the last one in the list - the one that failed - is a MacBook Pro (Intel) running 10.5.7.

Retrospect 6 can't use more than 2 GB of physical RAM.

Yup. This is a limitation of Retrospect 6's Carbon API running under Rosetta. It's a miracle that it works at all.

 

Retrospect 8 does not have this limitation, and you might want to give it a try. With your configuration, doing backups to an external hard drive, R8 just might work for you. Try the 30-day trial edition and see - it can coexist alongside Retrospect 6.

 

Russ


I believe the app has a memory leak. I get this error every couple of days when trying to back up my biggest Mac client. I quit and restart Retrospect and then it works fine for another couple of days.

 

Don't switch to 8.1 - it is far worse. I've gone back to 6.1 and am putting up with this nuisance of an error because 6.1 otherwise works. 8.1 was nothing but problems for me. It's cost me days and some portion of my sanity.

 

-Mike

 


Retrospect 6 can't use more than 2 GB of physical RAM. Your problem is trying to back up a huge number of files using an Immediate Backup. As you have discovered, Retrospect will scan all of the source volumes in an Immediate Backup before actually performing the backup.

 

The solution is to write and save a backup script. Most of us use our scripts to schedule backups to occur automatically, but there's no problem choosing to run a script manually if that's what you want to do. (The list of all the scripts you have written is conveniently available from the Run menu.) The primary advantage in your case is that with a scripted backup, Retrospect will scan, match, and back up each source in sequence, which reduces the memory demands.

I tried this last night. I put the same 5 main sources into a backup script and ran it manually.

 

At first, watching it work, I was excited - I liked how it processed and scanned each volume sequentially rather than all at once. That is how I always wanted it to work, so I'm glad you got me to try backup scripts, which I had never used before.

 

So, I went to bed. This morning? Yep - the last volume failed in exactly the same way. So the scripted approach seemed to have no real effect.

 

You can still run into problems if a particular source volume has too many files on it. The solution in that case is to define smaller subvolumes in Configure > Volumes and then to use the subvolumes as sources in place of the parent volume.

I don't think it has too many files, as I can back up that volume by itself with no problems. It's when it comes at the end of the script or backup, after all the others, that it fails. This does tend to support the idea of a "memory leak," or that each volume's backup operation doesn't release its RAM, or something like that.

 

Questions:

1) You mention defining smaller subvolumes. How does one actually go about taking a single hard drive and making it into two or more subvolumes, such that the whole thing gets backed up? When I click on a Mac OS X client and choose "Subvolume" in the Volumes Database, it shows me the list of 15 or 20 folders at the root level of the drive and lets me define one of them as a subvolume. Do I need to make 15 or 20 subvolumes to get a single drive backed up?

 

2) I'm looking into making scripts that only back up a few clients at a time rather than all of them. What happens if you schedule two scripts to run, say the second one 2 hours after the first, and they "overlap" (i.e., the first one isn't done yet)?


This morning? Yep - the last volume failed in exactly the same way... I don't think it has too many files, as I can back up that volume by itself with no problems. It's when it comes at the end of the script or backup, after all the others, that it fails. This does tend to support the idea of a "memory leak," or that each volume's backup operation doesn't release its RAM, or something like that.

The difference you observed between backing up a single source and backing up a series is very interesting. You might try making each volume the sole source of its own script and running those scripts one after another, to see if that makes any difference. (Use the "Duplicate..." function in the Scripts menu to expedite the process.)

 

When I click on a Mac OS X client and choose "Subvolume" in the Volumes Database, it shows me the list of 15 or 20 folders at the root level of the drive and lets me define one of them as a subvolume. Do I need to make 15 or 20 subvolumes to get a single drive backed up?

Yes, unfortunately. You can't create a subvolume by combining multiple subdirectories. However, if what you're suggesting turns out to be true - that there's something akin to a memory leak going on - backing up the volume as a series of subvolumes may not help either.

 

I can't speak to the issue of a possible memory leak under Rosetta. We run on a PPC platform and have never run into memory problems despite the fact that our copy of the Retrospect app is often up and running for weeks and, during that time, performing hundreds of client backups. You might want to see how much memory Retrospect is using after each run of a script and whether the usage is going up.
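
If you want to check that without babysitting Activity Monitor, here's a rough sketch of one way to log the app's resident memory via the standard ps tool; the process name "Retrospect" is an assumption - adjust it to whatever ps actually reports on your machine:

```python
# Rough sketch: log Retrospect's resident memory using `ps`.
# The process name "Retrospect" is an assumption -- adjust as needed.
import subprocess

out = subprocess.run(["ps", "axo", "rss,comm"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "Retrospect" in line:
        rss_kb = int(line.split(None, 1)[0])   # RSS is reported in KB
        print(f"Retrospect resident memory: {rss_kb / 1024:.1f} MB")
```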

 

What happens if you schedule two scripts to run, say the second one 2 hours after the first, and they "overlap" (i.e., the first one isn't done yet)?

The second script will run as soon as the first is finished. If several scripts are waiting to run, they will run in alphabetical order rather than in their originally scheduled time order.
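
In other words (a minimal sketch of the dispatch rule, with made-up script names):

```python
# Minimal sketch of the dispatch rule described above -- this is not
# Retrospect's actual code, and the script names are made up.
waiting = [
    ("Clients-West", "04:30"),   # scheduled later, but alphabetically first
    ("Servers",      "02:30"),
]

# While one script is running, others simply accumulate in the waiting list.
# When the running script finishes, the next one is chosen by NAME,
# not by its originally scheduled time:
name, scheduled_for = min(waiting, key=lambda s: s[0])
print(f"Next to run: {name} (was scheduled for {scheduled_for})")
# -> Next to run: Clients-West (was scheduled for 04:30)
```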


The difference you observed between backing up a single source and backing up a series is very interesting. You might try making each volume the sole source of its own script and running those scripts one after another, to see if that makes any difference. (Use the "Duplicate..." function in the Scripts menu to expedite the process.)

Actually, that is sort of what I arrived at anyway. What I ended up doing was creating 5 different scripts - two of them have two clients each; the others are single machines or subvolumes. I've scheduled them to run roughly 2 hours apart starting at 2:30 am, and when I come back in the morning, they're done.

 

So far, I'm happy to report that this is working really well. :) I haven't had a single problem since starting this.

 

What happens if you schedule two scripts to run, say the second one 2 hours after the first, and they "overlap" (i.e., the first one isn't done yet)?

The second script will run as soon as the first is finished. If several scripts are waiting to run, they will run in alphabetical order rather than in their originally scheduled time order.

Thanks for the info, that's good to know.

 

A few other questions about scripts, if you (or anyone else) don't mind:

 

My backup strategy involves backing up to a different external hard drive every week; I have 4 of them, which I rotate. The newest one goes offsite each week.

 

So, there's a backup set on each hard drive; of course they have different names.

 

So it seems at the moment that I have to create duplicate scripts for each of the 4 named backup sets?

 

And then, since my "backup week" runs from Thursday to Wednesday evening, it seems I cannot really use the scheduler's "day of week" option, where you select all 7 days of the week and repeat every 4 weeks. It only wants to go from Sunday to Saturday (which seems stupid, especially if you select a start day in the middle of the week; but no, it still schedules itself according to Sunday through Saturday).

 

So it seems I can do this by creating 7 different "repeating interval" schedulers for each set, and setting the weeks to 4?

 

At which point, I will have basically 5 different client scripts x 4 different named backup sets (20 different scripts), each with 7 "repeating interval" scheduler documents? Phew!

 

It's doable, but I can't help but feel maybe I'm missing something... ;)


So it seems at the moment that I have to create duplicate scripts for each of the 4 named backup sets?

No need to do this. Just list all of your destination backup sets in one script. When you then go to the Schedule section and add a scheduler, you will notice a new "To:" category with a drop-down menu for selecting the destination you want.

 

And then, since my "backup week" runs from Thursday to Wednesday evening, it seems I cannot really use the scheduler's "day of week" option, where you select all 7 days of the week and repeat every 4 weeks. It only wants to go from Sunday to Saturday.

Your last sentence is correct, but the rest is not. You could use two day-of-week schedulers for each of your backup "weeks": one covering Thursday-Saturday, the other Sunday-Wednesday. When you create each scheduler, you will, of course, need to set the correct starting date for its first occurrence; Retrospect will continue the pattern from then on. With your 4-week rotation scheme, your script would require 8 schedulers.
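
If it helps, here's a quick sketch of the arithmetic behind that rotation. The anchor date is made up, and this isn't anything Retrospect runs - it just shows how a Thursday-to-Wednesday week maps onto a 4-set cycle:

```python
# Quick sanity check of the rotation arithmetic -- not something Retrospect
# runs; the anchor date below is made up for illustration.
from datetime import date

ANCHOR = date(2009, 7, 2)          # a Thursday: day 1 of "week 0", Set 1
SETS = ["Set 1", "Set 2", "Set 3", "Set 4"]

def active_set(d: date) -> str:
    """Which backup set a Thu-Wed week / 4-week rotation uses on day d."""
    weeks = (d - ANCHOR).days // 7   # whole Thu-to-Wed weeks since anchor
    return SETS[weeks % 4]

# A Wednesday still belongs to the week begun the previous Thursday:
print(active_set(date(2009, 7, 8)))    # Wednesday of week 0 -> Set 1
print(active_set(date(2009, 7, 9)))    # Thursday of week 1  -> Set 2
```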

 

So it seems I can do this by creating 7 different "repeating interval" schedulers for each set, and setting the weeks to 4?

You could do that too if you want, but since you're backing up blocks of days to the same backup set, I'd recommend the day-of-week option, as it requires far fewer schedulers.

 

At which point, I will have basically 5 different client scripts...

...each with 8 schedulers, if you follow my recommendations.


Thanks for the detailed help. I wasn't aware that you could put more than one backup set in a script and then choose among them that way - great! I have implemented this as you suggested, and it seems to be working out so far. I'll see how it goes over the coming weeks as I rotate the backup sets. Thanks again.

