I upgraded to 5.0 and am now seeing Type 2 errors when trying to do an immediate backup of the local machine. It gets through about 40,000 files and 12,000 folders and then dies. I upped the memory allocation to almost everything available (180 MB), and it now makes it through about 50,000 files before dying.

Anyone else see these errors?

Between this and the dreaded "elem.c-812" error, which I also get constantly when trying to use the backup server with remote clients, I am deeply unhappy with this software release. Meanwhile, all of my machines are going without backups and I'm getting very nervous.

For the record: OS 9.2.2, VM off, 256 MB G4, backing up to a FireWire-connected Maxtor hard drive.

I managed to resolve this one on my own. What I had done was install OS X on one of my G4s, along with Retrospect Desktop 5.0, but left 9.2.2 running for the moment (training issues). It looks as though Retrospect damaged the catalog when it split it out of the resource fork (I back up to file backup sets, which would not work for OS X backups in 4.3 because of the resource-fork catalog size limit).

It appears that, perhaps in the course of splitting the catalog out of the backup file, the catalog was damaged. Any further attempt to back up would fail with a Type 2 error during the file scan.

The fix was to rebuild the catalog from the backup set. When I started the rebuild, Retrospect immediately complained that the catalog was corrupted, and after the rebuild completed I could back up the machine without any problems.

It is interesting that the backup process did not detect the corrupted catalog, but the rebuild process did.

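Purely as an illustration of why that asymmetry is plausible (the record layout and names below are hypothetical; Retrospect's real catalog format is proprietary): if an incremental backup only appends new records while a rebuild re-verifies every record, only the rebuild ever touches the damaged entries.

```python
import hashlib

# Hypothetical model of a backup catalog -- the record layout and names here
# are illustrative only; Retrospect's actual catalog format is proprietary.

def checksum(payload: bytes) -> str:
    return hashlib.md5(payload).hexdigest()

def make_record(payload: bytes) -> dict:
    return {"payload": payload, "sum": checksum(payload)}

def incremental_backup(catalog: list, new_files: list) -> None:
    # An incremental pass only appends new records; it never re-reads the
    # old ones, so existing damage goes unnoticed.
    for payload in new_files:
        catalog.append(make_record(payload))

def rebuild(catalog: list) -> list:
    # A rebuild walks every record and re-verifies its checksum, which is
    # why it catches damage the backup pass missed.
    return [i for i, rec in enumerate(catalog)
            if checksum(rec["payload"]) != rec["sum"]]

catalog = [make_record(b"file-a"), make_record(b"file-b")]
catalog[0]["sum"] = "0" * 32               # simulate catalog damage
incremental_backup(catalog, [b"file-c"])   # still "succeeds"
print(rebuild(catalog))                    # the rebuild flags record 0: [0]
```

In that model, a backup run completes cleanly right up until it needs the damaged record, which matches the scan dying partway through.
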
Don't know if you're listening, Irena, but please add this to your bug list. I have neither the time nor the patience to try calling in on the phone...

I found a pattern to this problem which seems likely. I ran into it on two different servers attempting to back up the same machine (one was backing itself up locally; the other was backing up the first one remotely). I had just installed OS X on the machine being backed up (though I was still running OS 9), so the file count had grown immensely. In both cases the first backup had aborted in mid-flight (on one machine due to the assertion bug, on the other because of a communications failure). I use file backup sets, and because of the number of files being added, the catalog was split out of the resource fork into a separate file.

The bug *appears* to be that if the backup is aborted on the run in which the catalog is split out, the catalog is left in an indeterminate state that causes later backups to die horrible deaths. The fix is to rebuild the catalog, after which things seem to work fine. Interesting corner-case bug.
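
For what it's worth, that failure mode is consistent with the split not being done atomically. A common defense against it (sketched below with hypothetical names; this is not Retrospect's actual code) is to write the new catalog to a temporary file and rename it into place, so an aborted run leaves either the old state or the complete new file, never a half-written one:

```python
import os
import tempfile

# Hedged sketch: splitting a catalog out to its own file can be made
# abort-safe by writing to a temp file and renaming it into place.
# os.replace is atomic, so a crash mid-write never leaves a partial
# catalog at the destination path.

def split_out_catalog(catalog_bytes: bytes, dest_path: str) -> None:
    dir_name = os.path.dirname(dest_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(catalog_bytes)
            f.flush()
            os.fsync(f.fileno())         # make sure the bytes hit the disk
        os.replace(tmp_path, dest_path)  # atomic: old or new, never partial
    except BaseException:
        os.unlink(tmp_path)              # clean up the temp file on failure
        raise

split_out_catalog(b"catalog v2", "catalog.cat")
with open("catalog.cat", "rb") as f:
    print(f.read())                      # b'catalog v2'
```

With that pattern, a crash between the write and the rename just leaves a stray temp file behind; the live catalog path is never in an indeterminate state, so later backups never see a half-split catalog.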


Archived

This topic is now archived and is closed to further replies.
