
Assertion Check at arc.c-1563 failures



I have run into this bug about 5 times now, seemingly always when my catalogs get large (over a terabyte). It happens each and every time a catalog gets too big, and Dantz doesn't seem to want to hear about it from me unless I pay a $69 incident charge, even though I bought the server app and 50+ clients less than 4 months ago.

 

Well, here's a standard bug report since no other avenue has gotten me anywhere:

 

Reproducible:

Each and every time a catalog breaks the one-terabyte barrier

 

Steps to Reproduce:

Build a set of catalogs and run a lot of backups ;)

 

Expected Results:

backups work and are usable

 

Actual Results:

I've taken a couple of screen captures of what happens - http://macdiscussion.com/retrospect/sm_arc-c-1563.jpg

 

And then when following the directions, at seemingly random places/files - http://macdiscussion.com/retrospect/sm_arc-c-1563-1.jpg

 

 

Attempted Workarounds:

1) completely rebuilt a brand new set of catalogs, scripts, and preferences on a clean install of Retrospect Server

2) changed to a completely new set of brand new tapes

3) Re-cataloged numerous times (a very lengthy procedure at best, resulting in no backups for days each time)

4) Installed on a completely different machine, with a brand new OS install (9.2.2), and built a whole new set of Catalogs by hand each time

 

 

Any suggestions (short of keeping my catalogs below a terabyte) would be appreciated. TIA.

 


Also, for completion's sake:

 

Current Hardware:

Blue and White G3 450

512 MB RAM

40 GB hard drive

Adaptec 2940U2B SCSI card

Quantum SDLT drive

Mac OS 9.2.2

 

Previous Hardware (which had the same problems):

Blue and White G3 300

256 MB RAM

12 GB hard drive

Adaptec 2940UW SCSI card

Quantum SDLT drive

Mac OS 9.1

 

 

Is there any other info I can post that would be useful here?

 


Mark,

 

I just read your note on Macintouch, and after also seeing Eric's suggestion that you visit the Forum I thought I'd pop in and see if you'd posted. Since you were wise enough to include the appropriate information, an answer to your question is easy.

 

Retrospect 5.0 and 5.1 do not support catalogs larger than a terabyte.

 

Dave

 


Doh!

 

That's right, a catalog file itself can grow to 2 gigabytes (catalog compression prevents that limitation from being an issue for most users), while any single catalog file can only refer to up to 1 TB of data (no matter the size of the catalog file itself).

 

Mark, can you confirm that your Backup Set is larger than 1 TB (as shown in the Summary window of Configure->Backup Sets), not the catalog file?

 

Dave


Ok so that explains my problems for sure.

 

It is in fact a bug in Retrospect that I have hit (a nasty one at that, since it destroys my ability to even use the backup set once it hits this point). The fact that it doesn't stop, warn, or error out gracefully is particularly bad. Another terabyte or so of my data gone, *sigh*.

 

I really must say that I am quite disappointed that it can't handle such a meager amount (these days) of data. Had I known this I would _not_ have spent the money for the server and 50+ licenses.

 

I have more hard drive space on my network than a backup set can handle, ouch. I guess I'm off to look for an app that can handle what I need to do. Thanks for the replies.

 

P.S. I'm still aghast at the way this is handled by tech support. What happens to the stuff from the feedback form? I've sent this info as a bug at least 3 times now and not received so much as a reply.

 


Quote:

CallMeDave said:

Mark, can you confirm that your Backup Set is larger than 1 TB (as shown in the Summary window of Configure->Backup Sets), not the catalog file?

 


 

Yep, sure can: 928.8G on the damaged (and now unusable BTW) Backup Set.

 

My other few sets are hovering very close to the same mark, so I guess for the next 2 weeks I will be redoing my whole backup structure and abusing my servers/workstations :/

 

/me seriously considers just whipping up a nice python wrapper for rsync + tar
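Just to be concrete about what I mean by a wrapper, here's the kind of rough sketch I'm picturing. All of the hostnames, paths, and staging/archive locations below are made-up placeholders, and it only stages to disk rather than touching the tape drive; it's an idea, not something I've actually built or tested:

```python
#!/usr/bin/env python
"""Rough sketch of a python wrapper around rsync + tar.

Everything here (hostnames, paths, staging and archive locations)
is a made-up placeholder -- an illustration of the idea, not a
working backup system.
"""
import os
import subprocess
import time

# Hypothetical client shares to pull; in practice this list would come
# from a config file that changes as users come and go.
SOURCES = [
    "user1@host1:/Users/",
    "user2@host2:/Projects/",
]

STAGING = "/Backups/staging"        # local mirror of the clients
ARCHIVE_DIR = "/Backups/archives"   # dated tarballs end up here


def pull(source):
    """Mirror one client share into the staging area with rsync."""
    dest = os.path.join(STAGING, source.split(":")[0].replace("@", "_"))
    os.makedirs(dest, exist_ok=True)
    subprocess.run(["rsync", "-a", "--delete", source, dest], check=True)


def archive():
    """Roll the whole staging area into a single dated tarball."""
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    stamp = time.strftime("%Y-%m-%d")
    tarball = os.path.join(ARCHIVE_DIR, "backup-%s.tar.gz" % stamp)
    subprocess.run(["tar", "-czf", tarball, "-C", STAGING, "."], check=True)
    return tarball


if __name__ == "__main__":
    for src in SOURCES:
        pull(src)
    print("Wrote", archive())
```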

 

 


Anyone else have words of wisdom to offer on this?

 

The thing that is ringing in my ears is this: why does Retrospect, knowing the 1 TB limit, allow a backup set to grow past that limit and become large enough to destroy itself? I've lost 5 TB or so of data to this problem, which is not a great track record for something you rely on to back up your data, not to mention the huge amounts of time involved in recovering from it repeatedly.

 

It kind of defeats the purpose of backups if you can't retrieve anything from them, and the Dantz folks don't seem to be too forthcoming about this problem, or a workaround for it ... aside from manually monitoring your backups and the data headed for them, which is how it would have to be done, considering the program allows itself to overrun its own limit. Now that I think of it, in my setup here that's almost impossible to do.
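To be concrete about what that manual monitoring would look like, it would be something along the lines of the script below: total up the source folders feeding each set and complain when a set's data approaches the limit. The set names, folders, and 90% threshold are all invented for illustration, and it only approximates reality, since a Backup Set keeps growing with every incremental session:

```python
"""Rough sketch of manually monitoring data size against the 1 TB
Backup Set limit. The set names, folders, and threshold are made-up
examples, and this only measures the current source data -- the real
Backup Set keeps growing with every incremental session."""
import os

ONE_TB = 1024 ** 4          # bytes
THRESHOLD = 0.9 * ONE_TB    # start complaining at 90% of the limit

# Hypothetical mapping of Backup Set name -> source folders it covers.
BACKUP_SETS = {
    "Design Set A": ["/Volumes/Projects", "/Volumes/Scratch"],
    "Office Set B": ["/Volumes/Office"],
}


def folder_size(path):
    """Sum file sizes under a folder, skipping anything unreadable."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total


for set_name, folders in BACKUP_SETS.items():
    size = sum(folder_size(f) for f in folders)
    flag = "WARNING: nearing 1 TB" if size > THRESHOLD else "ok"
    print("%-14s %8.1f GB   %s" % (set_name, size / 1024.0 ** 3, flag))
```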

 

 

 


Quote:

the Dantz folks don't seem to be too forthcoming about this problem, or a workaround for it ...

 


 

It's hard to see how Dantz could be any more forthcoming about a problem than having a Systems Engineer post a personal reply to you that confirms the limitation of the program and states that their next version will address the issue.

 

This limit has been discussed here on the Forum. For example:

http://forums.dantz.com/ubbthreads/showflat.php?Cat=&Board=Desktopworkgrupx&Number=22112

 

Here's another one that includes a reply from a Dantz employee:

http://forums.dantz.com/ubbthreads/showflat.php?Cat=&Board=Desktopworkgrupx&Number=21000

 

Both threads include information on signing up for a notification specifically about this. Doesn't sound as if they're hiding anything.

 

I agree that between 5.0 and 5.1, if they weren't going to solve the data size limit, they should have put something in that would have stopped backups from self-destructing this way.

 

>...aside from manually monitoring your backups and the data headed
>for them, which is how it would have to be done, considering the
>program allows itself to overrun its own limit. Now that I think of
>it, in my setup here that's almost impossible to do.

 

Why is it impossible? Is there no way for you to use multiple Backup Sets and juggle the data between/among them? Are you hand-swapping tapes? Or do you have a loader that could contain members from multiple Backup Sets?

 

Dave


Hi Dave

 

I guess they have replied quite well to the matter, but something in their installation notes or readmes (or the manual) would have been a nice addition for something as much of a potential problem as this. I haven't scoured the docs top to bottom, but I do make a habit of reading this type of material fairly thoroughly, and I didn't see anything about this problem. I don't generally do extensive searching of forums or knowledge bases for things that cause critical failures like this one; I rely on release notes and readmes to carry information of this severity. The single worst thing I can think of for a backup application is to have your backups destroy themselves.

 

You are right that they responded quite well, though, and I apologize if I've come off heavy on this issue, but I have lost 5 TB of data, which is kind of scary and puts me in quite a situation. When you couple that with the fact that they seemed unwilling to accept bug reports without my paying money up front to speak to someone, I was in an even worse situation, especially after having submitted feedback several times with zero response. I've done bug reporting with quite a few of the major Mac software companies out there, and this is a very unusual setup.

 

I am changing tapes by hand, and the nature of my network and its users makes it quite difficult to split up sets. For example, I have desktop and laptop users who are in for very brief periods of the day, quite often at very odd hours only, whereas some other users keep strict daytime hours, etc., and take their machines with them. This list of users changes all the time, so I can't even really group them.

 

There are some obvious things that I can split off, but that still leaves me at best with some potentially HUGE sets, and in the design world they can get big fast with little to no warning. Without an easy way to monitor this, and knowing that I can lose my data if I don't monitor it closely, I'm still quite uneasy. Mostly unattended backups are no longer an option. Had I been relying on archival-type backups when this loss hit, I would have been in a very bad situation.
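The closest I've come to a plan for splitting things up is something like the toy sketch below: take a rough size for each client and greedily pack the biggest ones into sets until each set nears a self-imposed cap well under 1 TB. The client names and sizes are invented, and it ignores the way sets keep growing with incrementals, so it's a starting point at most:

```python
"""Toy sketch of splitting clients across Backup Sets so that no set
starts out anywhere near the 1 TB limit. Client names and sizes are
invented; real numbers would come from scanning the machines."""

CAP_GB = 800  # self-imposed cap, leaving headroom for incrementals

# (client, approximate data size in GB) -- hypothetical figures
CLIENTS = [
    ("design-ws-01", 310),
    ("design-ws-02", 275),
    ("laptop-pool", 180),
    ("office-server", 420),
    ("scratch-raid", 350),
]


def assign(clients, cap_gb):
    """Greedy first-fit-decreasing: biggest clients first, each into
    the first set that still has room, opening a new set if none does."""
    sets = []  # each entry is [total_gb, [client names]]
    for name, size in sorted(clients, key=lambda c: c[1], reverse=True):
        for group in sets:
            if group[0] + size <= cap_gb:
                group[0] += size
                group[1].append(name)
                break
        else:
            sets.append([size, [name]])
    return sets


for i, (total, members) in enumerate(assign(CLIENTS, CAP_GB), 1):
    print("Backup Set %d (~%d GB): %s" % (i, total, ", ".join(members)))
```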

 

I'll be re-reading portions of the manual tonight to try and assess the best way to go from here. There must be a way I can have my cake and eat it too without having to live in my server room for large portions of my day.

 

 


Quote:

When you couple that with the fact that they seemed unwilling to accept bug reports without my paying money up front to speak to someone, I was in an even worse situation, especially after having submitted feedback several times with zero response.

 


 

Ric Ford published Eric Ullman's answer to your original query, which included a web form to report bugs at no cost. And if you already submitted feedback using this form, why do you insist that there is no channel to do so?

 

>I am changing tapes by hand, and the nature of my network
>and its users makes it quite difficult to split up sets.

 

Everybody's setup is different, but if you have a single tape drive without robotics you might have good luck with a Backup Server script. This will write to whatever defined Backup Set members are available, allowing you to feed the drive tapes from alternate Backup Sets. I've never done this, and there may be issues I'm missing, but it's worth considering. Perhaps some other high-capacity users might add to the thread here.

 

>There must be a way I can have my cake and eat it too without having
>to live in my server room for large portions of my day.

 

I always found earplugs to help with the oppressive noise of enterprise server closets. Try the AOSafety sleep/rest plugs. They're really soft and comfortable!

 

Dave

 

 


Quote:

CallMeDave said:

Ric Ford published Eric Ullman's answer to your original query, which included a web form to report bugs at no cost. And if you already submitted feedback using this form, why do you insist that there is no channel to do so?

 

 


 

I'm not saying that at all. I'm saying that they do not answer, reply, or even acknowledge that they got the report in any way, shape, or form. I reported this problem 3 times with not even an email saying "we got your information, for more support options please visit blah...". This is not a _proper_ avenue for reporting bugs. For all I know that message went to /dev/null in someone's spam filter. Bug reporting generally involves a two-way communication model, or at least it does with Apple, Adobe, Macromedia, and Quark, all of whom I have done quite a bit of bug reporting/troubleshooting/resolution work with.

 

Quote:

CallMeDave said:

Everybody's setup is different, but if you have a single tape drive without robotics you might have good luck with a Backup Server script. This will write to whatever defined Backup Set members are available, allowing you to feed the drive tapes from alternate Backup Sets. I've never done this, and there may be issues I'm missing, but it's worth considering. Perhaps some other high-capacity users might add to the thread here.

 

 


 

That is in fact what I've been running here, and I have 4 backup sets sitting just below their 1 TB capacity. I haven't been able to find a way to nicely rotate my tapes with no loss, as you can with other enterprise-class backup solutions (Veritas, etc.). The recycle option in Retrospect will and does destroy data. The only hackish workaround I've found so far is to set tapes as missing when I want to recycle them, which I think will still land me at the size limit once again. If a tape is set as missing, does it remove the data from the Backup Set? From everything I tried in testing, it does not.

 

Quote:

CallMeDave said:

I always found earplugs to help with the oppressive noise of enterprise server closets. Try the AOSafety sleep/rest plugs. They're really soft and comfortable!

 

 


 

Thanks for the tip, hehe. I'm also a pro soundman/engineer and have some really nice earplugs ;) The problem here is that I run this fairly large LAN by myself, so I really don't have time to burn sitting in the server room or monitoring it all day long to swap out tapes.

 

Sorry this is getting so drawn out; I don't mean it to be, but I'm trying to illustrate that this size limit is a HUGE factor, and people should be made more aware of it. I've spent 3 days trying to work out a new backup routine that will suit my/our needs, and I really think it either can't be done with Retrospect or I'm missing something very important.


Archived

This topic is now archived and is closed to further replies.
