
About wmconlon

  1. Quote: You just appear to want to rant. As previously explained, you don't have sufficient hardware to support your needs, and Retrospect is just a program that needs lots of RAM to deal with that many files. If you get more RAM, you will have better results. It's like trying to run a digital photo editing program in 512 MB of RAM.

On the contrary. I provided a clear test case demonstrating that hardware is NOT the issue:

Case 1: 2x400 MHz G4 / 1.25 GB RAM / ~10 GB free on boot
Case 2: 1x2 GHz G5 / 0.5 GB RAM / ~200 GB free on boot

Yet exactly the same fault occurred. The common factor IS Retrospect. More to the point, it is not only reasonable to wonder whether Retrospect can handle re-cataloging larger removable drives; it is prudent to assume it cannot until EMC proves it can. In fact, if other hardware is required, then it is incumbent upon EMC to provide an HCL that specifies the requirements. I don't care if it's a Beowulf cluster using MPI. The fact is that Retrospect failed repeatedly (on both machines) at about 375 GB into a re-catalog. That's not a rant; it's a statement of fact based on my repeated experiments. I'm sure others would welcome reports about re-cataloging terabyte-size drives, so we can determine the budgetary requirements needed to support this software.
  2. This procedure did work to rebuild the catalog file, so thanks very much for the suggestions! Of course, it's hardly intuitive that to rebuild a multi-disk catalog file you must give the incorrect answer (NO, there are no more disks in the set) and then do an Update to add in the next disk.

Watching this very slow process really makes me wonder why Retrospect fails. It's NOT a matter of the hardware, as it happened on two machines at the same point in the re-catalog process. The slower 450 MHz DP G4 had 1.25 GB; I double-checked the faster 2 GHz iMac, and it only has 512 MB. Maybe it's an absolute number of files, but that would be the MOST troubling cause. Fortunately my disks were 250 GB each, so the failure at about 375 GB could be managed with the above procedure. But what if I had been using 500 GB removables? Would it still happen around 375 GB? Does anyone know for sure? I doubt anyone wants to find out the hard way.

So I have to stand by my opinion that this product is no longer suitable for SOHO/SMB. It always seems to take a fair amount of time and effort to recover from one of Retrospect's problems, and as the size of our backups has increased, Retrospect's frailty is becoming increasingly apparent.
  3. The procedure is as you described. Drives are mounted one at a time. After the first drive is cataloged, Retrospect asks for the second; I unmount the first and mount the second. Retrospect then chugs along until it barfs with the modal error window. It's the Ops log that reports "Out of Memory".

The procedure was run on two systems:

DP G4 400 MHz, 1.25 GB RAM, about 13 GB free on the boot drive (where I was rebuilding the catalog)
iMac G5 2 GHz, 2 GB RAM, 260(?) GB HD, well over 200 GB free space

How is it the same problem? It's the same catalog-rebuilding process, using the same three disks; I get the same error message, occurring at the same point in the rebuild. The point of the exercise is to debunk the assertion that my "unsupported" hardware is the issue.

There are three 250 GB FW drives in the backup set:

1-BackupSet-F -- full
2-BackupSet-F -- full
3-BackupSet-F -- about half full (as reported by Retrospect)
  4. The same problem just occurred on an iMac G5: 200 GB disk (mostly free space), 2 GB RAM. After cataloging 359 GB I get a message saying "Save the Partial Session", Save or Revert. Activity Monitor showed Retrospect using about 250 MB RAM and 2.59 GB VM. I wasn't aware that a DP G4 with 1.25 GB RAM on OS X.4 was unsupported; I'm pretty sure it was a supported configuration when I paid for the software. These are removable-disk backups, with 3 disks of 250 GB each. The last backup set is about half full.
  5. The response on this forum always seems to be to give Retrospect more resources. But as I've posted in the past, this is a UNIX world: a shortage of resources reduces speed, it does not cause failure. IMO, the problems are the result of fundamental design flaws in the software (probably race conditions) that are merely masked by faster processors. But it wouldn't surprise me if there were lots of issues with malloc() as well. If the solution is to buy new hardware, then the answer is a BRU appliance, NOT Retrospect!
  6. ∆ Retrospect version 6.1.126 launched at 2/20/2007 12:05 AM
+ Executing Recatalog at 2/20/2007 12:11 AM
To backup set Backup Set F…
Not enough memory
2/22/2007 8:30:01 AM: Execution incomplete.
Completed: 2692612 files, 359.9 GB
Performance: 109.1 MB/minute
Duration: 56:18:14 (00:01:29 idle/loading/preparing)

<Rant>What am I most distressed at?
1. The catalog itself got damaged when the machine failed. How is it possible that Dantz wrote unsafe code for updating the catalog file?
2. The performance is so abysmally slow: 109 MB/min to read and recatalog from FireWire hard drives. Sure, I'm surfing and reading email on this machine, but I sure notice Retrospect hogging the CPU.
3. Out of memory. There's 10 GB of VM on this system, the original catalog that Retrospect trashed was only 4 GB, and besides, if it could build the catalog in the first place during the backup, why can't it do the same during the recatalog?</Rant>

My conclusion is that this product is no longer suitable for SOHO, never mind SMB.
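The figures in that log are internally consistent, which is worth checking before blaming the hardware. A quick sanity check of the quoted numbers (using binary GB, since that matches Retrospect's reported rate; the per-file estimate assumes the ~4 GB catalog size mentioned in the same post):

```python
# Sanity-check the log figures: 2,692,612 files, 359.9 GB in 56:18:14.
total_mb = 359.9 * 1024            # 359.9 GB in binary MB
minutes = 56 * 60 + 18 + 14 / 60   # duration 56:18:14 as minutes
rate = total_mb / minutes
print(f"{rate:.1f} MB/minute")     # matches the logged 109.1 MB/minute

# Rough catalog overhead, assuming the ~4 GB catalog mentioned above.
catalog_bytes = 4 * 1024**3
per_file = catalog_bytes / 2_692_612
print(f"~{per_file:.0f} bytes of catalog per file")
```

At roughly 1.6 KB of catalog per file, a few million files plausibly explains a multi-gigabyte catalog, which supports the "absolute number of files" theory raised earlier in the thread.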
  7. You can use the WAKE-ON-LAN capability built into most Ethernet NICs.
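For anyone wanting to script this rather than rely on the backup software, a Wake-on-LAN magic packet is trivial to send yourself. A minimal sketch (the MAC address in the example is a placeholder; the target NIC must have WOL enabled in firmware):

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255") -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times, broadcast over UDP (port 9)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    payload = b"\xff" * 6 + mac_bytes * 16   # 6 + 6*16 = 102 bytes
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, 9))

# Example (placeholder MAC address):
# send_magic_packet("00:11:22:33:44:55")
```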
  8. Yeah, and unless you can ssh into the backup server to wake it or kill the screen saver, Retrospect will do you the favor of trashing your catalog if it doesn't get to quit cleanly.
  9. I can't believe that a squirrely permission or name will cause Retrospect to NOT back up. It really seems like the default for Retrospect is to fail, and only in the unlikely event that the moon and stars are aligned will it actually complete. Most recent case in point (from the log):

- 12/30/2005 3:08:42 PM: Copying / on white…
While scanning volume /, Folder /home/judith/.Trash/0„◊/, Scanning incomplete, error -24263 (file/directory not found)

Huh? I say to myself. Retrospect doesn't back up the files and directories that ARE found, because a file or directory is NOT found? Who would write such a program? So I just:

[root@white judith]# cd .Trash/
[root@white .Trash]# ls -al
total 12
drwx------   3 judith tothepoint 4096 Dec 20 14:10 .
drwxrwx---  31 judith tothepoint 4096 Dec 20 15:11 ..
drwxr-sr-x   2 judith tothepoint 4096 Dec 20 14:06 0???
...
[root@white .Trash]# ls
0?^?^H/
[root@white .Trash]# rm -R 0?^?^H/
rm: remove directory `0\343\177\b/'? y

And problem solved. BUT WHY IS THIS A PROBLEM IN THE FIRST PLACE? So there's a problem with a file. Big deal: log it and move on. But to halt the backup? Come on, Dantz.

BTW, Retrospect Backup Server 6.1.126; the Linux client was 6.5.108.
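The culprit above was a directory whose name contained raw control bytes. Hunting for such names before a backup run is easy to script; a hypothetical helper (not part of Retrospect) that flags directory entries containing control or other non-printable characters:

```python
import os
import unicodedata

def suspicious_names(directory: str = ".") -> list:
    """Return entries in `directory` whose names contain control or
    other non-printable characters (Unicode category 'C*'), the kind
    of name that tripped the scan in the log above."""
    bad = []
    for name in os.listdir(directory):
        if any(unicodedata.category(ch).startswith("C") for ch in name):
            bad.append(name)
    return bad

# Inspect before deciding to delete anything:
# for name in suspicious_names("/home/judith/.Trash"):
#     print(repr(name))
```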
  10. I'm not looking for sympathy. I've described a verifiable, repeatable defect whose simplest explanation is a race condition, i.e., a threading problem. Explain this: a machine ("supported" hardware or not) runs a full backup to DAT just fine, with millions of files, and the backup lasts 3 days. But running on the same machine, even an incremental backup of a few thousand files to disk is unreliable. I'm sorry it wasn't clear that backups are similarly unreliable on systems d and e. And by the way, there are two different FireWire hard drives in the mix. So I think that qualifies as showing that it fails on "supported" hardware.

There have been assertions in this forum that Retrospect is somehow a hardware stress tester: that it provides such a great stress on a system that it exposes latent defects no other software exposes. Possible, I suppose, but extraordinarily unlikely. The fact that slower backups are flawless, in combination with fast backups failing on different hardware, suggests a defect in Retrospect is far more likely. Other than Retrospect failing, the system in question (the PowerMac 7300 with a Sonnet G3 upgrade running X.2.8) never crashes. It runs as a file server and never needs a reboot (though it gets one sometimes when Apple provides a security update), and rebooting it doesn't seem to make Retrospect run better. Since this is the most popular thread on the forum, I've got to believe this is not just my "unsupported" hardware.

Of course, it's possible that many of us have defective hardware (supported or not) and the assertion/checksum/consistency errors are indicative of those defects. But if that's the case, Retrospect should (as I've posted elsewhere) provide some graceful means of handling and reporting these errors. Retrospect doesn't do that. The way I find out Retrospect isn't working is by NOT getting an email saying a script is finished. I can't be the only one who thinks this logic is inverted: I would prefer to get notified when something goes wrong. But that's the subject of another thread. I only hope that Dantz will structure some tests to look into this.
  11. You miss the point completely. Disk backups don't work reliably. DAT backups do work. Same system, same software. What's the diff? Speed. Retrospect can't keep up with itself. The problem is Retrospect!!!!!!!
  12. OK, I finally identified the problem: Retrospect has a race condition that causes it to damage its catalog. Here it is in a nutshell. I'm running old hardware: a PowerMac 7300 (50 MHz bus) with a Sonnet 400 MHz G3 upgrade card. When I back up to FireWire (400 Mbps) hard disks, Retrospect will NEVER run a full week, usually much less, before halting with an assertion check, consistency check, or checksum error. This has been going on for more than a year, so I finally pulled out my old DDS-3 DAT drive and hooked it up to the external SCSI connector (5 MB/sec). Lo and behold: no errors. It backed up everything on the network, some 80-plus GB, over some three days. (It's not an autoloader, so it depends on me to change tapes. Sure would be nice to get an email alert.) I have run the DAT backup script regularly without trouble, but the scheduled FireWire backup fails as regularly as ever. I just finished another trouble-free backup to a new DAT backup set. Again, problem free!

So what's happening? A race condition between the cataloging threads and the I/O threads. When I/O is faster than cataloging (my case, with a slow computer and a fast disk), the catalog gets out of sync. But slow down the I/O and Retrospect can keep up. This kind of suggests an explanation for the old OS 9 Virtual Memory problem: paging out the catalog probably slowed it down too much. It also explains the "superstition" in this forum that faster/better hardware is needed to run Retrospect.

So how do we fix this? Dantz would have you buy fast processors to cover up their sloppy programming. I would rather see this race condition removed, since it is a long-term, persistent bug.
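The fix the post argues for — making the fast I/O side wait for the slower cataloging side instead of racing ahead — is standard producer/consumer back-pressure. A minimal sketch (a hypothetical illustration of the technique, not Retrospect's actual internals):

```python
import queue
import threading

# A bounded queue makes the fast I/O thread block (back-pressure)
# whenever the slower cataloging thread falls behind, so the catalog
# can never get out of sync with the data stream.
catalog_queue = queue.Queue(maxsize=64)   # the bound enforces back-pressure
catalog = []

def io_reader(blocks):
    """Fast producer: reads blocks from the backup device."""
    for block in blocks:
        catalog_queue.put(block)          # blocks when the queue is full
    catalog_queue.put(None)               # sentinel: end of input

def cataloger():
    """Slow consumer: records each block in the catalog, in order."""
    while True:
        block = catalog_queue.get()
        if block is None:
            break
        catalog.append(block)             # the slow bookkeeping step

blocks = list(range(1000))
t1 = threading.Thread(target=io_reader, args=(blocks,))
t2 = threading.Thread(target=cataloger)
t1.start(); t2.start(); t1.join(); t2.join()
assert catalog == blocks                  # nothing lost or reordered
```

With the bound in place, how fast the producer runs relative to the consumer only affects speed, never correctness, which is exactly the property the post says UNIX software should have.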
  13. Quote: Depending on how the connection is dropped, the client machine can get stuck thinking the connection is still active. It's rare, but I suspect that's what's happening in your case. Windows 98 is much more prone to this than NT-based OSes. You could try reinstalling or updating the network drivers on the 98 machines. Otherwise you might just have to reboot the machines regularly.

Yes, that's what happens, but whether it's rare or the fault of Windows is questionable. It happens to me several times a week on a Linux client. This is another example of the lousy design of both the server and the client. In my case, the server (6/204 on Mac OS X.2.8) fails of its own accord with an assertion check and requires manual intervention to quit. It's a good thing this server isn't headless, or I would have to kill the process in the shell, but of course I wouldn't have any idea that there was a problem, because of the lack of decent logging.

In any event, this backup system is so poorly designed that both the server and the client are incapable of handling any upset conditions. When the server fails (for its own self-determined, arcane reasons), it fails to close its open connections, so the next time you try to back up, the client reports as busy. Now how would a competent programmer handle this?

1. The server fails gracefully.
2. The server reports these conditions in both GUI and non-GUI manners.
3. The server closes its connections on failure.
4. The server recovers from failures. At the very least it should have a monitor to HUP itself.
5. The client times out its connections.
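Point 5 on that list is the cheapest to implement. A sketch of a client-side health check that times out a dead or hung server connection instead of sitting on it forever (host, port, and the PING exchange are placeholders, not Retrospect's real protocol):

```python
import socket

def poll_server(host: str, port: int, timeout: float = 10.0) -> bool:
    """Return True if the server answers within `timeout` seconds;
    a hung or unreachable peer is treated as down, never as 'busy'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)         # also bound each recv/send
            s.sendall(b"PING\n")          # placeholder handshake
            return bool(s.recv(16))
    except (socket.timeout, OSError):
        return False                      # connection refused, reset, or hung
```

Because both the connect and every subsequent socket operation carry a deadline, a server that dies without closing its connections costs the client at most `timeout` seconds rather than wedging it until a reboot.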
  14. Let me add one other point that should be relevant to EMC's CEO. You guys are in the data-protection business. If there is a risk of losing data because something has failed in my hardware, I expect you to protect me. If that means Retrospect needs to run memory checks and interface tests, so be it. But this forum is littered with people who can't get their data backed up, and worse still, can't retrieve it. It's unconscionable.
  15. 1. I ran Retrospect 5.1 on OS 9 on the same machine that had been running 4.3. So it's not just changing the OS; it's Retrospect.

2. Regarding hardware configurations, the one you cited is only one of many I have tried. It happens to still be my desktop. The only reason that one came into the thread is Dantz support's assertion that I should try backing up under X; at the time, that was the only X system in our shop. In addition to that system, I have tried:

a. PowerMac 7300, Sonnet G3/400, 320 MB RAM (balanced DIMMs), Mac OS 9.1, to SCSI DAT
b. same as a, OS X.2.8, to FireWire HD
c. PowerMac 7600, 108 MB, unknown G3/400 upgrade, Mac OS 9.2.2, to SCSI DAT and FireWire HD
d. iMac DV SE (400 MHz, 384 MB), Mac OS 9.2.2, to FireWire HD
e. same as d, Mac OS X.2.8, to FireWire HD
f. same as d, Mac OS X.3.6, to FireWire HD

I think my suite of hardware tests is sufficient to convince me that there is something rotten in Retrospect.

3. Regarding the paltry amounts of RAM, I think you've hit the nail on the head. Retrospect has a history of failing when there is paging. Readers may recall that Retrospect would not run with Virtual Memory on an AppleShare system. But under *NIX, why should Retrospect care? The only issue with paging should be speed, not failure. My instinct tells me there is something subtle about how you build and manage memory-resident indexes. But since it's been broken for so many years, how about keeping all your data in MySQL instead?