
dzeleznik

Members
  • Content count: 20
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About dzeleznik

  • Rank: Occasional Forum Poster

Profile Information

  • Gender: Not Telling
  1. Backup file matching problem

    FWIW, I have used Cygwin and "cp -pr" to replicate volume directory structures. All attributes have been replicated and Retrospect recognizes that existing files match those already in the backup set. Both source and target volumes have to be mounted on the same filesystem, though.
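    In case it helps anyone, this is roughly the form of the command I run from a Cygwin shell; the /cygdrive paths are just placeholders for your own source and target volumes:

        # copy the whole tree, preserving attributes and timestamps (-p), recursively (-r)
        cp -pr /cygdrive/e/SourceVolume/. /cygdrive/f/TargetVolume/

        # spot-check that sizes and modification times carried over
        ls -l /cygdrive/e/SourceVolume | head
        ls -l /cygdrive/f/TargetVolume | head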
  2. Verify reports "chain broken", how to fix?

    I have completed my testing of the situation using a much larger backup set (hence the delay in reporting back):
    1. Original backup set created and maintained with v11, incrementally verified monthly with no errors until the upgrade to v12. The backup set has been maintained with scripts with block level incremental backup enabled.
    2. First incremental verify after the upgrade to v12: 18 chain broken errors were reported. Some of the files reported are large files that were supposed to be backed up using block level incremental, others are misc. smaller files where the only "chain" is a single session and it is missing.
    3. Performed a complete verify, which reported 33 chain broken errors. The errors reported are NOT a strict superset of the 18 reported by the incremental verify in step #2. The 33 files in error are again a mixture of large dbs that are backed up using block level incremental where 1 or more sessions are missing, and other smaller files that have a single session and it is missing.
    4. Rebuilt the catalog; the log says it was a "fast rebuild", no errors reported.
    5. Repeated a complete verify to determine if the rebuild solved any issues. Answer is no, the exact same 33 errors are reported as in step #3.
    6. Transferred the backup set to a new one using a selector to exclude the 33 files that are reported to be broken. A full verify on the new backup set is clean.
    7. The next backup using the new backup set backs up the files (if they still exist on the client) that were excluded.
    Conclusions:
    - My backup set had real latent errors that were never reported or detected by v11.
    - v12 incremental verify on a v11-maintained backup set reports a mixture of real errors and false positives.
    - v12 full verify on a v11-maintained backup set reports the true set of errors.
    - Rebuilding the catalog solves nothing; the errors are missing sessions in the backup set. If they are missing, all the cataloging in the world will not bring them back from the dead.
    - Transferring the backup set minus the files in error generates a clean starting point for future backups.
    I honestly can't say whether any of this is related to the bug 6668 mentioned by the previous poster. There does seem to be a bug related to false positives reported by incremental verify on a backup set that has real errors in it. I will report this to Retrospect. In the meantime, since all of my backup sets have some errors, my strategy is to:
    - Run a full verify on each backup set to determine the true list of file errors.
    - Transfer/dupe the backup set, filtering out the broken files, to create a fresh starting point for each of my backup scripts.
    - Run a full verify on the new backup sets and then continue with monthly incremental verifies.
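    If anyone wants to repeat the superset comparison from steps #2 and #3, one rough way (assuming you paste each verify's reported file names, one per line, into a couple of text files; the file names below are just placeholders) is:

        # sort the two pasted lists of reported files
        sort incremental_errors.txt > inc.sorted
        sort full_errors.txt > full.sorted

        # print paths that appear ONLY in the incremental list; any output
        # here means the full verify's errors are not a superset of them
        comm -23 inc.sorted full.sorted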
  3. If you copied the disk backup set files (.rdb, .session, etc.) to the new disk using the same directory structure, you should be able to rebuild the catalog so it points to the files in their new location. Note that it is the catalog that uniquely identifies the volume location of the backup set, not the backup data files themselves.
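    One extra sanity check worth doing before the rebuild, sketched here for a Cygwin shell with placeholder paths for the old and new backup set folders: confirm the copy really is complete and identically structured.

        # no output means the two trees have the same files and contents
        diff -rq "/cygdrive/d/Retrospect/BackupSetA" "/cygdrive/e/Retrospect/BackupSetA"

        # quick count of the .rdb data files on each side should match
        find "/cygdrive/d/Retrospect/BackupSetA" -iname '*.rdb' | wc -l
        find "/cygdrive/e/Retrospect/BackupSetA" -iname '*.rdb' | wc -l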
  4. Verify reports "chain broken", how to fix?

    Latest status:
    1. Original backup set created and maintained with v11, incrementally verified monthly with no errors until the upgrade to v12. At the first incremental verify after the upgrade to v12, 6 chain broken errors were reported. Of special concern were several crucial application databases.
    2. Rebuilt the catalog and performed a complete verify, which reported 18 chain broken errors. If the catalog rebuild did not fix any of the errors, I would have expected the reported errors to be a superset of the errors reported by the incremental verify in step #1. This was not the case. Some of the errors reported during the incremental verify were not reported after the catalog rebuild and complete verify. In fact, none of my crucial db's are reported broken anymore; the only errors seem to be relatively minor log and misc. files. I cannot be sure if the catalog rebuild resolved some of the broken chain errors or whether the first incremental verify using v12 generates some bogus results. I will need to test this out on a different backup set.
    3. Transferred the backup set in its entirety and ran a complete verify on the new dupe. The same 18 chain broken errors were reported as for the original backup set in step #2.
    4. I am now in the process of doing another transfer of the original backup set, but this time using a selector to exclude the 18 files that are reported to be broken. Stay tuned.....
    My conclusions so far:
    - v12 reports chain broken verify errors on backup sets that have verified continuously clean using v11.
    - Rebuilding the catalog *may* fix some of the chain broken errors, still TBD.
    - Transferring the backup set in its entirety does not fix chain broken errors.
  5. Verify reports "chain broken", how to fix?

    I have just rebuilt the catalog for the offending backup set and ran a complete verify. This took quite a while, sandwiched in between other scheduled Retrospect activity, because the backup set is one of my largest at ~3TB. The complete verify turned up additional "chain broken" errors further back in time. During the interim, the scheduled monthly incremental verifies on my other backup sets completed and each one of them also reported 4 - 30 chain broken errors.
    For now, I am going to assume that the errors are real latent ones that were only just reported because v12's verification process is more thorough than v11's was. Since a catalog rebuild does not seem to solve the issue, I am now starting with my smallest backup set to save time (<300GB) and transferring it to create a duplicate. I will then run a complete verify on the dupe. If all is well, I will swap the new set into my scripts and forget the old one. I'll then run a backup and check that the files that were previously "broken" are freshly backed up. If *that* all goes well, I'll rinse and repeat for my other backup sets.
    Definitely a PIA and it will take quite a while, but hopefully it will scrub my backup sets to be clean and dependable. I'll report back when I finish my testing on this small backup set. Fingers crossed...
  6. I run a verify "on all backups not previously verified" monthly for each of my active backup sets just to keep tabs on their integrity. I have one backup set that just ran its monthly verify pass and reported 10 files with "chain broken" errors. This backup set has never (nor have any of my others) reported this error in the past. It has been in use for 2 years and has cleanly verified every month until now. I upgraded from Retrospect 11 to 12.1.0.174 two weeks ago, so all previous clean verifies of this backup set were with the older version. Not sure if that is a coincidence or not. Bottom line, how do I repair this error? I can find no help online via countless searches. Will rebuilding the catalog solve it? Or do I need to do something else? Thanks!
  7. Emergency Restore of Win7 Boot Drive - Error 625

    Thanks for the explanation, very helpful.
  8. Emergency Restore of Win7 Boot Drive - Error 625

    Thanks Scillonian, that is exactly what I'm doing, after spending several hours frustrated because I discovered the hard way that a Windows 7 OEM install disk that I had easily at hand will not work with a Windows 7 upgrade license. It took me a while to find the Windows upgrade disks and reinstall yet again from scratch. The restore is in progress, and while waiting I have been rereading the Retrospect manual. The last time I did a disaster recovery from scratch was several years ago with a much older version of Retrospect. From what I can tell, instead of using the iso downloaded from the website, I should have used my copy of Retrospect to create one? Do you know if Retrospect on the recovery disk would then be version 11 and not give me out-of-memory errors? I guess once I am back up and running I will create a disk, boot from it, and see what is on it.
  9. I detected corruption of my Win7 backup server's boot C: drive and set about to do an emergency restore using my Retrospect 11 emergency recovery CD. This is a local restore, since the file backup sets and catalogs are on locally attached USB hard drives. The Win7 machine has 8GB memory. Using the recovery CD I:
    - Choose Restore
    - Switch to Advanced Mode
    - Select Restore an entire volume
    - Choose any of the appropriate backup sets; there are 1703 sessions for this machine
    The Pass 1 progress pie starts going and it gets to matching 36,300 of 512,000 files, at which point I get the error "Sorry, restore preparation failed, error -625 (not enough memory)". Just as a test, I tried selecting much smaller backup sets for other volumes that have only 2000 files and I get the same error. Another data point is that I downloaded the emergency recovery iso from http://www.retrospect.com/en/products/bmr and plugged in my Retrospect Pro 11 license key. When the recovery environment executes, the version of Retrospect is 7.6.205. What can I do to restore my backup server's system partition?? This is critical.... Thanks!! Dave
  10. @haggis999 - Thanks for the feedback. Yeah, I think something is foobar in my setup. I definitely have Retrospect set to Exit after a backup job with a lookahead time of 1 hr. But when it is launched from the background retrorun service, it never exits. I have no scheduled jobs pending on the horizon, the retrospect.exe process is still running (quiescent, with minimal memory and 0% CPU), and only the retromonitor will launch from my interactive session. I have tried retrorun both logging in as my account and as the local system, but the behavior is the same. For now, my only solution is to keep a login session active on the server and keep the full Retrospect GUI running at all times. I'll contact support and see what they say.
  11. I am wrestling with this issue as well, so please post back when you find a solution. What has worked partially for me so far is to:
    - Configure the Retrospect Launcher service (retrorun) to log in under my account instead of the default SYSTEM
    - Configure Retrospect to log in as "current user" instead of explicitly specifying my account
    - Disable UAC on the backup server
    I have verified that Retrospect now launches automatically and runs the scheduled backup jobs successfully under my account. However, the full UI is not available for some reason, which is a real PIA. All I get is the totally useless little monitor window. I have no way to look at the current logs in progress, check on the completion ETA, or see whether the job is prompting for media! Checking in Task Manager shows that Retrospect is running under my account, but the monitor window still says that "Retrospect is already running in another user's session". My next step is to see what happens if I have the full Retrospect UI always launched and open on the backup server, with "Stay in Retrospect" enabled.
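    To double-check which account things are actually running under, something along these lines works from a Cygwin (or plain cmd) prompt; "Retrorun" here is a placeholder, use whatever your launcher service is actually named in Services:

        # show the launcher service configuration, including SERVICE_START_NAME,
        # i.e. the account it logs in under
        sc qc Retrorun

        # show which user the Retrospect process is running as
        tasklist /v /fi "imagename eq Retrospect.exe"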
  12. v7 Recatalog with Missing Files

    Does anybody have an answer on this one? After rebuilding my catalog, I am now missing quite a few of my oldest backup sessions. So, obviously, Retrospect completely ignored the 21 RDB files and provided me no way to have them recognized during the catalog rebuild. How can I rebuild my catalog and have *all* of my RDB files included????
  13. v7 Recatalog with Missing Files

    Hi, I am running Retrospect v7.0.344 on WinXP Pro. I back up all of my clients to disk backup sets located on an array of external USB drives. I use the Retrospect grooming policy on all of the backup sets. A couple of days ago I discovered that one of my catalog files is corrupt, which is when the fun started trying to rebuild the catalog.
    - I choose Repair Catalog > Recreate from disks > All Disks
    - I select the directory that contains all of the RDB files for the backup set
    - I am prompted for the backup set to repair
    - I answer No to the prompt on whether there are more disks in the backup set
    So far, so good. Now I am prompted whether to "Continue with the recatalog with missing files?" I select "View to see the missing files and look for them". At this point, I get a list of 21 supposedly missing RDB files, yet I have manually verified that all of them do exist in the backup set directory! FWIW, this particular disk backup set's directory contains 1052 RDB files total. The kicker is that despite the original dialog's hopeful announcement that I would be able to "look" for these "missing" files, the only option I am given is to click OK, at which point I am back to where I started. My only true option seems to be to "Continue with the recatalog", and the information in the supposedly missing files will "no longer be available". What am I doing wrong, and why can't I get the catalog rebuild to recognize the existence of these 21 RDB files that clearly exist in the disk backup set directory? Thanks for any help or advice on this very frustrating issue.
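    For anyone checking the same thing on their own set, something like this from a Cygwin shell will count the .rdb files and flag any that can't actually be opened (the path is just a placeholder for the backup set's data folder):

        # total number of .rdb files in the backup set folder (1052 in my case)
        find "/cygdrive/g/Retrospect/Backup Set A" -iname '*.rdb' | wc -l

        # try to read the first byte of each .rdb file and report any failures
        find "/cygdrive/g/Retrospect/Backup Set A" -iname '*.rdb' -print0 |
          while IFS= read -r -d '' f; do
            head -c 1 "$f" > /dev/null || echo "unreadable: $f"
          done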
  14. Disaster Recovery CD-R disc image too big

    Maybe these links can help: http://kb.dantz.com/display/2n/_index1.asp?tab=faq&r=0.829632 http://forums.dantz.com/ubbthreads/showflat.php/Cat/0/Number/98261/an/page/page//vc/1
  15. Complete system restore

    When I am in the office, my laptop sits on a shelf and acts as a file server to my desktop. Therefore, I only use the desktop environment on my laptop when I need to grab it and go for a business trip. However, the other day I did briefly check and it seems that all of my Intel wireless profiles are missing after the restore! I have not tested any Office apps yet, but I have not experienced any of the other issues that you describe.