Jim_Correia Posted November 22, 2006

I'm backing up to a file-based Backup Set over AFP to an Infrant ReadyNAS NV+ RAID. Is there any value in leaving verification turned on? That is, is it likely to find a problem that isn't the result of the source file legitimately changing between the time it was backed up and the time it was verified? The obvious advantage to turning it off is that it will halve the backup time. :-)

Jim
CallMeDave Posted November 22, 2006

Quote: Is there any value in leaving verification turned on?

No. Verification has no value, and it is only included in Retrospect to make backups take longer. \snark

Seriously: although normal operation will produce plenty of entries in the Operations Log, if for some reason the Backup Set isn't receiving valid copies of your files, the only way you'll know is through the verification pass. If your time constraints force you to choose between verification and backing up all your files, I'd of course choose the latter. But if you can squeeze out the time, you're better off letting the program do all its work.

Dave
Jim_Correia Posted November 22, 2006 (Author)

Dave,

No need for the snarky portion of your reply; it wasn't a wise-ass question. Certainly the feature has value in at least some situations. (For example, when backing up to tape, where media-related errors are much more frequent than when writing to a hard disk over a reliable TCP/IP link.) I just want to know whether I'm getting real value for the time spent verifying in my particular situation. (Recycle backups take longer than overnight with my data set.)

If there is a real error (i.e., not a file that was legitimately updated between the copy and verify phases), is there a reliable, quick way to spot it in the error log besides the usual eyeball approach?
CallMeDave Posted November 22, 2006

I don't think it matters whether a SCSI tape drive is _more_ prone to media-related errors than writing to a File Backup Set stored on a remote CIFS volume; unless the latter is 100% immune to such errors, I'd go with the secondary check.

Sadly, there is no way to filter out noise in the Operations Log. Some sort of smarts, similar to the Summary Service built into OS X, would be very welcome indeed.
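[Editor's note: absent built-in filtering, one rough workaround is to export the Operations Log as text and scan it yourself. A minimal sketch in Python, assuming compare failures contain phrases like "didn't compare" or a numeric "error -NNNN" code; the exact wording varies by Retrospect version, so the patterns are an assumption to adjust against your own log.]

```python
import re
import sys

# Phrases assumed to flag verification problems in an exported
# Retrospect Operations Log; tune these to the wording your version emits.
ERROR_PATTERNS = re.compile(r"didn't compare|error -\d+", re.IGNORECASE)

def compare_errors(log_lines):
    """Return only the log lines that look like verification failures."""
    return [line.rstrip("\n") for line in log_lines if ERROR_PATTERNS.search(line)]

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python filter_log.py exported_operations_log.txt
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        for line in compare_errors(f):
            print(line)
```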
Jim_Correia Posted November 23, 2006 (Author)

Quote: I don't think it matters whether a SCSI tape drive is _more_ prone to media-related errors than writing to a File Backup Set stored on a remote CIFS volume; unless the latter is 100% immune to such errors, I'd go with the secondary check.

I'm using AFP, but point taken. I agree that the completely paranoid approach is to verify the Backup Set. My question was basically this: TCP/IP guarantees error-free data transfer, and we trust hard disk drives to read and write our files all day long without high-level software verification. (In fact, if we can't trust reads, we are doomed, because we need to read the files in order to back them up.) So is the Retrospect case any different; what is the verification protecting us from? A hard disk error that would go unnoticed in other circumstances? A bug in Retrospect itself?

Meanwhile, I'll be paranoid and leave verification on.

Quote: Sadly, there is no way to filter out noise in the Operations Log. Some sort of smarts, similar to the Summary Service built into OS X, would be very welcome indeed.

I'll add it to my long list of Retrospect wishes.
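[Editor's note: the idea behind a verification pass is exactly this end-to-end check: read the copy back off the destination and confirm it still matches the source, catching silent corruption anywhere between read, transfer, and write. This sketch assumes nothing about Retrospect's internal format; it just illustrates the technique with a plain hash comparison of two files.]

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source, copy):
    """True if both files hash identically; a mismatch means the copy was
    silently corrupted somewhere along the way, even though every individual
    read and write appeared to succeed."""
    return file_digest(source) == file_digest(copy)
```

The point of reading the copy back through the normal I/O path is that it exercises the destination disk and network link after the write, which is precisely what a successful write status alone does not prove.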