
Data loss during Duplicate function



During an upgrade to our RAID, I had to copy all of the data off the RAID to a 200GB external drive. The best way to do this unattended was to use Retrospect's Duplicate function. It was a large amount of data (100GB or more).

 

 

 

The Duplicate seemed successful, no errors in the Log.

 

 

 

After copying the files back to the newly upgraded RAID, my production artists found many files (images) missing (hundreds of images). There is no trace in the Log of what happened to these images. The files do not exist on the hard drive that Retrospect duplicated to.

 

 

 

I am having to recover the entire image database from DLT1 backups in order to put things right.

 

 

 

This is a HUGE deal and I need some answers!


In reply to:

The Duplicate seemed successful, no errors in the Log.


 

 

 

Did you have Verification enabled on the Duplicate (the default setting)?

 

 

 

After Retrospect writes files in a Duplicate, it goes back to the Source and reads each file, one by one, and compares it with what it wrote to the Destination. If files on the Source are not found on the Destination during the Compare phase, Retrospect would most surely log the error.
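That compare pass can be approximated by hand. Here is a minimal sketch, using throwaway temp directories as stand-ins for the Source and Destination (this is not Retrospect's actual mechanism, just the idea): walk the Source and compare each file byte for byte against its copy.

```shell
# Rough sketch of a verify pass, using throwaway demo directories.
# (Retrospect's real compare phase is internal; this only shows the idea.)
SRC=$(mktemp -d)   # stand-in for the Source volume
DST=$(mktemp -d)   # stand-in for the Destination drive
echo "image data" > "$SRC/photo.tif"
cp "$SRC/photo.tif" "$DST/photo.tif"

# Walk the Source; flag any file that is missing or differs on the Destination.
( cd "$SRC" && find . -type f ) | while read -r f; do
  cmp -s "$SRC/$f" "$DST/$f" || echo "MISMATCH: $f"
done
echo "verify pass finished"
```

A file that was never copied, or that differs, would show up as a MISMATCH line.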

 

 

 

How many total files were in the Duplicate?

 

 

 

Dave


I had verification on for some of the scripts. The script for this particular volume did not have it enabled. To my knowledge, that should not make a difference: I was duplicating a volume, and that's what it should have done. If not having verification on caused this, it should not be an option to turn it off.


> I had verification on for some of the scripts.

 

 

 

a) Scripts? Do the scripts covering the volumes with missing files use any selection criteria?

 

 

 

b) What source did you use for the duplicate: a Retrospect client installed on the RAID server, or file sharing? If you did it via file sharing, how did you have the volume mounted? File sharing privileges come into play when you copy via file sharing, so it matters which username was used to mount the volume.

 

 


It's very simple:

 

 

 

I have a RAID (RAID 0) with several volumes (partitions). I needed to upgrade that RAID (to RAID 5) with new hard drives and had to make a copy of all the data before doing so. I decided to use Retrospect's Duplicate feature because I was transferring the RAID data to an external firewire drive, so the backup feature was not an option. I set up folders on the firewire drive with names identical to the volumes I was copying. I then set up a schedule for each volume to create a duplicate on the firewire drive, giving each volume enough hours to complete (volumes ranged from 9GB to 28GB in size). The server software (AppleShare IP) was on during the duplication, but I was the only user allowed access.

 

 

 

I knew approximately how long each volume would take because of a previous attempt to make the copies manually (drag and drop). I figured the Retrospect approach was better since each volume could be done independently and unattended, and I would also have a log to reference.

 

 

 

Verification would have taken too much time (which I did not have), so I disabled it on a couple of the schedules after reviewing the ones that had already finished.

 

 

 

It's not that complicated. I just want to find out why it did not copy a significant number of files.


In reply to:

It's not that complicated,


 

 

 

Perhaps not, but your messages have not yet made clear exactly what you did.

 

 

 

This particular Forum board is for Retrospect running on Mac OS X. Is that what you're doing?

 

-What machine has ASIP?

 

-What version of Mac OS?

 

-What version of Retrospect?

 

-Was Retrospect running locally on the Macintosh to which the RAID was connected?

 

 

 

John's question is quite important: was your Duplicate script configured to select "all files"?

 

 

 

>>This is a HUGE deal and need some answers!

 

 

 

That's gonna be hard without knowing more about the question.

 

 

 

But the best advice I can offer is that before you intentionally delete data (by erasing the Source drive(s)), you should make every effort to be sure that all your data has been copied.

 

 

 

The most obvious way to do this would be to allow Retrospect to compare the Source with the Destination directly after the files are copied.

 

 

 

Having the Verify option turned off did not _cause_ your files to be omitted during the Duplicate. But it would have alerted you to the difference, unless your script was configured to omit the files (something often referred to as "user error").

 

 

 

 

 

You can easily compare the number of files in a directory in OS X using the following Terminal command (thanks to MacOSXHints.com):

 

 

 

cd /path_to_directory_containing_folder_to_count/

 

find "folder_to_count/" | wc -l

 

 

 

The resulting number is all the files and folders, including invisible unix files.

 

Do this on both the Source and Destination to see if they're the same.
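Going one step further, diffing the two listings shows which paths are missing rather than just how many. A sketch using small demo folders as hypothetical stand-ins for the Source volume and the FireWire duplicate (substitute your real paths):

```shell
# Build two demo folders standing in for the Source volume and the
# FireWire duplicate, then diff their listings to spot files never copied.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/images" "$DST/images"
touch "$SRC/images/logo1.tif" "$SRC/images/logo2.tif"
touch "$DST/images/logo1.tif"          # logo2.tif was never copied

( cd "$SRC" && find . | sort ) > /tmp/source_list.txt
( cd "$DST" && find . | sort ) > /tmp/dest_list.txt

# Lines prefixed with "<" exist only on the Source -- the missing files.
diff /tmp/source_list.txt /tmp/dest_list.txt || true
```

In this demo, the diff flags `./images/logo2.tif` as present only on the Source side.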

 

 

 

Dave


Source: AppleShare IP Server 6.3, Mac Server G3 w/OS 9.0.4, RAID with 5-24GB

 

 

 

Retrospect system: PB G4, OS X (Jaguar), Retrospect 5.0.238, LaCie 200GB firewire drive, logged into the server via network (100BASE-T).

 

 

 

Scripts were written to make duplicates of the volumes on the server to the external LaCie drive. No files were omitted; All Files was selected. Verification was enabled on 2 of the 5 scripts; I disabled it on the last three to run because of the time I had to get the data duplicated. Duplication was to a clean, formatted hard drive, so no files were being replaced. Folders with the same names as the volumes being duplicated were created on the drive.

 

 

 

I did get this error on one script: "Scanning incomplete, error -127 (volume corrupt?)". This was a folder of logos, which I copied manually to ensure all were duplicated. The only other errors questioned file creation dates; I also copied those files manually to ensure duplication.

 

 

 

The log for the script that did have files missing is as follows:

 

+ Duplicate using Catalog at 12/7/2002 11:58 AM

 

 

 

- 12/7/2002 11:58:15 AM: Copying Catalog Server…

 

12/7/2002 2:42:32 PM: Execution completed successfully.

 

Completed: 3962 files, 25.9 GB

 

Performance: 160.9 MB/minute

 

Duration: 02:44:17

 

 

 

+ Duplicate using 3M Server at 12/7/2002 5:00 PM

 

 

 

- 12/7/2002 5:00:16 PM: Copying 3M Server…

 

12/7/2002 6:57:09 PM: Execution completed successfully.

 

Completed: 11107 files, 17.5 GB

 

Performance: 153.1 MB/minute

 

Duration: 01:56:53 (00:00:02 idle/loading/preparing)


In reply to:

logged into the server via netork (100 BASE-T).


 

 

 

You didn't say it specifically, but should we assume that you logged into this server via AFP?

 

 

 

Accessing AFP volumes takes some special care with Retrospect running on OS X. As John asked, did you configure your volumes (as described in Retrospect's ReadMe file) and have Retrospect mount them?

 

 

 

I would also add that you could have done a File Backup Set instead of a Duplicate. Backing up to a File would probably have been faster, since in a Duplicate session Retrospect takes a lot of time individually setting the permissions of every file.

 

 

 

I know it sucks to work on a Saturday, when it's the only time you can take a server off-line. But you disabled Retrospect's built-in confirmation mechanism and traded security for comfort.

 

 

 

Dave


One unlikely thing to try is running Disk Utility on your firewire drive, on the chance that the files/folders reappear once a problem is repaired.

 

 

 

However, I'm still wondering about file permissions. I have not used ASIP for a while, but I think the permissions issue might be avoided when connected to the ASIP server as the administrator account. Were you connected as the ASIP administrator account? If so, I give up. :-)

 

 

 

The reason I am still asking is that the symptoms fit your description, and I haven't thought of anything more likely. To illustrate the problem with a small example, on my Mac OS 9 "server" using personal file sharing I create:

 

 

 

Share Me [shared folder, everyone: read & write]
    Public [everyone: read & write]
        public file
    Private [everyone: none]
        private file

 

 

 

I then connect as guest from my Mac OS X computer. Running a duplicate using Retrospect shows no errors, but the "private file" is not duplicated: although Retrospect can see that the folder exists, it cannot see the contents of the folder.

 

 

 

- 15/12/2002 12:28:39 AM: Copying Share Me…

 

15/12/2002 12:28:40 AM: Comparing trstest on Nine…

 

15/12/2002 12:28:40 AM: Execution completed successfully.

 

Completed: 3 files, 4 KB

 

Performance: 0.4 MB/minute (0.2 copy, 0.0 compare)

 

Duration: 00:00:01

 

 

 

Retrospect does not warn that folder contents are not reachable, although perhaps it should!
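That silent omission is easy to reproduce locally with plain permissions. A sketch, assuming it is run as a regular (non-root) user; root can read everything, so the hidden contents would still appear:

```shell
# Demo: a folder with no read permission hides its contents from a listing
# without raising a hard error -- the folder itself still shows up.
TOP=$(mktemp -d)
mkdir -p "$TOP/Public" "$TOP/Private"
echo hello  > "$TOP/Public/public_file"
echo secret > "$TOP/Private/private_file"
chmod 000 "$TOP/Private"

# As a non-root user, "Private" is listed but "private_file" is not.
find "$TOP" 2>/dev/null

chmod 755 "$TOP/Private"    # restore permissions so cleanup can proceed
```

A copy tool that walks the tree the same way would quietly skip the private file, just as the Retrospect log above shows.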

 

 

 

 


Archived

This topic is now archived and is closed to further replies.
