
Hi,

Retrospect v9.0.1 running on a 10.7.3 Mini Server, clients either v6.x or v9.x running 10.5.8 or 10.6.8.

Backups go to external USB drives; on the clients only one folder ("• Folder Retro" on /) is backed up.

Problem: Retrospect always does a FULL backup. If e.g. client X has 55 GB to back up, the first run copies 55 GB, the next run another 55 GB, and the run after that 55 GB again. The old server with Retrospect 6.x ran fine, no such problems. Tested with clients v6 and v9, same problem.

Example:

 

+ Retrospect Version 9.0.1.401

Started on 01.03.12 13:59:41

+ Normal backup with Clients_DiDoSa on 01.03.12 (execution unit 1)

To Backup Set DiDoSa...

- 01.03.12 19:28:00: Copying • Folder Retro on User Xs MacPro

01.03.12 20:00:41: Snapshot stored, 637 KB

01.03.12 20:00:42: Comparing • Folder Retro on User Xs MacPro

01.03.12 20:27:59: Execution completed successfully

Completed: 1853 files, 55.3 GB <----- copied 55 GB

Performance: 1,918.3 MB/min (copying: 1,762, comparing: 2,106.3)

Duration: 00:59:59 (00:00:58 idle/waiting/preparing)

- […]

01.03.12 20:28:11: Execution completed successfully

Total performance: 1,917.8 MB/min

Total duration: 01:00:11 (00:01:01 idle/waiting/preparing)

 

A few minutes later, a second backup is started:

 

+ Normal backup with Clients_DiDoSa on 01.03.12 (execution unit 1)

To Backup Set DiDoSa...

- 01.03.12 21:17:28: Copying • Folder Retro on User Xs MacPro

01.03.12 21:50:25: Snapshot stored, 637 KB

01.03.12 21:50:27: Comparing • Folder Retro on User Xs MacPro

01.03.12 22:17:44: Execution completed successfully

Completed: 1853 files, 55.3 GB <----- copied the same 55 GB again!

Performance: 1,912.9 MB/min (copying: 1,751.1, comparing: 2,108.9)

Duration: 01:00:15 (00:01:05 idle/waiting/preparing)

- […]

01.03.12 22:17:56: Execution completed successfully

Total performance: 1,912.4 MB/min

Total duration: 01:00:27 (00:01:08 idle/waiting/preparing)

 

Though no files were changed, Retrospect copied the whole folder again, so 110 GB are now used on the backup target. Any ideas? This problem makes Retrospect 9 totally unusable for now...


I'm not using version 9 yet, but on version 8 there are some options you might want to check out:

"Match source files against the Media Set"

"Don’t add duplicate files to the Media Set"

"Match only files in same location/path"

 

"Match source files against the Media Set " is set in all scripts, the others not. Shouldn't that work - or what's the difference between "Match source files against the Media Set" and "Don’t add duplicate files to the Media Set"? The seconds sounds to me like an option to avoid multiple backups of the same file from different sources and directory locations (file based dedup)? The third option uses the file-ID, which is a problem if eg. a client gets a new hard drive. In this case all files will be backed up again because all file-ID's will have changed - which I want to avoid, harddrives are nowadays not a premium product at all...

The script that backs up the local file server shares (nested on a Pegasus RAID) has both "Match source files against the Media Set" and "Don’t add duplicate files to the Media Set" set - and works as expected. If I switch to only "Match source files against the Media Set", all files are backed up again - so the first option is ... totally useless, or rather doesn't work at all?

As a workaround I can set both options, but I wonder why "Match source files against the Media Set" on its own isn't working for either local backups or client backups.


The options are described in the manual.

 

Yes, I wonder too why it isn't working for you. I mean, this is such a basic feature that if it were broken we would have a LOT of posts like yours. But we don't. So what is different in your setup compared to "everybody" else's?


Silly me. You must have "Don't add duplicate files to the media set" checked.

 

The manual says no:

"Match source files against the Media Set: This option directs Retrospect to identify previously backed up files during normal backups. This function is a key component of Retrospect’s Smart Incremental backups. Retrospect compares the files on the source volume to file information in the Catalog for the destination Media Set.

The Mac OS file matching criteria are name, size, creation date and time, and modify date and time.

[...]

Retrospect considers a file already backed up if all of these criteria match."
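To make that concrete, here is a minimal Python sketch of attribute-only matching as the manual describes it (my own illustration, not Retrospect's code; on macOS, os.stat exposes the creation date as st_birthtime):

import os

# Hypothetical matching key built only from the criteria the manual lists:
# name, size, creation date/time and modify date/time. The file's actual
# content is never inspected.
def match_key(path):
    st = os.stat(path)
    return (os.path.basename(path), st.st_size, st.st_birthtime, st.st_mtime)

# "Retrospect considers a file already backed up if all of these criteria
# match" -- i.e. the key is already present in the Catalog.
def already_backed_up(path, catalog_keys):
    return match_key(path) in catalog_keys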

 

So the first option should be enough for an incremental backup without dedup. But the description of the second option states:

 

"If this option is deselected, Retrospect adds all files, including previously backed up files, to the Media Set every time a Normal Backup is performed."

 

Kinda weird and misleading, because then the first option doesn't make any sense to me.

Anyway, thx for pointing me to those funny options, I'll now set both, hoping that dedup works without too many problems. Does anyone know how it works? File hashes?


thx for pointing me to those funny options, I'll now set both

 

Since those two options are on by default for all new Scripts, you must have turned them off yourself.

 

 

 

I'll now set both, hoping that dedup works without too many problems. Does anyone know how it works? File hashes?

 

"dedup" probably isn't the right term to use (which is probably why Retrospect doesn't use it).

 

Retrospect won't copy a file to the Media Set if that file is already there, matching the criteria noted above.

 

That's it. It doesn't go through the existing files on a Media Set and de-anything.


"dedup" probably isn't the right term to use (which is probably why Retrospect doesn't use it).

 

Wrong.

"deduplication – A method for reducing the amount of data stored in a system by eliminating redundant data, replacing it instead with a pointer to the first-stored copy of that data. Retrospect employs a method of deduplication known as file-level dedu- plication or single-instance storage." - from the manual. So yes, they DO dedup on a file based method (not block based as many SAN do). They call it "smart", whatever, it's simply a standard way of deduplication for backups. But to be worse, they don't seem to use any hashes? Really bad idea. "The Mac OS file matching criteria are name, size, creation date and time, and modify date and time." It might be rare, but there is a good chance to lose data this way.

 

Retrospect won't copy a file to the Media Set if that file is already there, matching the criteria noted above.

 

That's it. It doesn't go through the existing files on a Media Set and de-anything.

 

Wrong. The criteria of the first option describe a plain, simple incremental backup (but also with path!); the second enables a method normally called file-based dedup - that simple. To stop Retrospect from doing deduplication, the option "Match only files in same location/path" must be set. But since that option also uses the HFS-specific file ID, I recommend against it, as stated above.
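For clarity, a sketch of the difference the third option makes to the hypothetical matching key from above (st_ino is the closest POSIX analogue of the HFS file ID):

import os

# Without the third option (sketch): the path is not part of the key,
# so identical files in different folders match each other (dedup).
def key_across_folders(path, st):
    return (os.path.basename(path), st.st_size, st.st_birthtime, st.st_mtime)

# With "Match only files in same location/path" (sketch): the full path --
# and, per the discussion above, the file ID -- join the key, so a match
# only occurs for the same file in the same place.
def key_same_location(path, st):
    return (path, st.st_ino, st.st_size, st.st_birthtime, st.st_mtime)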

Why they decided to force you to use the first two options for a normal incremental backup is beyond my understanding.


replacing it instead with a pointer to the first-stored copy of that data

 

Fine; Retrospect builds a Catalog and tracks files, copying only the first instance and noting the specifics of any additional matching file(s).

 

But that doesn't invalidate the next assertion, to wit "Retrospect won't copy a file to the Media Set if that file is already there..." It won't, unless you configure it to do so.

 

And the program _certainly_ doesn't delete anything already written to a Member of a Media Set; did I misunderstand you to be claiming that? Apologies if so.

 

But it's still unclear what exactly is beyond your understanding. With default settings the program behaves as advertised; a unique file is copied once, additional instances of that same file are noted in the catalog for later Restore but not copied again, and if you want different behavior there are options available. It does this during every Normal backup. What are the "two options" they are forcing you to use?

 

The program has always used a combination of file attributes and location to provide its feature set. No, it doesn't use any HFS-specific anything (due to their apparent goal of cross-platform compatibility) or hashes. Yes, it can get fooled (there were some early OS X updates from Apple that matched the file attributes while containing different binary data; something to do with the version control they were using at the time), and yes, that is rare. Probably a trade-off or two involved in the decision (Richard Zulch is a very, very smart engineer...).


And the program _certainly_ doesn't delete anything already written to a Member of a Media Set; did I misunderstand you to be claiming that? Apologies if so.

 

No, but it doesn't back up a file if another copy is already backed up - this is called deduplication. If you have several folders containing the same files, a dedup backup won't copy all of the duplicates. This is fine for saving space on the backup target, but it is also a bit risky if you don't use hashes (like SHA-256) and only rely on the filesystem attributes.
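A short sketch of what hash-based matching would look like, using SHA-256 as suggested (my illustration; nothing here is what Retrospect actually does): two different files can share name, size and timestamps, but not, in practice, a SHA-256 digest.

import hashlib

# Content-based matching key: read the file in 1 MB chunks and hash it.
def content_key(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()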

 

But it's still unclear what exactly is beyond your understanding. With default settings the program behaves as advertised; a unique file is copied once, additional instances of that same file are noted in the catalog for later Restore but not copied again, and if you want different behavior there are options available. It does this during every Normal backup. What are the "two options" they are forcing you to use?

 

The second option enables file-based dedup - which is not always a good idea, but that depends on the sources. But the first option only works if the second is also enabled.

 

The program has always used a combination of file attributes and location to provide its feature set. No, it doesn't use any HFS-specific anything (due to their apparent goal of cross-platform compatibility) or hashes.

 

Wrong, it uses the file ID if you enable the third option. The HFS-specific file ID is a very basic feature (and the reason why aliases work on a Mac), but relying on it is a bad idea if you ever need to exchange a drive, because all file IDs will have changed. Example: you have a client (or server) with a full 500 GB drive and give it a 1 TB one. Depending on how you clone that drive (e.g. not asr in block mode or dd), the file IDs of all files will have changed on the new drive, so - if the third option is enabled - Retrospect will copy everything from that client again. Each time you replace a drive you waste a lot of space on the backup targets.
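To see the effect described here, one can compare st_ino (which on HFS+ reports the catalog file ID) before and after a file-level copy to another volume; the paths below are of course hypothetical:

import os, shutil

src = '/Volumes/OldDrive/file.dat'   # hypothetical source on the old disk
dst = '/Volumes/NewDrive/file.dat'   # hypothetical copy on the new disk
shutil.copy2(src, dst)               # copy2 preserves timestamps and mode
print(os.stat(src).st_ino)           # file ID on the old volume
print(os.stat(dst).st_ino)           # a different ID on the new volume,
                                     # even though name, size and dates match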

