
Deduplication 7.7 vs. 8(.1)



Hello forum,

 

We are being hit hard by the fact that Retrospect 7.7 on Windows will back up everything even when the file has NOT changed at all. An altered ACL or location is enough to trigger copying a file. This does not usually matter much, but when rearranging 10 TB of data it really does hurt, especially because R7.x is quite slow and does not use the system resources to the fullest.
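For illustration only (this is not Retrospect's documented matching logic, just a sketch of the behaviour we are seeing): if the matcher's key includes the file's location and security metadata, a file that has merely moved, and therefore picked up new inherited ACLs, counts as new even though its bytes are identical. A toy Python sketch, with hypothetical field names:

# Toy sketch of two matching policies; the FileState fields are hypothetical, not Retrospect internals.
from typing import NamedTuple

class FileState(NamedTuple):
    path: str          # where the file currently lives
    size: int
    mtime: float
    acl: str           # NTFS security descriptor, e.g. as an SDDL string
    content_hash: str  # hash of the data stream

def needs_copy_strict(prev: FileState, cur: FileState) -> bool:
    # Re-copies when anything differs, including location or ACL.
    return prev != cur

def needs_copy_content_only(prev: FileState, cur: FileState) -> bool:
    # Only re-copies when the data itself differs.
    return (prev.size, prev.content_hash) != (cur.size, cur.content_hash)

before = FileState(r"D:\old\report.docx", 1024, 1.0, "OLD-ACL", "abc123")
after  = FileState(r"E:\new\report.docx", 1024, 1.0, "NEW-ACL", "abc123")

print(needs_copy_strict(before, after))        # True  -> the whole file is copied again
print(needs_copy_content_only(before, after))  # False -> only the metadata needs updating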

 

How does this behave in version 8 and later? Not that I am willing to risk everything by upgrading just yet. I did that once (7.6 -> 7.7) and lost thousands of euros' worth of working time, not to mention the frustration. Oops, did I mention the frustration?

 

I would like to know how R8 behaves in this respect. I'm more than ready to make the change and face whatever trouble it takes, since no product in our environment has proven more problematic than Retrospect, but I'm also willing to give it a chance.

 

If you have working knowledge to share on this, I would like to hear it. Other real-world information on R8.0 and later is welcome, too.


  • 1 month later...

We have the location options set as in the picture above, but we do back up security information, all of it. That seems to be the problem. When we move the files to another location, their ACLs also change (a different set of domain local groups has access, depending on the shared location). This change in ACLs triggers a backup of all the files, although their contents have not changed at all.

 

Retrospect cannot back up only the NTFS security information; it copies the whole files even when there is no other reason to back them up. I'll consider not backing up security information for certain locations, but I think this is something the backup solution should handle better.
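One way to confirm on disk that only the security information changed is to compare a content hash against the ACL text. A rough sketch, assuming both the old and the new share are still reachable; it uses only the Python standard library plus the stock Windows icacls tool, and the paths are hypothetical:

# Rough check: are two copies of a file byte-identical while their ACLs differ?
import hashlib
import subprocess
from pathlib import Path

def content_signature(path: Path) -> str:
    # Size plus SHA-256 of the data stream: unaffected by ACL or location.
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return f"{path.stat().st_size}:{digest}"

def acl_text(path: Path) -> str:
    # ACL as reported by icacls, with the path stripped so the outputs are comparable.
    out = subprocess.run(["icacls", str(path)], capture_output=True, text=True).stdout
    return out.replace(str(path), "", 1).strip()

old = Path(r"\\server\share_old\report.docx")  # hypothetical old location
new = Path(r"\\server\share_new\report.docx")  # the same file after the move

print("content identical:", content_signature(old) == content_signature(new))
print("ACLs identical:   ", acl_text(old) == acl_text(new))

If the first line prints True and the second False, the only delta the backup is reacting to is the security descriptor.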


  • 1 month later...

We've been plagued with similar problems. I hadn't associated them with ACL changes, so that's a nice clue. Thanks!

 

I'm not sure that completely explains the phenomenon.

 

When our backup set grows too large due to the redundant copies, I've tried doing a transfer backup set to a new set. That does seem to eliminate the redundancy, so apparently ACL differences among the copies in the original set do not cause the files to be duplicated in the new copy.

 

Then I reset the original backup set and do a transfer backup set from the copy to the empty original. That does result in a copy of the same size. But the next backup to the reconstituted set seems to once again copy most (but perhaps not all) of the source files to the backup set. There's usually a net space saving from this recovery process, but not nearly as much as there should be. Since these backups are a couple of terabytes, the process takes days to complete.

 

So I'm hoping someone has a good answer to the original question. Does v8 fix this problem? Or is there a good workaround?

 

Thanks,

 

  Henry

