About jetownsend

  1. I've written *nix filesystem code, most recently for handling 40G of log data imported every 24 hours from 300K servers. I agree that it's quite likely that a file's metadata will change while the contents of the file do not. Someone reads the file or makes a copy of it, and that changes the read date, the location, and the creation time; the *contents* of the file, however, don't change. (If the contents did change, I would be in big trouble with the people reading the stats data I cranked out!) If the MD5 of the file doesn't change but the metadata changes, then back up only the metadata changes with a pointer to the original file. Let's say I distribute a UNIX kernel to 300 machines. It's the same file, but the creation dates and modification dates will all differ by seconds, if not minutes, on each of those machines. Do I really need to back up 300 copies of the kernel file? Or do I need 300 copies of the metadata and one copy of the kernel? (I'm happy to work with Retrospect on a fix for this; I'm a consultant/contract *nix person who started when Bush I was POTUS.)
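The metadata-plus-pointer scheme described above can be sketched as a tiny content-addressed store. This is a minimal illustration, not Retrospect's actual design; the `DedupStore` class and its `backup` method are names I made up for the example. One blob is kept per unique content hash, and each file instance gets only a small metadata record pointing at that blob.

```python
import hashlib

class DedupStore:
    """Toy content-addressed backup store: one blob per unique content
    hash, one small metadata record per file instance (hypothetical)."""

    def __init__(self):
        self.blobs = {}      # sha256 hex digest -> file contents
        self.metadata = []   # per-instance records pointing at a digest

    def backup(self, path, contents, mtime):
        digest = hashlib.sha256(contents).hexdigest()
        # Store the contents only if this digest has never been seen.
        if digest not in self.blobs:
            self.blobs[digest] = contents
        # The metadata record is always stored; it is tiny by comparison.
        self.metadata.append({"path": path, "mtime": mtime, "blob": digest})
        return digest

# The kernel example: one payload, 300 machines, differing mtimes.
store = DedupStore()
kernel = b"\x7fELF..." * 1000
for host in range(300):
    store.backup(f"host{host}:/boot/vmlinuz", kernel, mtime=1000 + host)

print(len(store.blobs), len(store.metadata))   # prints "1 300"
```

Despite 300 differing modification times, the payload is stored exactly once; restoring any instance means fetching the blob by digest and reapplying that machine's metadata.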
  2. You're suggesting this option will never work, that it's impossible to reliably recognize a file as a duplicate using a SHA or MD5 hash of the file's contents. I've worked with *nix filesystems for quite some time and have had few problems using hashes to recognize identical files. Storing related metadata for a file, say the last access time, is not rocket science. This also happens when the files are on the same machine. In my second example above: if I have /Volumes/foo/data and back that up, then make a copy as /Volumes/bar/data, why would /Volumes/bar/data get a full backup? Simple hash checks would recognize these as duplicate files. An even simpler case: I download a 4G .iso and it shows up in ~/Downloads. I copy that file to ~/Public so I can easily download it from other machines on our local network. There have been zero changes to the file; it's still the same 4G .iso, but the copy in ~/Public will get a full backup as if it were a unique file.
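The ~/Downloads to ~/Public case is easy to verify by hand. A minimal sketch (using small temp files as stand-ins for the real 4G .iso; `file_digest` is an illustrative helper, not part of any backup product):

```python
import hashlib, os, shutil, tempfile

def file_digest(path, algo="sha256"):
    """Hash a file's contents in 1 MB chunks, ignoring all metadata."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-ins for ~/Downloads/disk.iso and ~/Public/disk.iso.
tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "disk.iso")
with open(original, "wb") as f:
    f.write(os.urandom(1 << 16))

copy = os.path.join(tmp, "copy.iso")
shutil.copy(original, copy)   # new path, new timestamps; same bytes

print(file_digest(original) == file_digest(copy))   # prints "True"
```

The copy has a different path and different timestamps, yet the content digests match, which is exactly the signal a "don't add duplicate files" option would need.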
  3. Retrospect 11, version 11.5.3 (103), on OS X 10.9.5 (13F34), backing up a collection of OS X and Linux clients. I have "Don't add duplicate files to the Media Set" checked, but duplicate files are being added to media sets. Two examples:
     - A folder of distribution images, .iso and .dmg files that are often 4G DVD images. They are backed up on the host system by the script "home", and clients are also backed up using the "home" script. If I copy one of these images to a client for installation, it is backed up by the "home" script to the same media set holding the host system backups.
     - A folder of binary content that is rarely modified, but to which new content is added. I moved this folder to a larger filesystem and added the new location to the script, and the next backup included all of the binary content in the folder. It behaved as if I'd started a new backup with a new media set.
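Both examples above boil down to the same pre-copy check: group candidate files by content digest and treat any group of two or more as duplicates, wherever they live. A hypothetical sketch of that pass (`find_duplicates` is my name for it, and the demo paths are invented):

```python
import hashlib, os, shutil, tempfile
from collections import defaultdict

def find_duplicates(root):
    """Group files under `root` by content digest; any group with more
    than one path is a set of byte-identical duplicates."""
    by_digest = defaultdict(list)
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_digest[h.hexdigest()].append(path)
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}

# Demo: the same "image" in two places, plus one unique file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Public"))
with open(os.path.join(root, "disk.iso"), "wb") as f:
    f.write(b"dvd image payload")
shutil.copy(os.path.join(root, "disk.iso"),
            os.path.join(root, "Public", "disk.iso"))
with open(os.path.join(root, "notes.txt"), "w") as f:
    f.write("unique")

dupes = find_duplicates(root)
print([len(paths) for paths in dupes.values()])   # prints "[2]"
```

A backup engine honoring the checkbox would copy one member of each group and record pointers for the rest; moving a folder to a new filesystem changes every path but none of the digests.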
  4. Is phone support still available? Retrospect 8.1 is almost unusable for me many days of the week; I am beginning to regret upgrading from 6. We're getting -530 messages on multiple clients, all of which are online and idle. The backup server is a PowerPC G5 host, 10.5.8, Retrospect 8.1 (626). We drive it from an Intel/OS X client. The first client is a Win XP box ("gir"); the rest are OS X clients. The error messages even appear before a successful backup of one of the clients:

     + Normal backup using HomeRegular at 1/4/2010 10:18 AM (Execution unit 1)
       To Media Set Home [001]...
     > Can't access volume Documents and Settings on gir, error -530 ( unknown)
     > Can't access volume Library on lenore, error -530 ( unknown)
     > Can't access volume Users on lenore, error -530 ( unknown)
     > Can't access volume Users on sliver, error -530 ( unknown)
     > Can't access volume Library on sliver, error -530 ( unknown)
     > Can't access volume Installers on sliver, error -530 ( unknown)
     - 1/4/2010 10:18:56 AM: Copying Installers
       1/4/2010 10:18:56 AM: No files need to be copied
       1/4/2010 10:27:24 AM: Snapshot stored, 2232 KB
       1/4/2010 10:27:37 AM: Comparing Installers
       1/4/2010 10:27:57 AM: Execution completed successfully
       Duration: 00:09:01 (00:08:34 idle/loading/preparing)

     In Retrospect 6 there was a recurring problem where having an XP host first in the list could cause problems. Is there a way to re-order the list in 8 to rule this out?
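When triaging -530 ("client not found") errors like the ones in the log above, it can help to confirm basic reachability of each client outside of Retrospect. The Retrospect client has traditionally listened on port 497; a small sketch, with the function name and the final demo call being my own illustration (the demo probes a deliberately closed local port rather than a real client):

```python
import socket

def client_reachable(host, port=497, timeout=5.0):
    """Return True if a TCP connection to the client port succeeds,
    False on refusal, timeout, or DNS failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In the setup above one would probe each client by name, e.g.:
#     for host in ("gir", "lenore", "sliver"):
#         print(host, client_reachable(host))
print(client_reachable("127.0.0.1", port=1))   # closed port: prints "False"
```

If a client answers on the port but Retrospect still reports -530, the problem is more likely in the client list ordering or the client database than in the network, which matches the forget/re-add workaround described below.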
  5. The problem is back. Backups fail when Retrospect tries to talk to the XP client; even going to the "Configure" tab triggers the error message. I did a Forget on the XP client and added it back, and I'm able to back things up without errors. Maybe since there's a workaround, this is low priority for being fixed?
  6. Shows you how long I've been using Retrospect. I know EMC owns them now, but "Dantz" is permanently burned into my head...
  7. Heya, Just started having this problem today. When I launch a backup I almost immediately get the elem16.c-687 message and the backup ends. I can also trigger the error by going to the "Backup Client Database" tab, selecting the first system that would normally get backed up, then selecting "Configure". (That is, Configure->Clients->(client name)->Configure.) Suspecting that dropping/adding the first client would fix the situation (it always has in the past), I first made a tarball backup of /Library/Preferences/Retrospect. I then dropped and re-added the first client, and now backups are working again. Assuming I get permission, I'm happy to provide the before/after copies of the Retrospect Preferences directory and all my system configuration details to someone at Dantz.
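The "tarball the prefs before touching anything" step above is worth automating. A hedged sketch (the `snapshot_dir` helper and archive naming are mine; the demo uses a temp directory as a stand-in for /Library/Preferences/Retrospect):

```python
import os, tarfile, tempfile, time

def snapshot_dir(src, dest_dir):
    """Write a gzipped tarball of `src` into `dest_dir`, stamped with
    the current time, and return the archive's path."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = os.path.join(dest_dir, f"{os.path.basename(src)}-{stamp}.tgz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=os.path.basename(src))
    return archive

# Stand-in for /Library/Preferences/Retrospect in this demo:
prefs = tempfile.mkdtemp(prefix="Retrospect-")
with open(os.path.join(prefs, "retro.plist"), "w") as f:
    f.write("prefs")

archive = snapshot_dir(prefs, tempfile.mkdtemp())
print(os.path.exists(archive))   # prints "True"
```

Taking a stamped snapshot before and after the forget/re-add makes it trivial to diff the two preferences trees, which is exactly the before/after evidence offered to Dantz above.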