Everything posted by Jim_Correia

  1. I'm back to using Retrospect after a hiatus. There are certain directories I want to exclude from all backups. I want to match these directories, with wildcards, independent of the (sub)volume being backed up. (In other words, some backup sets back up the entire machine, others just /Users.) In SuperDuper! I'd match these with the rules

         ignore /Users/*/Music
         ignore /Users/*/tmp

     How can I get this functionality in Retrospect (without false positives)? And how can I quickly test the selector? (It appears that I must do a full disk scan to validate a selector. While this gives me a complete world view, it is also incredibly time consuming while trying to get things right.) Any suggestions appreciated. Jim
  2. So is there any way to write the selector I need? I want to match (so that I can ultimately exclude from the main backup):

         /Users/*/Movies
         /Users/*/Music
         /Users/*/tmp
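Until Retrospect offers a quicker way to test selectors, you can approximate the wildcard matching in the shell before committing to a full disk scan. This is only a sketch using plain glob patterns in a `case` statement, not Retrospect's selector engine, so treat any result as an approximation:

```shell
#!/bin/sh
# Rough dry-run for the exclusion rules above: shell glob matching, not
# Retrospect's selector engine, so treat results as an approximation.
# Note: in a case pattern, "*" also matches "/" (it crosses directory
# boundaries), so these patterns are looser than a per-component wildcard.
would_exclude() {
  case "$1" in
    /Users/*/Movies/*|/Users/*/Music/*|/Users/*/tmp/*) return 0 ;;
    *) return 1 ;;
  esac
}

would_exclude /Users/jim/Music/track.aiff && echo excluded    # prints "excluded"
would_exclude /Users/jim/Documents/report.doc || echo kept    # prints "kept"
```

Feeding a handful of representative paths through a helper like this gives a quick sanity check on the patterns themselves, even though only Retrospect's own Check Selector can give the definitive answer.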
  3. Quote: Retrospect supports "*" and "?" wildcards as the absolute path matching conditions when you define your own selector. Please go to Configure > Selectors > New; you can add your matching rules via the Windows (FAT/NTFS) > Path option. Yingqing

     Yingqing, I'm running on Mac OS X (Retrospect 6.1). There is no "Windows (FAT/NTFS) > Path" option. Is there equivalent functionality on the Mac side?

     Quote: And then choose the "Check Selector" icon (a tick icon) from the selector toolbar to specify a volume/subvolume to check your selector. Hope it helps. Yingqing

     The Mac version has a Check Selector command, but it scans the entire volume/subvolume, which makes for painfully slow testing. I know this is the only way to get the complete picture, but when building a selector, testing against an individual file/folder for a quick check would be extremely useful.
  4. Quote: I don't think it matters if a SCSI tape drive is _more_ prone to media-related errors than writing to a File Backup Set stored on a remote CIFS volume; unless the latter is 100% immune to such errors, I'd go with the secondary check.

     I'm using AFP, but point taken. I agree that to be completely paranoid is to verify the backup set. My question was basically this: TCP/IP gives us checksummed, effectively error-free transfer over the wire, and we trust hard disk drives to read and write our files all day long without high-level software verification. (In fact, if we can't trust reads, we are doomed, because we need to read the files in order to back them up.) So is the Retrospect case any different; what is the verification protecting us from? A hard disk error that would go unnoticed in other circumstances? A bug in Retrospect itself? Meanwhile, I'll be paranoid and leave verification on.

     Quote: Sadly, there is no way to filter out noise in the Operations Log.

     Some sort of smarts, similar to the Summary Service built into OS X, would be very welcome indeed. I'll add it to my long list of Retrospect wishes.
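For what it's worth, the extra protection a verify pass adds over TCP's checksums can be illustrated with a read-back comparison: re-read both copies from disk and compare checksums, which catches a bad write that the network layer could never see. A minimal sketch using POSIX cksum (the file paths are placeholders, and this is not how Retrospect itself verifies):

```shell
#!/bin/sh
# Re-read source and copy and compare checksums. TCP protects the bytes in
# flight; a check like this catches corruption introduced when the bytes
# actually hit the destination disk.
verify_copy() {
  [ "$(cksum < "$1")" = "$(cksum < "$2")" ]
}

# Placeholder usage with throwaway files:
printf 'some data\n' > /tmp/src.txt
cp /tmp/src.txt /tmp/dst.txt
verify_copy /tmp/src.txt /tmp/dst.txt && echo verified    # prints "verified"
```

The design point is that both sides are re-read from storage after the copy, which is exactly the failure window the in-flight checksums don't cover.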
  5. I'm backing up to a file-based backup set over AFP to an Infrant ReadyNAS NV+ RAID. Is there any value in leaving verification turned on? That is, is it likely to find a problem that isn't the result of the source file legitimately changing between the time it was backed up and the time it was verified? The obvious advantage to turning it off is that it will halve the backup time :-) Jim
  6. Dave, No need for the snarky portion of your reply. It wasn't a wise-ass question. Certainly there is value in the feature in at least some situations. (For example, when backing up to tape, where media-related errors are much more frequent than when writing to a hard disk over a reliable TCP/IP link.) I just want to know if I'm getting real value for the time spent verifying in my particular situation. (Recycle backups take longer than overnight with my data set.) If there is a real error (i.e. not a file which was legitimately updated between the copy and verify phases), is there a reliable and quick way to spot it in the error log besides the usual eyeball approach?
  7. I was excited to see that selectors had been enhanced in 6.0, but then disappointed when I grabbed the trial. Unless I'm missing something, full path selectors are still impossible to write if you sometimes back up /Users and sometimes back up the entire disk, because the path must start with the volume and the volume cannot contain wildcards. What I want to do is match

         /Users/*/Music/*
         /Users/*/tmp/*

     I don't want to create a selector for every user, nor one with each volume/subvolume at the start of the path. (And for the music case I don't want to rely on file type - there could be sound files in application packages, GarageBand packages, etc. that need to be backed up.) Is this impossible, or is there a straightforward way to do this? Jim
  8. Quote: Jim_Correia said: Is there a demo or trial where I can look at the new file selectors before upgrading? (If you look at some of my past posts, file selectors are a sore spot with me, and if they solve my problem I'd upgrade right away.)

     I found the trial page on the site. Jim
  9. New and Improved File Selectors (Filters): Existing file type selectors have been updated and new selectors have been added to make it easier to precisely select files and folders to be backed up.

     Is there a demo or trial where I can look at the new file selectors before upgrading? (If you look at some of my past posts, file selectors are a sore spot with me, and if they solve my problem I'd upgrade right away.) Jim
  10. Is it possible to write a selector which will match /Users/*/Music/ that I can use across backup sets - some of which back up the whole volume and some of which back up subvolumes? -- Re: pricing - it appears the upgrade is $60 but the annual contract is $240; that is a factor of 4, not "very close." Is there some other pricing option that I am missing?
  11. Does this update have true Unicode filename support? Does this update have improved support for selectors? I want to write one selector to match folders by absolute path with wildcards - /Users/*/Music/ - and have it match whether I am backing up the whole disk or a subvolume. Thanks, Jim
  12. Speed of packet writing

     FWIW... I almost exclusively do backups using my DDS-3 tape drive. However, I also have a Yamaha 8424s CD-R drive. Using that drive with Retrospect 5.0 on Mac OS 9.2.2 gives me ~50 MB/min backup speeds. On Mac OS X I only get <10 MB/min backup speeds. (The performance of tape drive and file system backups doesn't show a huge discrepancy between the two OS versions.)
  13. >>> I'm aware of this. It was simply quicker to run the terminal as root. To kill one process, running the terminal as root doesn't strike me as dramatically more dangerous than sudo-ing to root from the terminal to kill the same process. Since my normal login account is not an administrative account (security), it's less convenient to log in to the terminal as an administrative account, then sudo to root, just to kill one process. <<<

     I still think it is more appropriate to do

         su adminUser
         sudo kill pid

     rather than run the terminal as root, but it is off topic and I don't care what you do to your own machine :-)
  14. > I just ran the terminal as root (using Brian Hill's "Pseudo" application), and killed the process

     Don't do that. There should rarely be a need to run a GUI application as root, and there should never be a need to do this to the Terminal. Use

         sudo kill pid

     for a one-shot command, or

         sudo -s

     to get a root shell.

     > Then I launched Retrospect, and RetroRun came back alive using about a meg and a half of real RAM

     Are you reading the right column in top? My RetroRun process starts out with 968K of private memory and grows from there.

     > Now, if I can just remember to kill the sucker once a week or so until Dantz comes up with a patch...

     If you are really worried about it, you could write a script which finds the RetroRun pid, kills it, and restarts it, and make it a daily (or more frequent, if necessary) cron task.
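A minimal sketch of such a cron-able script. The process name "RetroRun", the ps flags, and the install path in the crontab comment are all assumptions to adjust for your system; Retrospect (or the startup item) relaunches the daemon later, so this only kills it:

```shell
#!/bin/sh
# Kill RetroRun so a fresh copy (without the leaked memory) takes over.
# Assumes the process shows up in ps with the command name "RetroRun".

# Print the pid of the named process, given "pid command" lines on stdin.
pid_for() {
  awk -v name="$1" '{ n = split($2, parts, "/") }
                    parts[n] == name { print $1; exit }'
}

pid=$(ps ax -o pid=,comm= | pid_for RetroRun)
if [ -n "$pid" ]; then
  kill "$pid"    # Retrospect restarts it on next launch / reboot
fi

# Hypothetical crontab entry to run this nightly at 3:15:
#   15 3 * * * /usr/local/bin/kick-retrorun.sh
```

Splitting the pid lookup into a small stdin-reading function keeps the ps parsing testable separately from the kill itself.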
  15. > Has anyone tried just killing off RetroRun?

     Yes.

     > Will Retrospect restart it after it's been killed

     Retrospect will restart it the next time you launch the GUI application. It will also automatically restart the next time you restart the machine. (If you get rid of the startup item, Retrospect will write it back out the next time you launch the GUI application.)

     > Or is there a way to restart it from, e.g., the command line?

     You can certainly restart it from the command line. I run all my backups by hand right now, and I think it is only used for unattended backups. (I don't have the hard copy of the manual yet and haven't perused the PDF completely.)
  16. Turns out that won't work, per se. Retrospect recreates the startup item at every launch. (A cron task that periodically shoots the process will probably work, until such time as the leak is fixed, if you aren't using timed scripts.)
  17. I strictly do all of my backups by manually launching Retrospect and executing a script off the Run menu. Can I just remove the RetroRun startup item until such time as the memory leak is fixed? Is it used only for timed execution?