Everything posted by Maser

  1. I don't think Retrospect 10 changes this operation -- regardless of how you add the client (I add mine by hostname), if it sees the client anywhere on the network, it will try to back it up. I have this issue with clients who go travelling and then connect to the campus network over VPN -- they start getting backed up no matter how slow the connection is.
  2. In my testing, the RetroISAScans *folder* should never be deleted (unless you do it manually). The .dat/.inf file pairs for the volumes *might* be deleted if the process determines there was a problem with the files (after all, these files must be bullet-proof, since they contain the data the computer says should be backed up). If the .dat/.inf files are deleted (manually or automatically), it can take about an hour before the system realizes they are gone and starts recreating them (during which time a backup of the client would revert to the old behavior). In general, though, the .dat/.inf files for non-FireWire volumes should be there all the time (and update every 5-7 minutes automatically). I agree about the 5-minute CPU-load thing. Trust me, it was a *lot* worse before the GM release...
  3. Were there any errors in the log after you rebuilt the catalog? Can you do a non-scripted backup to the media set?
  4. Details I provided were with Retro 9. I have only tested grooming with Retro 10 on some test media sets -- not my production media sets yet. I have been gathering some data on my grooms for the past few weeks so I'll have some comparison data when I install the 10 engine (when I get my code -- I don't want to install it in trial mode...) My backups are stored on a Pegasus Thunderbolt RAID. My engine machine is now an i5 Mac Mini. Really -- increasing the speed of your CPU for your engine machine is probably the only thing that will make grooming faster (all other things being equal). Moving from FW800 to Thunderbolt had no effect on the grooming speed. I think I mentioned above how many files were in my media sets. The sets with more files (3M files) take much longer than the sets with fewer files (27K). I retain a fixed number of past backups (60 for most sets, 90 for one, 180 for another). I groom my two small media sets every Friday night and one large media set each Saturday night (these are scripted grooms.) The amount of time *really* depends on the number of files in the set. In my larger post above, my "B" set took 19 hours. My "C" set took 10 hours. (My "D" set will be groomed this weekend...) My B media set has 3.3M files, my C media set has 2.1M. My D set has 2.8M files, so I expect it to take closer to 19 hours than 10 hours to run its groom...
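To make that "closer to 19 hours than 10" expectation concrete: a straight-line fit through the two measured points above (2.1M files at 10 hours, 3.3M files at 19 hours) gives a rough prediction for the D set. This is purely a back-of-envelope sketch -- a linear model is my assumption, not how Retrospect's grooming actually scales.

```python
# Back-of-envelope estimate of groom time from file count, using the two
# measured points from the post above (C set: 2.1M files -> 10 h,
# B set: 3.3M files -> 19 h). Illustrative only -- the linear model is an
# assumption, and real groom times also depend on CPU and disk.

def estimate_groom_hours(millions_of_files):
    # Linear fit through (2.1, 10) and (3.3, 19):
    slope = (19 - 10) / (3.3 - 2.1)   # 7.5 hours per million files
    intercept = 10 - slope * 2.1      # -5.75
    return slope * millions_of_files + intercept

# D set has 2.8M files:
print(round(estimate_groom_hours(2.8), 2))  # -> 15.25, closer to 19 h than 10 h
```

So the guess in the post is consistent with a simple files-dominate model of groom time.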
  5. The .dat files will be named with the UUIDs of your volumes -- so you can compare those names with the UUIDs of the volumes as they show in Disk Utility. If you've excluded the right volumes, the .dat files will never get updated again (you could probably also *delete* the .dat/.inf pair and they should never get recreated.) TextEdit should be fine to modify the file, but doing it in Terminal is probably best (and you have to stop/start the process to get the file to be reread -- which may be your issue.) Just reboot after you edit the file and you should be fine. As for your initial observation that only *one pair* of .dat/.inf files was being created -- if you have multiple partitions, it creates the set of files one volume at a time -- and doesn't move on to the next volume until it's done with the current one. So the "4 more files" may have been from the FW drive, *or* from one partition on the FW drive and one partition on the internal drive (or possibly two partitions on one of the drives, etc...) You *should* (eventually) have generated 7 sets of files (4 from the internal and 3 from the FW drive). It can take a while to generate these sets of files, though, depending on how many files are on each volume. When you connect a FW drive, it will generate new .dat/.inf files for that drive's volumes (but not for a mounted disk image, IIRC). When you disconnect the FW drive, it *should* delete the files associated with those volumes (or it removes them after a restart). I think the last time I tested, they were gone the next time the "retroisa" process did a scan. (I just tried this -- I have a FW drive with 5 partitions on it and just one partition on my internal drive -- while typing this up, I now have 6 sets of files, as expected. Once I disconnected my FW drive (ejecting all 5 partitions), only the internal drive's set of files was still there...) I don't think "quitting" the process actually deletes the files.
I think you have to "unload" the process to stop it from running (otherwise it restarts itself automatically)
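One way to do the UUID comparison described above from the command line is to list the .dat basenames in the scans folder and check them against `diskutil`. A minimal sketch -- the helper function name is mine, and the folder path is the RetroISAScans location discussed in these posts:

```shell
#!/bin/sh
# List the volume UUIDs that currently have scan files in a
# RetroISAScans-style folder (each .dat file is named after a volume UUID).
# Point it at "/Library/Application Support/Retrospect/RetroISAScans".
list_scan_uuids() {
    dir="$1"
    for f in "$dir"/*.dat; do
        [ -e "$f" ] || continue      # folder has no .dat files at all
        basename "$f" .dat
    done | sort -u
}

# Compare the output with each volume's UUID, e.g.:
#   diskutil info /Volumes/SomeVolume | grep "Volume UUID"
# To keep the process from relaunching after you quit it, unload its
# launchd job (the exact label varies; check /Library/LaunchDaemons).
```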
  6. I believe you could also get around this by sticking with the Retrospect 9 client -- only the 10 client adds the "retroisa" process that is looked for by the 10 Engine. I went around with them on this during the testing process and you are essentially correct -- you'd need to stick with the 9 *client* or edit the retro_isa.ini file. That said, if you watch the "Scans" folder in /Library/Application Support/Retrospect -- you'll see how often the .dat/.inf files are updated for your volume. In general, though -- unless you are installing an operating system upgrade (which touches thousands of files) -- the process seems to update every 5 minutes or so. I think they posted 30 minutes on the FAQ for outlier systems (such as those that have multiple hard disk partitions being touched constantly.)
  7. Or, I think if you stop the engine and remove the catalog file for the media set, the groom script will fail when you restart the engine. You'll have to rebuild the catalog file at that point, though (which you probably need to do anyway at this point...)
  8. When you have a media set you can't groom, here's what to try (based on experience): 1) Rebuild the catalog file. 2) Do a "copy media set" script and copy everything from the possibly-bad media set to a new media set. 3) See if you can groom the new media set. For a core2duo CPU, grooming a media set with "a few million files" sometimes took me a full 24 hours. The more files, the longer the groom. But if your catalog file can't be rebuilt for some reason, or the "copy media set" script doesn't work -- that indicates other problems with that set, and you may just want to start with a new set and keep the old set for restoring as necessary.
  9. The "grooming segment" status is really the *only* status information you get -- that's the status that occurs when the actual grooming of the media set files is taking place. The non-indication-of-activity in the status *prior* to that is when the temporary file that is created that contains *what will be removed* is being generated. Unfortunately, there's no way (and probably never can be a way) to get a status on that progress of generating the list of what is going to be removed. It would be nice, but I don't see how you could estimate that for a progress bar...
  10. - how many machines are people backing up with retrospect server? any small/medium businesses out there backing up ten or more?
Yes -- I back up approximately 55 clients spread out over 5 media sets (4 sets of Macintosh clients, 1 set of Windows clients). I also back up two servers, each to their own media set.
- how large are your media sets?
In general, my Mac sets are (size-wise) about 600G each with about 1-3M files. My server sets are somewhat smaller and my Windows set is much smaller.
- is anyone really using retrospect grooming in any serious sustainable workflows? if so, how? how does it perform? what is your hardware setup like?
Yes! I have my client media sets all set to keep 60 backups. One server set keeps 90 backups and the other 180 backups. Each weekend, I groom one media set (because of how long it takes). The last groom I ran groomed 92G from one Mac set and took 19 hours -- and this is on an i5 Mac mini. This was one of the longer grooms since moving to the i5 Mac mini, though. I don't keep log records that far back to know if the groom of the same set took longer 5 months ago. This particular media set currently has the most files in it -- 3M. I do a weekly groom of my Windows set and my 90-backup server set. Together, that groom takes 1.25 hours. The Windows set is about 400G with 300K files; the server set has only 30K files, but is about 500G.
- anyone know what the limiting factor is on grooming performance? RAM? speed of access to catalog file? speed of access to member file system?
There are two factors that affect this: 1) the number of files in the media set, and 2) the speed of the CPU. You can have a 1TB media set with only 400K files in it. That will groom *much* faster than a 500G media set that has 3 million files in it. I tend to reboot my engine machine on Friday afternoon (when I remember) so it's a bit more cleared out before the grooms run. I don't think that makes much of a difference, though.
I, too, am interested to see what happens with Retrospect 10. My intent is to document a few more weekly grooms before upgrading to 10 (shortly) for comparison.
  11. I think it's supposed to be "improved" in that it's supposed to be more stable. In my (limited) tests with Retro 10, I haven't seen a whole lot of difference in grooming over Retro 9 -- but I have yet to put it through its paces against my large, many-millions-of-files media sets. What *mostly* reduced the amount of time (in Retro 9) to groom my large media sets was upgrading the CPU on the system. Going from a core2duo to an i5 cut my groom times in half.
  12. RetroISA takes a long time to first generate the "scan" file(s) for the InstantScan feature to work. On my "OS only" test partitions, it was taking about 30 minutes to fully run. If you have more than one partition, it needs to make one for each new partition. After that, I believe (at least from the last time I looked), it updates that file for changes every 5 minutes or so. You can watch the construction of the file in /Library/Application Support/Retrospect/RetroISAScans to see the timestamps on the file(s) as to how they are progressing. The more actual *files* you have, the longer it takes to create this initially.
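A quick way to watch that initial construction (and the later ~5-minute refreshes) is to check which scan file was touched most recently. The helper name below is mine; the folder is the one mentioned above:

```shell
#!/bin/sh
# Print the most recently modified file in a scans folder, so you can see
# when RetroISA last refreshed its .dat/.inf files. Run it periodically
# (or under `watch`) while the initial scan is being built.
newest_scan() {
    ls -t "$1" | head -n 1
}

# Usage:
#   newest_scan "/Library/Application Support/Retrospect/RetroISAScans"
```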
  13. From what I remember about this, you have to remove and re-add the clients to get this to be functional -- even though everything in the Engine preferences looks correct.
  14. Can you do manual backups of the clients that are not backing up in your proactive scripts?
  15. I believe he means for the *client* -- not for the engine.
  16. So, I bounced my engine machine to 10.8. From what I can tell, all my backups -- including the few clients I updated to 10.8 -- are working. Seems pretty painless so far...
  17. And, FWIW -- I did not have to modify Gatekeeper settings to launch the 9.0.2 Console on my engine computer -- it is currently set to "Mac App Store and identified developers". Which goes against what your KB article says. Am I missing something?
  18. I have not seen this problem in the machines I've upgraded. If the client was on prior to the upgrade, it's been on after the upgrade finished...
  19. Is there an official statement on this? Is the current 9.0.2 build (client and server) compatible with ML? - Steve
  20. And you need to have enough free RAM to be able to actually run that many concurrent threads. With a 4G RAM backup machine, I was never really able to get more than 5 activities running concurrently (on rare occasion a 6th was started). I've bumped my backup engine to 8G, but haven't tried to get more than that going.
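As a rough extrapolation from the observation above (my own back-of-envelope figure, not a documented Retrospect number): 4 GB sustaining about 5 activities suggests roughly 0.8 GB per activity, which would put an 8 GB machine at around 10.

```python
# Rough extrapolation from the observation above: a 4 GB machine sustained
# about 5 concurrent activities. The per-activity figure is my assumption;
# actual memory use varies with media set size and client contents.
ram_observed_gb = 4
activities_observed = 5
gb_per_activity = ram_observed_gb / activities_observed  # ~0.8 GB each

estimate_at_8gb = round(8 / gb_per_activity)
print(estimate_at_8gb)  # -> 10 (still under the engine's cap of 16)
```

That is only a ceiling estimate; the clients, sets, and what else the engine machine is doing all matter.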
  21. I have not experienced this issue. I have always (at least as far back as I can recall) had this value set to the maximum (currently it's set at 16). I've never had this revert back to a lower number unexpectedly...
  22. Try stopping the engine via the system preference rather than just restarting the Engine computer. A clean stop of the engine will force any cached writes to the config80.dat file. An OS restart of the engine computer might not do this all the time (depending on what was going on with the engine computer...)
  23. You can always install the engine on another computer and use that for your testing with that client -- I do that all the time.
  24. I would make a proactive script for this scenario (or two -- one for M-F and one for SS). That should work.
  25. I use Carbon Copy Cloner to do the final "TD" copying from my backup volume to my backup-backup volumes. For this (incremental backups of backups), it's faster than Retrospect ever would be. My backup volume is the same since 2009, but I have rebuilt most of my catalog files at least once (for testing purposes) -- I think my oldest is about 18 months old, but most are about a year old at this point.