Everything posted by Maser

  1. I have never defragmented my Lacie 4Big (the exact same one that you have, I believe), but I have defragmented the internal hard disk that contains my OS and my compressed catalog files (which range from 5G to 10G) using iDefrag a couple of times to test things out. There was no noticeable difference in the speed of backups -- or grooming (which is what I was really testing) -- after the defragmentation at all. iDefrag (which I've used elsewhere) has been perfectly safe for me and I've never had any issues with the program. (Whether or not it actually *does anything useful*, I don't really know...) I occasionally do D2D2D to keep a backup of the Lacie on two smaller (and older) Lacie RAID drives using Carbon Copy Cloner. The speeds for that backup of the volume seem pretty reasonable to me. I groom 2-3 media sets weekly, have at one point gotten down to about 300G of free space on the drive (I have about 1.5 TB free at the moment), and have been using this particular drive since April 2009. Your post made me look at this, and iDefrag says my fragmentation on this drive is 18.6/30.9% (which wouldn't be unexpected, considering every disk media set on this volume has been groomed probably at least 50 times -- some of them get groomed weekly, so those have been groomed even more often, many hundreds of times by now). However, I can't imagine that defragging this volume would make that much of a difference in my final "2D" step, considering how much time it would probably take to optimize 3TB of data and how fragmented the drive would get again after just a few months. But if I *were* to do this, I'd certainly do it with the engine off. No question about that...
  2. When the user is logged in and on the network -- are you seeing port 497 open on the client computer? I'm wondering if this is a firewall issue...
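A quick way to check that from the engine machine is to probe the port with nc (a sketch only -- the function name is mine, and "clienthost" below is a placeholder for your client's address; 497 is the standard Retrospect client port):

```shell
# Probe TCP port 497 (the Retrospect client port) on a given host.
# Usage: check_retro_port clienthost
check_retro_port() {
  # -z: just scan, don't send data; -w 2: give up after ~2 seconds
  if nc -z -w 2 "$1" 497; then
    echo "port 497 reachable on $1"
  else
    echo "port 497 blocked or closed on $1 (check the firewall)"
  fi
}
```

If the probe fails while the client software is running, a firewall rule on the client (or somewhere between the two machines) is the likely culprit.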
  3. For me, the 9.0 build gives me SPODs when closing my disk media set catalogs (which are fairly large) -- and because I run multiple concurrent proactive backups, I get a lot of SPODs with the console, unfortunately. This appears to be new behavior in 9.0.x. Other than that, I don't really have any other problems, and for how I use it, it's not that much different in performance/behavior from 8.2. I'm not actually using any of the new features of 9.0 (like self-restore). If you are still running 6.1, the upgrade is great (how we lived without grooming still baffles me...)
  4. Release notes here: http://kb.retrospect.com/articles/Retrospect_Article/Retrospect-9-for-Mac-Release-Notes/
  5. When I've tested this, if the media set is in use by another active process, the client has no way of knowing this (and that adds to the delay). *Usually*, you should see a process start on the engine machine when the restore activity is running. Is that activity not running on the engine when you initiate the client-based restore?
  6. Not unless you have explicitly set your client/source to back up "all volumes".
  7. If it were me, I'd just let the client do the work -- unless the incremental amount of data to be backed up were so large that the "mounting" method was significantly faster...
  8. Are you certain you can isolate that to the client version? What I see -- and I run multiple concurrent proactive scripts with really large disk media sets containing between 1.5 and 2.0M files each -- is that when an activity is in the "closing" phase, *that* can often cause the console to SPOD for a while until it's done. I only have about 2/3rds of my clients updated to the 9.x client at this point, but I see this issue regardless of which client version is backing up.
  9. The Retrospect 9 client doesn't directly let you specify which folders to back up -- the engine's sources would be used for that. But the client can mark folders "private" to exclude them from backup. From testing I did ages ago, I found it was faster to back up a mounted source than to back up the same data via the client. But, really, that was years ago; for all I know, it's not faster any more to do it that way. You didn't say whether this is a one-time backup or an ongoing backup of incremental data. If ongoing, I'd probably use whatever is faster *first*, then let the client do incremental backups so you don't have to worry about the drive being mounted, etc. (if that's what works faster the first time...)
  10. Or you might want to use something easier like Carbon Copy Cloner for something like this.
  11. Old backups do not get removed from a media set unless you groom them out. If you don't have grooming turned on for the set, you could turn it on and set it to a fairly high number (like 150, maybe? -- it would depend on how old the media set is), and once the catalog has updated itself after that change (which can take a while), you'll see the client's backups in Past Backups (or at least 150 of them...). You can then manually remove all of the client's past backups -- again, this can take a *long time* to process -- and then run a groom on the media set (which, again, takes a long time, depending on the size of the media set). However, if you have more than 150 backups across all your clients, you don't want to run that groom -- it will only keep 150 backups of *all* the clients. Grooming *individual* past backups one at a time *would be* the safest way to remove them, but also the longest -- how long depends on how many backups you have and (again) the number of files in, and the size of, the media set. It all depends on your retention policy. I keep a 60-backup retention for my client machines, my groom settings reflect that, and I groom one media set a week. When I remove a client, I set aside an evening to grab the remaining 60 visible backups of the removed client and manually delete those past backups. When the next scheduled groom of that media set takes place, all of that client's files are deleted.
  12. Well, you *can* add the server to the console twice -- once as the native server (if you are running the console on the engine machine) and then again by IP address/hostname (I just did this...). I can't add it a third time, though. But why it would randomly show one server sometimes, and not the other, would seemingly only make sense if (for example) you added the server by IP address, but the *server* gets a different IP address from a DHCP server every so often. Something like that, maybe?
  13. Sounds (maybe) like a daylight saving time difference on the Windows client side with the AVG files? (I don't have AVG on my Windows clients, so I can't confirm or deny this supposition...) That's a guess -- especially if the problem only started on Sunday.
  14. Well, you should probably report this to Retrospect support. It sounds like a bug they need to fix.
  15. Restore the last working "config80.dat" file from your backup: stop the engine, replace the "wrong" version with the backed-up version, then restart the engine. If you never backed up the "config80.dat" file, you will have to re-add your scripts/clients, locate your media sets, etc...
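In Terminal terms, those steps might look something like this (a sketch only -- the function name is mine, and the launchd label and config path are my assumptions for a typical Retrospect 8/9 install on Mac; verify them on your own machine before running anything):

```shell
# Sketch: swap in a known-good backed-up config80.dat, keeping the
# broken copy around just in case. Label and path are assumptions.
restore_retro_config() {
  backup_copy="$1"   # path to your backed-up, known-good config80.dat
  cfg="/Library/Application Support/Retrospect/Config80.dat"

  sudo launchctl stop com.retrospect.retroengine   # stop the engine (label assumed)
  sudo mv "$cfg" "$cfg.bad"                        # keep the broken file for reference
  sudo cp "$backup_copy" "$cfg"                    # drop in the good copy
  sudo launchctl start com.retrospect.retroengine  # restart the engine
}
# e.g. restore_retro_config ~/Desktop/Config80.dat
```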
  16. Out of curiosity -- what if you add your clients *without* using the Public key authentication? Does the problem go away? (Maybe try to revert a few clients back to the 6.3 client and try a couple others with the 9.0 client added without keys?)
  17. How did you add your clients? Private/Public key? Or by browsing/hostname addition?
  18. Do you have grooming turned on for your media sets? If so, seeing only a specific number of backups is expected.
  19. If, for some reason, that isn't working, you can "show package contents" for the console app and drill down to: Contents --> Resources --> updater and run the "Retrospect Engine Installer.pkg" installer manually.
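That manual route is roughly the following in Terminal (a sketch -- the helper name is mine, and the console app's name and install location vary from setup to setup, so the example path is a placeholder):

```shell
# Open the engine installer .pkg bundled inside the console app by hand,
# for when the console's built-in engine update isn't working.
# $1 = path to the Retrospect console .app bundle.
run_engine_installer() {
  open "$1/Contents/Resources/updater/Retrospect Engine Installer.pkg"
}
# e.g. run_engine_installer "/Applications/Retrospect/Retrospect.app"
```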
  20. http://www.retrospect.com/en/support/downloads Have at it and report back here whether it fixes your bugs (or not)!
  21. Did you upgrade the clients to v9.0.0 as well? You could just try toggling that notification option off (I have it off for my computers...). Are the clients *fully* and successfully backed up? Or do any of the ones still in the "no backup in 7 days" list have alert messages on their backups in Past Activities?
  22. The restore issue is a bug that should be fixed in the soon-to-be-released (hopefully?) 9.0.1 update...
  23. Yes -- Retrospect (still!) is not multi-core aware as an application. I have high hopes that with the restructuring of the company, we'll now see faster improvements in the software for things like this...
  24. Well, what is happening is that Retrospect is unexpectedly quitting for some reason and, because of that, isn't saving your changes to the configuration file, so it reads from the backup configuration file -- which probably didn't have your source added to it. My suggestion would be to try to figure out what file the copy process is barfing on to see what the underlying issue is. Maybe break down the source hard disk into a series of Favorite Folders until you can figure out what file/folder causes the Retrospect Engine to "exit" and then work from there? (It might be something you need to open a support case for if -- for example -- you can do a Finder copy of the same files without issue). *Or* it might be an actual problem with a folder/file on the disk that Retrospect can't handle.
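If you'd rather not click through Favorite Folders by hand, a rough way to narrow it down is to try copying each top-level folder with ditto (which, like the Finder, preserves Mac metadata) and see which one errors out. A sketch, with a made-up function name and placeholder paths:

```shell
# Copy each top-level folder of SRC to a scratch location and report
# any folder whose copy fails -- that folder holds a candidate bad file.
find_bad_folder() {
  src="$1"; scratch="$2"
  for d in "$src"/*/; do
    name=$(basename "$d")
    if ditto "$d" "$scratch/$name" 2>/dev/null; then
      echo "ok:     $name"
    else
      echo "FAILED: $name"
    fi
  done
}
# e.g. find_bad_folder "/Volumes/ProblemDisk" "/tmp/copytest"
```

Once a folder fails, repeat the same idea one level deeper inside it until you've isolated the file, then decide whether it's a support-case bug or a genuinely damaged file.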