
Nigel Smith

Everything posted by Nigel Smith

  1. Try C:\ProgramData\Retrospect -- IIRC, in Win 10 "All Users" is just a link to ProgramData and is only there when required for backwards compatibility. While there, take a look at "retrospect.log" in case it has any clues.
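The "check retrospect.log for clues" step above can be scripted. A minimal Python sketch, assuming the `C:\ProgramData\Retrospect\retrospect.log` location from the post (adjust the path for your install); this is an illustration, not part of Retrospect itself:

```python
from collections import deque
from pathlib import Path

# Assumed log location, per the post -- adjust if your install differs.
LOG_PATH = Path(r"C:\ProgramData\Retrospect\retrospect.log")

def tail(path, n=20):
    """Return the last n lines of a text file without reading it all into memory."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        return list(deque(fh, maxlen=n))

# Print the most recent log entries, if the log is where we expect it.
if LOG_PATH.exists():
    print("".join(tail(LOG_PATH)))
```

Handy when the log has grown large and you only care about the most recent errors.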
  2. I don't -- in fact I guarantee that right now, despite the majority of my users being post-grads and higher (and so certainly not stupid), there will be a handful who haven't bothered to plug in their external drives for a couple of weeks. And I don't care 🙂. Our PhDs/PostDocs want (and have) a high degree of IT autonomy, and with that comes responsibility for their data. We can't make them plug in their HDs, any more than we can make them store their scientific data on the central servers -- but they know it's their future they are risking, so they are generally pretty good about these things. They were all using external HDs and Time Machine/Windows Backup anyway -- the disks issued were so they could run duplicates, back up any home machines they might be using for work, etc. The same is true for the Admin side, plus Biz-Op data is held on our servers (accessed over VPN etc) and is still being backed up as usual.

     We use RS in addition to external HDs. I'd love to have RS working remotely the way I need it to, but until it does I can't rely on RS even in more "normal" times, so external HDs are required. Being useful when home-working was an unexpected benefit of such a "portable" backup solution.

     RS's Remote Backup is, frankly, half-baked. As you say, it needs one of the following: multiple "Remote" tags; making the "Remote Backup Clients" tag a special case which "AND"s with other tags (rather than "OR"s); or making tags usable in Rules. Then we could back up different clients to different sets, still use file-level de-dup, etc -- basically, treat a Remote Client the same as a local one.

     Feel free to use any of my ramblings in any way you see fit. If I get time I'll be spinning up an evaluation of RS17 (put on hold since the lockdown) -- I'll try to include Remote tests in that, since my previous experience is probably out of date.
  3. Again -- you're assuming "Transparent operation mode" (auto-unlock). If OP is using "User authentication mode" (pre-boot PIN or password), "USB Key mode" (pre-boot hardware authentication), or a combination that includes either or both of those mechanisms then what you describe will not happen and user intervention will be required. Most people don't use anything other than "Transparent operation mode" so, as we've said, OP should be OK regardless of his backup methodology. But OP and any others reading this should be aware that if their security requirements are more stringent (or they're running hardware that doesn't support "Transparent operation mode") then there may be problems with RS access following an unattended boot/restart. As always, something as important as a backup routine should be checked under operational conditions -- I'm sure we all have stories where things should have worked but, for whatever reason, didn't!
  4. Wrong -- we're now into Week 6 of working from home.

     ...was to issue everyone an external hard drive to use with Time Machine or Windows backup.

     The rest of my attempted reply just got eaten. Suffice to say:

     • Have tried Remote Backups before -- they fail for us because you can't segregate clients into different sets.
     • Keep meaning to try Remote combined with Rules -- can you run multiple Proactives using the Remote tag, each using a different set, each with different Rules to filter by client name? Previously I felt the client list was too long for this to work, but RS 17's faster Proactive scanning may make it feasible.
     • Tags need an "AND" combiner, not just an "OR". That may not be sensible/possible -- but include the ability to use Tags in Rules and you'd make Rules way more powerful.
  5. Which means that all a thief needs to do to get round BitLocker protection is... nothing? That doesn't sound right. There must be *some* authentication mechanism -- how strong that is, and whether it would affect Retrospect in the outlined situation, will depend on how OP sets up BitLocker. Requiring a PIN at startup, a USB key, biometrics; maybe the device has a TPM and he's chosen to auto-unlock (which sounds like what you're doing); perhaps the data to be backed up is on an encrypted non-system partition; etc, etc. With so many options, I wouldn't blindly trust Retrospect (or *any* backup software) to work as expected in any situation where the main admin-level user isn't logged in and active. So while I may have overstated the problem, because I'm used to systems which *do* require active user authentication after startup, OP should test and make sure he gets what he wants.
  6. Limited to 10.7.5, so still stuck at RS 16.1 as per the KB article David linked. Should work OK, and that Mac could potentially run Catalina if it has a compatible graphics card (see here). But maybe it's time to revisit your whole system -- transfer those old RS6 backups from tape to disk (or more modern tape), indulge in a Mac Mini with current Retrospect, backing up to a NAS which is replicated to the cloud for off-site storage, etc. Things have come a long way in the last 20 years, so it's worth taking a fresh look at how you might do things.
  7. I think I understand what you're trying to do -- your use of "client" had me confused, but I'm not sure what we should call a machine that runs the Console app. Manager? But yes, it appears that the advertised "Console only" installer also installs the Retrospect Engine. Two simple solutions spring to mind:

     • Run the full install, then stop the Engine in System Preferences (remember to uncheck "Launch on Startup").
     • Download and run the .au app/installer, run the uninstaller as you describe, then delete the /Library/Application Support/Retrospect directory. You can then run the app without it re-installing the Engine.

     TBH, I'd just do the first -- a lot less trouble, less chance of unintended consequences, and only ~70MB of "wasted" disk space. Though I agree, it would be nice if the installers behaved as advertised!
  8. My only caveat would be regarding how you leave your laptop pending those "automatic nightly backups". If you shut it down or hibernate and use some scheduled startup mechanism just prior to the backup window, obviously it'll fail unless you are there to enter your BitLocker key 🙂 If you just leave it on (you can log out) and walk away, you should be fine.
  9. Quick check -- is the Retrospect client running and, if so, which interface is it listening on? Fire up a Command Prompt, type "netstat", and scan the list for either ":dantz" or ":497" with a "LISTENING" state. If it's there and you are still getting the 530 error, check that the listed local address is the same one your server is trying to contact. If they don't match you can either rebind the client IP (<https://www.retrospect.com/uk/support/kb/client_ip_binding>) or change the source IP on the server. If you get the 505, I've found the quickest/cleanest solution is to turn off the RS client, delete the "retroclient.state" file (I *think* that's in "C:\Documents and Settings\All Users\Application Data\Retrospect", but I don't have a Win RS client in front of me right now), then turn the client back on. That usually clears the error without having to re-add the client to the server, as the uninstall/reinstall solution requires.
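The netstat check above is the authoritative one on the client itself. As a complementary probe from the server side, here's a minimal Python sketch that simply tries to connect to the Retrospect client port (TCP 497, the service netstat lists as "dantz"); it only tells you that *something* is answering there, not which local address the client is bound to:

```python
import socket

# TCP 497 is the Retrospect client port; netstat shows it by its
# service name, "dantz".
RETROSPECT_PORT = 497

def is_listening(host, port=RETROSPECT_PORT, timeout=2.0):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from the server but netstat on the client shows LISTENING, that points at the address-binding mismatch described above rather than the client being off.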
  10. Didn't miss it -- but I may have missed the point of it! As I understand it, I can do similar by running multiple Proactive/Scheduled scripts, each targeting their own backup set, all stored on the same destination resource. You end up with more catalogs and sets to manage, but the benefit is that each is smaller. You lose file-level dedup in both scenarios. I can see why Storage groups might be useful, but I'm already running multiple sets and so parallelise operations across the whole estate rather than within a single set. As always, YMMV as you'll have different requirements and resources (where "you" is everyone reading this -- not specifically you, David!).
  11. I won't comment on storage groups, since I don't use them (nor, tbh, do I see the point of them in most situations...). If you aren't already, make sure the catalogs are getting backed up daily. That way, when trouble strikes, you can restore the last known good catalog and rebuild from that -- a lot faster than rebuilding the catalog from scratch. You can then use the time saved to find out why you are having to rebuild catalogs so often, because that doesn't match my experience (previously, yes, because hardware was less reliable -- but not now).
  12. Can you not just play with the original's timestamp? Use a pre-run script to store the original timestamp and set the timestamp to "now", do the backup, then a post-run script to restore the timestamp to the original. Easy to do on Mac/Linux clients with "touch", and I believe Windows PowerShell has [Get|Set]-ItemProperty to do similar. Much quicker without the overhead of the copy and delete. But is any of that really necessary? As I understand it, all you need to do is turn off the "Don't add duplicate files to the Media Set" option in the "Matching" section of your script's "Options" and files will be backed up every time, regardless of metadata matching. You might want to set up another script, narrowly aimed at the files you want this behaviour to apply to, so you get "normal" matching across the rest of your data. Don't know if you'd get your block-level efficiency though.
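The timestamp-juggling idea above can be sketched in Python (using `os.stat`/`os.utime` instead of `touch`, so the same sketch covers Mac, Linux, and Windows). `run_backup` is a hypothetical placeholder for whatever actually triggers the backup job:

```python
import os

def backup_with_fresh_mtime(path, run_backup):
    """Bump a file's modification time so matching-based backup logic sees it
    as changed, run the backup, then restore the original timestamps.

    run_backup is a placeholder callable for triggering the backup job."""
    st = os.stat(path)          # remember the original atime/mtime
    os.utime(path)              # set both to "now", like `touch <path>`
    try:
        run_backup()
    finally:
        # Put the original times back even if the backup step fails.
        os.utime(path, (st.st_atime, st.st_mtime))
```

The try/finally matters: if the backup errors out, you still want the file's real timestamp restored rather than left at "now".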
  13. Stupid question but, given the lack of column headers, one that I need to ask -- in the backup set screenshot, is that the file modification date that's listed or the backup date?
  14. Different folders, but still using an external drive? Or different folders on the system drive? Just trying to eliminate USB from the equation. Also, soon after my last post I got an announcement about RSv17 -- so you could grab the trial and try that.
  15. You don't say what version of Retrospect you are running, nor Windows. Are they the latest of both? Are you backing up from/to USB devices? Take those out of the testing by using RS to back up a small folder on your system disk to a new disk set on the same drive. More data == better bug report == more chance of a quick fix.
  16. Obviously it'll miss anything not in those paths. Most users will be covered by these "defaults", those who aren't (eg people using Homebrew to install *nix utilities in /usr/local) should know enough to not get caught out. Catalina has made it very difficult to "accidentally" save files in weird, random, places (though my users still try!). So your backups will be fine for recovery as long as you (and your applications) are following Apple's guidelines. And you only need to back up Preboot if you want to do a bare-metal restore without having to fresh-install the OS (eg you have no, very slow, or expensive Internet access and no USB installer). Using CCC for BM-DR is a good idea. Using CCC and RS is probably even better -- restore your last CCC backup then use RS to overlay that with more-recently changed files. Best of both worlds!
  17. In no particular order:

      • The .TemporaryItems directory is... temporary. No need to back it up, no need to worry about the errors.
      • "folder.501" implies it came from something you were doing (501 is the ID of the first-created "real" user account on your Mac; Retrospect runs as root and would create "folder.0") -- possibly a result of moving the data to the volume, more likely due to currently running processes.
      • I see a bunch of cache files which aren't being picked up by the "All except cache files" filter -- you didn't want to back these up anyway, so no worries there.
      • Otherwise you've got some database and plist changes, which look (as with the above) more like the files were in use at the time than anything else.
  18. Yep -- that's a pressing need (I'm dealing with similar at the moment, and have had to use the same AFP work-round). Shouldn't need much, if any. Volume definitions, direct mounting, etc, can be done on the RS server without impacting the Synology.

      I'd hoped your aggregated link would be resilient enough that you could simply re-route one cable to the trashcan instead of the switch and then change the network settings, but I've just done a quick test and it looks like you can't just remove a NIC from the LACP bond -- you'll have to delete the bond then re-create it with three NICs. Maybe 5 minutes of downtime.

      Did you start new media sets when you upgraded? If so, it could be that the "old" system was skipping already backed-up files which the new system is now trying to access for the first time -- only there's a problem on the Synology. Have you run Data Scrubbing (Storage Manager->Storage Pool->Data Scrubbing) lately? I'd be happier doing that after a complete, successful backup, but needs must when the devil drives...
  19. Stop the movie, then right-arrow to forward frame-by-frame, left-arrow for backwards -- works for QT Player anyway, don't know about VLC.
  20. You sure? If I lose my SO's photo archive[1], I'll be "audited" very severely... [1] Yes, it'll be my fault that she didn't read the "Are you sure you want to delete all these photos?" dialog before clicking "OK".
  21. Done. If anyone wants to see the problem, zipped screen recording attached. If you step frame-by-frame from just past halfway you'll see the switch. Screen Recording 2020-02-24 at 12.59.02.zip
  22. No help, but can confirm -- upgrading clean Mojave/RS16 install to Catalina gave me the same icon problem. Icon 3 in /Library/Application\ Support/Retrospect/RetrospectEngine.app/Contents/Resources/app.icns is: ...although, stupidly, I forgot to check before I upgraded the OS. Edit to add: Interestingly(?) a fresh-install Catalina client has the correct icons in System Prefs->Privacy->Full Disk Access, but the banjaxed ones in System Prefs->Privacy->Files and Folders where the icons appear to be smaller.
  23. So it's the Synology that's "shutting down", and not the Mac. Quite a useful data point! Still sounds like a network problem -- AFP is a lot more resilient than Retrospect, and can retry/recover from a network flutter where RS will just say "Nope, server's gone", because the OS-recovered connection will often be named differently in /Volumes.

      Try manually mounting the share on the Mac (rather than adding it to RS as a server and letting RS log in) and backing up as a local volume. And as David suggests, unless you have a pressing need for AFP, try using SMB. Also try taking the switch and aggregation out of the equation by running a single direct connection from Synology to trashcan. I'd do this as disk-based backups, so as not to waste time/tape -- maybe also by creating a small "Favourite" on the Synology to save time, then expanding the backup to a full share, then all the shares...

      Basically, start as simple as possible and see if it works -- then add complexity until it breaks!
  24. I know that soft links are treated differently -- with SMB1 they are resolved by the client, with SMB2+ by the server, and I remember various warnings in our old Isilon docs about unpredictable behaviour when using them with SMB1 clients. Assuming hard links are treated similarly, it looks like you've nailed it. Would still be interesting to know why things are handled differently in the read and verify phases but, at this point, it may be better to leave it working well and not look too hard at why, just in case...