
Nigel Smith

Members
  • Content count

    343
  • Joined

  • Last visited

  • Days Won

    25

Nigel Smith last won the day on June 28

Nigel Smith had the most liked content!

Community Reputation

29 Excellent

About Nigel Smith

  • Rank
    Retrospect Junkie

Profile Information

  • Location
    UK

  1. Agree about the Transfer and Verify (although Rules are hardly irksome), but I don't understand about Restore -- sessions are listed by Machine + Volume + Media Set, so it's trivial to do a point-in-time (or Browse and select particular files) for any client/volume pairing in any Storage Group. In Windows it's actually more clicking to get to previous snapshots/sessions. I've not tried it, but in Windows do you have to restore from a particular source? Can you easily search for every occurrence of "Important.docx" across a Storage Group, or would you have to search (and restore from) each "component set" individually? As you rightly say, its potential for improving Rebuilds is massive, if it can be done. I'm not sure how similar Mac and Win RS actually are! There may be more to it than just how their respective consoles allow you to view what's under the hood. Creating Storage Groups by splitting a single over-arching catalog into a "meta catalog" plus a bunch of client/volume "component catalogs" was a very pragmatic solution to a pressing problem -- whether that could be done the same way on both platforms is something way outside my understanding. What we really need is a preference so we can switch both Mac and Win consoles to our preferred method of display 😉
  2. I know what RS does at the filesystem level. But in the UI, which is where I suspect most of us interact with RS most of the time, it is presented as "just another media set" on the Mac and as something different in Windows. Again, I prefer the Mac software's consistent approach -- you obviously don't. And I can't disagree with your preference since Storage Groups aren't "just another media set" (eg lacking cross-client dedup within the set). I'd prefer it even more if there was a consistent approach, one way or the other, across platforms -- but that's probably asking too much. It may have changed in v18, but in v17 you had to rebuild the entire Storage Group on both platforms, even if only a single "component catalog" was damaged. A pain, but at least the rebuild is also parallelised across multiple threads. See this old post for my workarounds, either of which could possibly be adapted to twickland's particular problem.
  3. Trust me, it isn't! Agreed. But whether Mac or PC handling of them is correct (c'mon -- Mac wins every time!) is a matter of perspective. Looking top-down, a media set only has one catalog. A Storage Group only has one catalog. It looks like any other media set in the RS UI, and the manual even states it should be treated the same as any other media set. So it *shouldn't* show the component directories, since no other media sets do. When it comes to UX and abstraction, consistency trumps absolute accuracy every time. And why display something that serves no usable purpose? I don't know, and I don't really care, why Mac and Win do this differently. I know which I prefer, so do you, but those are only opinions and neither of us is "right". While I would hope that you are right about this, I'd recommend that anyone who wants to use this in production thoroughly tests it first. There are limits on destination volume size -- my "use as much space as you want" Disk Media set splits every 8TB, for example -- and I'd want to be sure that the above allowed you to separate across "proper" logical/physical volumes rather than create more RS "media set" volumes in the same place. Nice work on the password protection checking!
  4. Which just goes to show how tricky UI design can be because, IMHO, a Storage Group should be treated like any other media set -- those don't show internal client/volume pairs, so why should a Storage Group? Mac consistency, FTW! I think the generic use-case for parallelising backup clients is that "the server processes data faster than a single client supplies it". You can parallelise with multiple scripts, each with its own destination set, or with one or more scripts writing to one (or, less often, more) Storage Groups. My personal experience suggests that for "local" clients (those your RS Server can direct-IP or multi/subnet broadcast to wherever they are in the world), multiple scripts/sets is best. The benefits to me are organisational, de-dup within sets, faster catalog rebuilds when things go wrong (because you only have a subset of clients per media set), etc. But if you have "remote" or hybrid-working clients then Storage Groups are the way to go, mainly because (as at my last set of tests, anyway) there is no way to back up different remote clients to different media sets -- it's "one big bucket", hence the slower catalog rebuilds, although there's no de-dup (even across volumes of the same client). The only way to parallelise write operations to a single set is by using a Storage Group. And multiple remote clients, which often have restricted upload speeds, are where parallelising on the server can really help you get all your backups done. As always, YMMV.
  5. Try the newest version of Retrospect. If you can, try v17 as well, on that same machine. The more relevant information you can send to Mayoff & team the better (your star sign probably isn't important at this point 🙂 )! You could also try a simple Finder Copy between the two images, just to check that works as expected. Certainly a Copy script should maintain the copied files' metadata, and it does on my tests with macOS 10.15.7, RSv17, and mounted encrypted disk images. But there's always a chance that something's been changed on the OS side, and it's just that you're seeing the effects in RS...
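A quick way to sanity-check whether that Finder Copy (or a Copy script) preserved metadata is to compare the basic POSIX attributes of the original and the copy. This is only a sketch with placeholder paths -- Finder flags and extended attributes are not covered here and would need a separate check (e.g. with `xattr`):

```python
# Sketch: compare basic POSIX metadata of a file and its copy.
# Paths below are placeholders; extended attributes and Finder
# flags are NOT covered and would need a separate check.
import os

def meta(path):
    """Modification time, permission bits, owner, and group for a path."""
    st = os.stat(path)
    return (int(st.st_mtime), st.st_mode, st.st_uid, st.st_gid)

def same_metadata(original, copy):
    """True if the copy kept the original's basic metadata."""
    return meta(original) == meta(copy)

# Example (placeholder paths on two mounted images):
# same_metadata("/Volumes/ImageA/test.txt", "/Volumes/ImageB/test.txt")
```

If this reports a mismatch for a plain Finder Copy too, the problem is on the OS side rather than in RS.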
  6. Give the just-released 18.1.0.113 a try -- not all fixes are listed in the release notes.
  7. Retrospect 16.6.0 (114) crashing

    Sounds like the Retrospect Engine is spontaneously quitting/restarting for some reason. Try stopping it, and turning off "Launch on System Startup", in the Retrospect pane of System Preferences. Restart the computer so you are working off a clean sheet. With Activity Monitor open, start the Retrospect Engine. Note the PID in Activity Monitor, go away for 10 minutes, come back and check again. If the PID is the same then the Engine hasn't quit, so try launching RS Console and seeing what happens. You may even find that switching to a manual start cures the problem -- IIRC there was someone last year who was seeing similar, and it appeared to be a timing issue where Engine was active before an external device and got into an endless restart loop when the device appeared. Failing that, keep an eye out for a PID change (indicating a restart), which will give you a time window before it in which to check Console logs for problems. Try all the above in "standalone" mode -- disconnect any external devices bar keyboard/mouse/display, unplug the network and turn off wireless. It may be something totally unrelated, so keep it as simple as possible. It also looks like you didn't completely remove all preferences -- there are scheduled activities showing. A complete uninstall/re-install might help, if only for troubleshooting.
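    The "note the PID, come back later" step can also be scripted. A minimal sketch -- `RetroEngine` is an assumed process name here, so confirm the real name in Activity Monitor (or `ps ax`) before relying on it:

```python
# Sketch: detect an Engine restart by comparing PIDs over time.
# "RetroEngine" is an ASSUMED process name -- verify it in Activity
# Monitor or with `ps ax` before relying on this.
import subprocess

def pid_of(name):
    """First PID whose process name matches `name` exactly, else None."""
    result = subprocess.run(["pgrep", "-x", name],
                            capture_output=True, text=True)
    pids = result.stdout.split()
    return int(pids[0]) if pids else None

def engine_restarted(previous_pid, name="RetroEngine"):
    """True if the process is gone, or is now running under a new PID."""
    current = pid_of(name)
    return current is None or current != previous_pid

# Usage: note pid_of("RetroEngine") now, wait ~10 minutes, then call
# engine_restarted(noted_pid) -- True means the Engine quit or restarted.
```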
  8. Interesting question! I think not, because I'm guessing that WoL is part of the discovery phase only. Unless that's how they did the fix you mentioned way back when? And sleep is only for security if you've got "require password on wake" set. IMO it's more about energy saving, extending component life (not just disks) etc -- especially when moving laptops around (I get so annoyed when I see people walking up and down stairs at work with laptop open and running -- and also get a desperate urge to trip them up, just to see what happens!).
  9. Client? Often -- eg when a backup is in progress and the user sleeps their machine, pulls the network plug, etc. Client remains "in use" so the server can't back it up. Either restart the client machine or (if your RS client permissions allow) turn the client off and on again (Option-click on the client "Off" button to fully kill the client process).
  10. In my defence, your honour... My personal preference is to lock things down because I don't trust ordinary users. But there are some who cause so much aggravation that, for the sake of my own sanity, they get the "This is why you shouldn't do that..." speech during which it is made clear that if they do do that then it is completely on their own head when they do it wrong. And I've got the email trail to prove it... Plus, in this case it's jethro who wants to choose his backup times -- and I'm sure he can be trusted to do it right! And if he doesn't and it all goes pants, I've got the forum screenshots to prove that it wasn't my fault, your worship 😉
  11. Totally untested, but you might be able to spoof what jethro describes by having a 24hr/day Proactive script with a large "days between backups" setting -- RS wouldn't back up the client whenever that client contacted it, only when the user used "Backup Now". That setting would, of course, apply to all clients in that script, which would be all those tagged for Remote Backup. I'd question why you'd want to do that though! Far simpler to just set things up as normal for Remote Backups, and if you only wanted those to happen at certain user-decided times (perhaps they always want to wait until they've finished both their work and their evening's Netflix viewing before clogging their network connection with RS traffic) allow them to turn their client on and off as it suits them.
  12. If you can't merge multiple saved rules (I'll test later), try chaining them: Rule A -- Include "Saved rule includes All Files Except Cache Files", Exclude "Folder name is Temp"; Rule B -- Include "Saved rule includes Rule A", Exclude "Folder name is Downloads"; Rule C -- Include "Saved rule includes Rule B", Exclude "Folder name is .dropbox"; etc. Obviously each Exclude could contain as many clauses as you like. So a run using Rule C, above, would exclude ".dropbox", "Downloads", "Temp", and all cache files.
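The chaining above is all GUI configuration, but the logic can be sketched in code: treat each rule as a predicate over a path, where each chained rule wraps the previous one and adds a single exclusion. This is purely an illustration of the idea, not how Retrospect's Rule engine actually works:

```python
# Sketch of the chaining logic: each "rule" is a predicate over a path;
# chaining wraps the previous rule and adds one folder exclusion.
# An illustration only -- NOT Retrospect's actual Rule implementation.
def exclude_folder(inner_rule, folder):
    """New rule: matches only if inner_rule matches AND no path
    component equals `folder`."""
    def rule(path):
        return inner_rule(path) and folder not in path.split("/")
    return rule

all_files = lambda path: True                  # stand-in for the base rule
rule_a = exclude_folder(all_files, "Temp")
rule_b = exclude_folder(rule_a, "Downloads")   # Rule B includes Rule A
rule_c = exclude_folder(rule_b, ".dropbox")    # Rule C includes Rule B
```

So a path is included by Rule C only if it also passes Rules B and A -- exactly the cumulative effect described above.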
  13. Check by using the RS Console to "browse" the mounted volume. If you can, and given that your backups are working, you can consider it a spurious alert message. (Full Disk Access -- SystemPolicyAllFiles -- includes SystemPolicyRemovableVolumes so Engine and Instant should be OK.) My favourite quote about FDA/F&F is "Currently, this system is so complex that it appears unpredictable." I guess that extends to applications' error messages, too 😉
  14. Everything David says. And I'd add that: Most VPN servers don't allow "network discovery", either Bonjour (like you'd use to list available printers etc) or Retrospect's version, between subnets. Remote Backup is a lot more flexible in that the client wouldn't need to be on the VPN to be backed up. That also reduces the load on your VPN server, helping the people that need to use it. If the use of VPN is a requirement, eg compliance issues, you can actually use Remote Backup through, and only through, your VPN. Otherwise you'll have to open the appropriate ports on the server to the Internet (probably including port-forwarding on the WatchGuard). Most home connections are slow. Get initial backups done while the machines are directly connected to the "work" network, then use incrementals when the machine is out of the office. In your situation you could try a transfer from your current backups to a new, Storage Group-based, set (I've not tried this myself, so don't know if you can). RS will do this switch to incrementals seamlessly, just as it would usually. There's no deduplication across different volumes in Storage Groups, so you may have to allow for more space on your target media. Deffo upgrade to v17!
  15. That isn't exactly a surprise 🙂 But there's a few more things you can try first (I'd still recommend upgrading to v17 eventually, for better compatibility). So that is completely different -- an external drive rather than a system disk, HFS+ rather than APFS, Sierra rather than Mojave? Sounds like the only constants are the server and the client version, yet it isn't happening with every client... What are the commonalities/differences between the clients where this happens and those where it doesn't? Don't just look at software versions, but also what's installed and what is or isn't enabled eg FileVault, sleep options. Give the v15.5 client a go, even if you haven't a spare test machine. If it doesn't work you can simply uninstall it, re-install the v14 version, and re-register the machine with your server. And ultimately -- if it is always "first attempt fails, second attempt works" as you describe... Simply schedule a "primer" script to hit the troublesome systems before your real script 😉 You could even do it with a rule that excluded everything -- all files would be scanned, priming the target for the real script, but little or no space would be needed on your backup target.