
Nigel Smith


Everything posted by Nigel Smith

  1. Trying not to quote Ralph Waldo Emerson, but you can't have it both ways -- either rhwalker is right and "All of the backup data (files, metadata, etc.) is in the .rdb files which, taken together, comprise Retrospect's database", or you are and they don't. I'll let you decide for yourself, and leave well alone until twickland reports back.
  2. No, you are. Yes, the .rdb files are the database and yes, the catalog file is the index. But a component of the Retrospect Engine is a database engine because that's what manipulates the database. Is it the same on Mac and Windows versions of RS? We don't know, and even if the inputs and outputs are the same we still don't know -- it's a black box. To (rather sadly) quote myself: You can see evidence of that last line for yourself by comparing Mac and Win release notes -- most fixes are in both, but a few are specific to one or the other.
  3. Don't know why -- since a database engine is what CRUDs the data in a database, Retrospect has a database, and records in that database can be created, read, etc, there's some kind of DB engine in there. I'm not saying it's MySQL or similar, but there's something. I'd hope that, with the push to a "common engine", it's the same on all platforms. But that's hope, not certainty. And anyway, it's only an example of the possible differences between nominally common code compiled to different platforms. I'm sure the meat of the RS engine is more-or-less the same, but it's the gristle round the edges that always gets stuck in your teeth...
  4. Apologies, I was less than clear (rushed, after my first attempt got eaten). I can see a lot of space in "same code but compiled for different platforms" for quite a lot of variation between the final products. The example was supposed to suggest they might have used different DB engines -- maybe for performance, licensing, or compatibility reasons. There could be lots of these variations, or very few -- I simply don't know. Your catalogs are as I'd expect, the meta holding "summary data" for the set and each client/volume (including links to their component catalogs) and the component containing the meat of that client/volume's backup history (metadata for every file backed up in every session, etc).
  5. I think the "meta catalog" just stores the "higher level" info -- client/volume pairs in the SG, when each was last backed up -- and the "component catalogs" store the session info data. In Mac RS the search "chains" -- in the GUI you search the meta cat, but RS searches each of the component cats then aggregates the results for presentation. I'd hope that Win RS does the same and that the manual's "To restore from a specific source, you need to select the specific source within the Storage Group" is an "and you can restore from a specific source..." rather than "you can only...", but I can't test that at the moment. Assuming that's the "universal Retrospect engine", that still leaves an awful lot of wiggle room! Same design and pseudo-code? Same actual code but compiled per platform? Even if it's the latter (as I suspect, comparing version fixes across platforms) that still leaves space for things like "if macOS then mac_db_code else win_db_code". Not saying they have, but we can't discount the possibility!
  6. Agree about the Transfer and Verify (although Rules are hardly irksome), but I don't understand about Restore -- sessions are listed by Machine + Volume + Media Set, so it's trivial to do a point-in-time (or Browse and select particular files) for any client/volume pairing in any Storage Group. In Windows it's actually more clicking to get to previous snapshots/sessions. I've not tried it, but in Windows do you have to restore from a particular source? Can you easily search for every occurrence of "Important.docx" across a Storage Group, or would you have to search (and restore from) each "component set" individually? As you rightly say, its potential for improving Rebuilds is massive, if it can be done. I'm not sure how similar Mac and Win RS actually are! There may be more to it than just how their respective consoles allow you to view what's under the hood. Creating Storage Groups by splitting a single over-arching catalog into a "meta catalog" plus a bunch of client/volume "component catalogs" was a very pragmatic solution to a pressing problem -- whether that could be done the same way on both platforms is something way outside my understanding. What we really need is a preference so we can switch both Mac and Win consoles to our preferred method of display 😉
  7. I know what RS does at the filesystem level. But in the UI, which is where I suspect most of us interact with RS most of the time, it is presented as "just another media set" on the Mac and as something different in Windows. Again, I prefer the Mac software's consistent approach -- you obviously don't. And I can't disagree with your preference since Storage Groups aren't "just another media set" (eg lacking cross-client dedup within the set). I'd prefer it even more if there was a consistent approach, one way or the other, across platforms -- but that's probably asking too much. It may have changed in v18, but in v17 you had to rebuild the entire Storage Group on both platforms, even if only a single "component catalog" was damaged. A pain, but at least the rebuild is also parallelised across multiple threads. See this old post for my workarounds, either of which could possibly be adapted to twickland's particular problem.
  8. Trust me, it isn't! Agreed. But whether Mac or PC handling of them is correct (c'mon -- Mac wins every time!) is a matter of perspective. Looking top-down, a media set only has one catalog. A Storage Group only has one catalog. It looks like any other media set in the RS UI, and the manual even states it should be treated the same as any other media set. So it *shouldn't* show the component directories, since no other media sets do. When it comes to UX and abstraction, consistency trumps absolute accuracy every time. And why display something that serves no usable purpose? I don't know, and I don't really care, why Mac and Win do this differently. I know which I prefer, so do you, but those are only opinions and neither of us is "right". While I would hope that you are right about this, I'd recommend that anyone who wants to use this in production thoroughly tests it first. There are limits on destination volume size -- my "use as much space as you want" Disk Media set splits every 8TB, for example -- and I'd want to be sure that the above allowed you to separate across "proper" logical/physical volumes rather than create more RS "media set" volumes in the same place. Nice work on the password protection checking!
  9. Which just goes to show how tricky UI design can be because, IMHO, a Storage Group should be treated like any other media set -- those don't show internal client/volume pairs, so why should a Storage Group? Mac consistency, FTW! I think the generic use-case for parallelising backup clients is that "the server processes data faster than a single client supplies it". You can parallelise with multiple scripts, each with its own destination set, or with one or more scripts writing to one (or, less often, more) Storage Groups. My personal experience suggests that for "local" clients (those your RS Server can direct-IP or multi/subnet broadcast to wherever they are in the world), multiple scripts/sets is best. The benefits to me are organisational, de-dup within sets, faster catalog rebuilds when things go wrong (because you only have a subset of clients per media set), etc. But if you have "remote" or hybrid-working clients then Storage Groups are the way to go, mainly because (as at my last set of tests, anyway) there is no way to back up different remote clients to different media sets -- it's "one big bucket", hence the slower catalog rebuilds, although there's no de-dup (even across volumes of the same client). The only way to parallelise write operations to a single set is by using a Storage Group. And multiple remote clients, which often have restricted upload speeds, are exactly where parallelising on the server can really help you get all your backups done. As always, YMMV.
  10. Try the newest version of Retrospect. If you can, try v17 as well, on that same machine. The more relevant information you can send to Mayoff & team the better (your star sign probably isn't important at this point 🙂 )! You could also try a simple Finder Copy between the two images, just to check that works as expected. Certainly a Copy script should maintain the copied files' metadata, and it does on my tests with macOS 10.15.7, RSv17, and mounted encrypted disk images. But there's always a chance that something's been changed on the OS side, and it's just that you're seeing the effects in RS...
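That Finder Copy check can also be done from Terminal. Below is a minimal, portable sketch (POSIX tools only; the file paths and the helper's name are my own invention, not anything Retrospect ships) for confirming a copy kept the original's size and modification time -- on macOS you could additionally inspect extended attributes with `xattr -l`:

```shell
# Hypothetical helper: succeeds only when the two files have the same
# byte count and neither has a newer modification time than the other
# (i.e. identical mtimes, to the filesystem's resolution).
metadata_matches() {
  [ "$(wc -c < "$1")" = "$(wc -c < "$2")" ] &&
  [ -z "$(find "$1" -newer "$2")" ] &&
  [ -z "$(find "$2" -newer "$1")" ]
}

# Example (illustrative paths -- substitute your mounted images):
# metadata_matches "/Volumes/SourceImage/file.txt" "/Volumes/TargetImage/file.txt"
```

If the sizes match but the mtimes don't, the copy happened but the metadata wasn't preserved -- which would point at the OS-side change suggested above.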
  11. Give the just-released version a try -- not all fixes are listed in the release notes.
  12. Sounds like the Retrospect Engine is spontaneously quitting/restarting for some reason. Try stopping it, and turning off "Launch on System Startup", in the Retrospect pane of System Preferences. Restart the computer so you are working off a clean sheet. With Activity Monitor open, start the Retrospect Engine. Note the PID in Activity Monitor, go away for 10 minutes, come back and check again. If the PID is the same then the Engine hasn't quit, so try launching RS Console and seeing what happens. You may even find that switching to a manual start cures the problem -- IIRC there was someone last year who was seeing something similar, and it appeared to be a timing issue where the Engine was active before an external device and got into an endless restart loop when the device appeared. Failing that, keep an eye out for a PID change (indicating a restart), which will give you a time window before the restart to check the Console logs for problems. Try all the above in "standalone" mode -- disconnect any external devices bar keyboard/mouse/display, unplug the network and turn off wireless. It may be something totally unrelated, so keep it as simple as possible. It also looks like you didn't completely remove all preferences -- there are scheduled activities showing. A complete uninstall/re-install might help, if only for troubleshooting.
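If you'd rather not babysit Activity Monitor, the same PID check can be scripted. This is a rough sketch only -- "RetrospectEngine" is my guess at the process name, so check Activity Monitor and substitute whatever you actually see there:

```shell
# Hypothetical helper: succeeds only when both PID samples are non-empty
# and identical -- a change (or disappearance) means a quit/relaunch.
same_pid() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Sample the Engine's PID, wait, sample again, and report.
# $1 = process name (assumed default!), $2 = seconds between samples.
watch_engine() {
  name="${1:-RetrospectEngine}"
  wait="${2:-600}"
  before=$(pgrep -x "$name")
  sleep "$wait"
  after=$(pgrep -x "$name")
  if same_pid "$before" "$after"; then
    echo "PID unchanged ($before): Engine stayed up"
  else
    echo "PID changed ($before -> $after): Engine restarted, check Console logs"
  fi
}

# Example: watch_engine RetrospectEngine 600
```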
  13. Interesting question! I think not, because I'm guessing that WoL is part of the discovery phase only. Unless that's how they did the fix you mentioned way back when? And sleep is only for security if you've got "require password on wake" set. IMO it's more about energy saving, extending component life (not just disks) etc -- especially when moving laptops around (I get so annoyed when I see people walking up and down stairs at work with laptop open and running -- and also get a desperate urge to trip them up, just to see what happens!).
  14. Client? Often -- eg when a backup is in progress and the user sleeps their machine, pulls the network plug, etc. The client remains "in use" so the server can't back it up. Either restart the client machine or (if your RS client permissions allow) turn the client off and on again (Option-click on the client's "Off" button to fully kill the client process).
  15. In my defence, your honour... My personal preference is to lock things down because I don't trust ordinary users. But there are some who cause so much aggravation that, for the sake of my own sanity, they get the "This is why you shouldn't do that..." speech during which it is made clear that if they do do that then it is completely on their own head when they do it wrong. And I've got the email trail to prove it... Plus, in this case it's jethro who wants to choose his backup times -- and I'm sure he can be trusted to do it right! And if he doesn't and it all goes pants, I've got the forum screenshots to prove that it wasn't my fault, your worship 😉
  16. Totally untested, but you might be able to spoof what jethro describes by having a 24hr/day Proactive script with a large "days between backups" setting -- RS wouldn't back up the client whenever that client contacted it, only when the user used "Backup Now". That setting would, of course, apply to all clients in that script, which would be all those tagged for Remote Backup. I'd question why you'd want to do that though! Far simpler to just set things up as normal for Remote Backups, and if you only wanted those to happen at certain user-decided times (perhaps they always want to wait until they've finished both their work and their evening's Netflix viewing before clogging their network connection with RS traffic) allow them to turn their client on and off as it suits them.
  17. If you can't merge multiple saved rules (I'll test later), try chaining them:
      Rule A -- Include "Saved rule includes All Files Except Cache Files", Exclude "Folder name is Temp"
      Rule B -- Include "Saved rule includes Rule A", Exclude "Folder name is Downloads"
      Rule C -- Include "Saved rule includes Rule B", Exclude "Folder name is .dropbox"
      etc. Obviously each Exclude could contain as many clauses as you like. So a run using Rule C, above, would exclude ".dropbox", "Downloads", "Temp", and all cache files.
  18. Check by using the RS Console to "browse" the mounted volume. If you can, and given that your backups are working, you can consider it a spurious alert message. (Full Disk Access -- SystemPolicyAllFiles -- includes SystemPolicyRemovableVolumes so Engine and Instant should be OK.) My favourite quote about FDA/F&F is "Currently, this system is so complex that it appears unpredictable." I guess that extends to application's error messages, too 😉
  19. Everything David says. And I'd add that:
      Most VPN servers don't allow "network discovery", either Bonjour (like you'd use to list available printers etc) or Retrospect's version, between subnets.
      Remote Backup is a lot more flexible in that the client wouldn't need to be on the VPN to be backed up. That also reduces the load on your VPN server, helping the people that need to use it.
      If the use of VPN is a requirement, eg for compliance issues, you can actually use Remote Backup through, and only through, your VPN. Otherwise you'll have to open the appropriate ports on the server to the Internet (probably including port-forwarding on the WatchGuard).
      Most home connections are slow. Get initial backups done while the machines are directly connected to the "work" network, then use incrementals when the machine is out of the office. RS will do this switch to incrementals seamlessly, just as it would usually.
      In your situation you could try a transfer from your current backups to a new, Storage Groups based, set (I've not tried this myself, so don't know if you can). There's no deduplication across different volumes in Storage Groups, so you may have to allow for more space on your target media.
      Deffo upgrade to v17!
  20. That isn't exactly a surprise 🙂 But there's a few more things you can try first (I'd still recommend upgrading to v17 eventually, for better compatibility). So that is completely different -- an external drive rather than a system disk, HFS+ rather than APFS, Sierra rather than Mojave? Sounds like the only constants are the server and the client version, yet it isn't happening with every client... What are the commonalities/differences between the clients for which this happens and the ones that don't? Don't just look at software versions, but also what's installed and what is or isn't enabled eg FileVault, sleep options. Give the v15.5 client a go, even if you haven't a spare test machine. If it doesn't work you can simply uninstall it, re-install the v14 version, and re-register the machine with your server. And ultimately -- if it is always "first attempt fails, second attempt works" as you describe... Simply schedule a "primer" script to hit the troublesome systems before your real script 😉 You could even do it with a rule that excluded everything -- all files would be scanned, priming the target for the real script, but little or no space would be needed on your backup target.
  21. I had completely forgotten about that button! Probably because it's always dimmed for me... Ah... Grooming a set stored on a NAS is an absolute shedload of network traffic, especially in storage-optimised grooming. See the "Grooming Tips" page, section 11. Do everything you can to minimise problems, ideally (if possible) setting up a short-cabled direct connection from your Mac to the NAS -- you may want to use different interfaces on both machines and hard-code the IP addresses if you normally use DHCP. And make sure there are no scheduled processes on either machine that might interfere. Since the Synology checks OK, try copying the "bad" files to your Mac -- if that works and the "bytes" size shows the same in Finder's Get Info, it's even more likely that there was a network glitch. Experience has shown that RS is a lot more sensitive to such things than eg a Finder copy, so a NAS connection that seems OK in normal use can be too flakey for a "big" RS operation.
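To go a step beyond the Get Info byte-count comparison, you can checksum the NAS original against the local copy. A minimal sketch using only POSIX tools (the paths and the helper's name are illustrative, not anything Retrospect provides):

```shell
# Hypothetical helper: succeeds only when the two files agree on both
# byte count and CRC checksum -- a mismatch after an apparently clean
# copy points at a flakey network connection.
files_match() {
  [ "$(wc -c < "$1")" = "$(wc -c < "$2")" ] &&
  [ "$(cksum < "$1")" = "$(cksum < "$2")" ]
}

# Example (illustrative paths):
# files_match "/Volumes/Synology/Retrospect/MySet/AA000123.rdb" "$HOME/rdb-check/AA000123.rdb"
```

On macOS you could swap `cksum` for `shasum -a 256` if you want a stronger hash; for spotting network glitches, either will do.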
  22. I leave it off myself, for that very reason. I asked because it may have resulted in the client mistakenly using an "old" file listing from the last backup (so nothing needs to be backed up because you already have). If not on, that's probably not the issue. Also, Instant Scan isn't compatible with APFS. So "jorge" is your user folder and you've defined that as a Favourite in RS? What state is the client machine at when the backup fails -- logged out, logged in but asleep, logged in but on screen lock, logged in and in use? -- and when it succeeds? But... Certification for Mojave only arrived with RS v15.5. If you've a Mojave test system handy you could try downloading the 15.5 client, installing it, and seeing if your v14 server can still back it up and, if so, if there's any improvement. Otherwise, it's worth noting that even if your server is limited to High Sierra, you can still upgrade to RS v17.
  23. The logs are saying something different -- that the remote client is contacted/scanned just fine but no files are found that need to be backed up, while the local volume does have files to be backed up and they are. So it's a problem with the remote source, and you need to include OS version, RS client version, and disk format of that. Also whether "jorge" is the system volume, a Favourite Folder you've defined in RS, or another volume (icons may not have pasted into your post). It doesn't look like you are using any Rules, but check just in case. I would have guessed at an issue with Instant Scan but, on my system at least, use of that is included in the logs...
  24. As of RS v13(? -- David will know) Fast Catalog rebuild is always "on" for Disk Media Sets unless you enable grooming, and then it's "off". In v17, and maybe before, it isn't even shown as an option, but I suspect the UI took time to catch up with the behavioural change and they disabled the option rather than making a new layout. Which is why I was asking if you'd enabled grooming after the rebuild. It may just be that the logs are mis-reporting on that line. What I don't understand is your first sentence: As I understand it, grooming via Media Set options is either off, to a set number of backups, or to a defined policy -- not much scope for you triggering grooming yourself. So how did you do this? That may have a bearing on the matter. I'd also try doing the rebuild then backing something up to that set, even just one small file, before the grooming operation. Other questions: How much free space do you have on the system disk? Where is the 10TB+ of data stored, and how much free space is there on that? When was the last time you disk-checked it? The rebuild log says RS is skipping a file -- check that on your disk media and see if you can replace it from eg tape. Same for the files mentioned in the groom log. It might also be worth downloading the v17 trial, maybe on another machine, and trying the rebuild on that. If successful you might even (I haven't tried it!) be able to copy the new catalog back to the v15 machine and use it there -- you can move catalogs up versions, but I've never tried down! If you can't but the rebuild worked, at least you'll know upgrading to v17 is one way out of your problem.
  25. And do you still have "two mounted disks named prepress", as per the previous warning? Might explain it... If so, unmount them all (may be easiest to just restart the RS Server machine), mount the volume (I'm assuming you do that manually; check with "ls -l /Volumes" in Terminal that "prepress" is listed only once), and run the script again.
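If this keeps happening, a quick scripted check saves eyeballing the `ls` output. A small sketch (the helper name is my own; I'm assuming the usual macOS behaviour of mounting a same-named second volume as "prepress 1", so I count every /Volumes entry that starts with the name):

```shell
# Hypothetical helper: count entries in a mount directory whose
# names start with the given volume name. Anything over 1 suggests
# a duplicate mount that RS might bind to by mistake.
count_mounts() {
  # $1 = mount directory (normally /Volumes), $2 = volume name prefix
  ls "$1" | grep -c "^$2"
}

# Example: count_mounts /Volumes prepress
```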