
Nigel Smith

  1. It would appear that "sloppily written" really means "Goes against my belief that..." ...so the official documentation stating that you can switch seamlessly between local and remote backups -- including an example using the industry standard method of onboarding work computers that are to be used remotely -- and the way Retrospect has been shown to work in practice are, quite simply, wrong. They must be wrong, because you say they are. So there really is no point continuing...
  2. You might want to edit your post following the results of my last test in the post before, where an "AlwaysRemote" client was indeed added without user intervention to the server and can therefore be seen in the server's Sources list -- what I'm calling a "client record" because that's where you set things like client options and tags.
  3. And the bad news is -- it does... "But Nige," I hear you say, "surely that's a good thing, allowing us to onboard Remote clients without making them come to the office?" I don't think so, because Remote clients are automatically added:

  • without the "Automatically Added Clients" tag, so there's no easy way to find them
  • with the "Remote Backup Clients" tag, which means they will be automatically added to all Remote-enabled Proactive scripts
  • with the client's backup "Options" defaulting to "All Volumes", so external drives etc. will also be included

I let things run and, without any intervention on my part, the new client started backing up via the first available Remote-enabled script. Edit to add: I didn't notice this last night, but it appears that the client was added with the interface "Default/Direct IP", in contrast to the clients automatically added from the server's network, which were "Default/Subnet". I don't know what this means if my home router's IP changes or I take the laptop to a different location (will the server consider it to now be on the "wrong" IP and refuse to back it up?) or if I now take it into work and plug in to the network (will the server not subnet-scan for the client, since it is "Direct IP"?). End edit.

Given the above, I'd suggest everyone think really carefully before enabling both "Automatically add..." and "Remote Client Backups" unless they retain control of client installation (eg over a remote-control Zoom session) -- especially since I've yet to find out what happens if you have a duplicate client name (the next test, which I'm struggling to work out how to do).
  4. Nigel Smith

    No Proactive scripts running

    Jon, did you ever get anywhere with this? Just had a silly thought -- if the problem was that your Proactive scripts were running but none of your remote clients were getting backed up, make sure you didn't install Retrospect client on your server (see the very last point in the Remote Backup KB article).
  5. The trite answer is "Easily!". Note that tags are not applied to "client machines" -- they never have been, and the "Remote Backup Clients" tag is no different. They are applied to the "client record" on the server. They effectively say "allow incoming calls from this client" (when in the client record) and "allow incoming calls to this script" (when used in the script's Source). So I'm proposing a "class" of Remote tags, each instance having a user-defined attribute -- they all say "allow incoming" but finesse it with "and direct them to scripts with the matching attribute". Perhaps it is easier to think of them the other way round -- they would behave exactly the same as all other tags and, additionally, allow remote access.

Are you sure? 😉 You've actually raised the next issue I want to test -- does automatic onboarding work with Remote clients? I haven't bothered with that yet, because our use of Favourites mandates local addition, but now I've time to scrub a machine and start a client from scratch again.

Sorry David, but that's just plain wrong. From the KB article: "Clients can seamlessly switch between network backup on a local network to remote backup over the internet. You do not need to set up the remote backup initially. You can transition to it and back again" -- true as far back as Nov 2018.
  6. Briefly stepping away from the "David & Nigel Show" and back to OP's original question -- Fred, I hope you're still with us! At the moment, no. But rebuilding is parallelised, and once any client is complete it can be backed up again even while others are rebuilding. But I don't fancy doing multi-terabyte rebuilds for just one corrupt catalog either, and I can't (yet!) find a way of using backed-up catalogs and the Repair function to shorten the process. I suspect that the "gold standard" work-around would be to:

  • Remove the "broken" client from the original set's Source list (either machine entry or tags)
  • Create a new Storage Group media set
  • Move the folder of the "failed" client's backup files to the new Storage Group, putting it in the same place in the SG hierarchy as it was
  • Rebuild the new Storage Group media set
  • Check you can restore from the rebuilt set
  • Delete the backup files and client catalog from the original set
  • Do a Transfer of the now-rebuilt backups from the new set to the original
  • Check that the original set's version of the client is now working again
  • Re-add the client to the original set's Source list
  • Delete the new Storage Group data and catalogs

I haven't tested this, but it seems like it should work. Try it yourself on test data before using it in anger! This shortcut, however, I have tried:

  • Remove the "broken" client from the original set's Source list (either machine entry or tags)
  • Create a new Storage Group media set
  • Copy the folder of the "failed" client's backup files to the new Storage Group, putting it in the same place in the SG hierarchy as it was
  • Rebuild the new Storage Group media set
  • Check you can restore from the rebuilt set
  • Replace the original catalog with the rebuilt one
  • Check that the original set's version of the client is now working again
  • Re-add the client to the original set's Source list
  • Delete the new Storage Group data and catalogs

And it works! But I'd urge you to give it a thorough thrashing in test mode before entrusting any of your precious data to this method...

Both methods will do what we want -- rebuild just one client catalog out of many. The first requires an extra Transfer operation, the second the extra disk space for the data copy. Neither comes with any official stamp of approval 🙂 Try them both and see what you think.
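The file-level step common to both lists -- getting the failed client's backup folder into the same place in a new Storage Group's hierarchy -- can be sketched as below. This is a minimal illustration only: every path and filename here is made up, and a real Storage Group's on-disk layout will differ, so adapt before trying it anywhere near production data.

```shell
#!/bin/bash
# Sketch of the "copy" shortcut's file move. All paths are hypothetical
# stand-ins -- inspect your actual Storage Group folders first.
set -e

ORIG="/tmp/sg_demo/OriginalSet"   # stand-in for the original Storage Group
NEW="/tmp/sg_demo/RescueSet"      # stand-in for the new Storage Group
CLIENT="BrokenClient"             # the client with the corrupt catalog

# Fake up an "original" set containing the client's backup files:
mkdir -p "$ORIG/$CLIENT" "$NEW"
echo "backup data" > "$ORIG/$CLIENT/AA000001.rdb"

# Copy (don't move) the failed client's folder into the same place
# in the new set's hierarchy, preserving attributes:
cp -Rp "$ORIG/$CLIENT" "$NEW/$CLIENT"

ls "$NEW/$CLIENT"
```

Using `cp -Rp` rather than `mv` matches the second (tested) method: the original files stay put, at the cost of the extra disk space noted above.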
  7. The videos don't show that (although that's how the clients were installed, registered, and Favourites defined) -- as you can see, the tag changes were done to a machine that couldn't be contacted by the server (because it was at home and turned off!). The videos show that the list of machines ProactiveAI is going to try to back up is based solely on server-set tags, without reference to or need of contact with the client itself. So it's pretty obvious that I'm not forgetting that tags don't appear on client machines.

Yes, but it isn't a huge leap to have multiple Remote tags that can be assigned server-side to both script and client, and to have that message interpreted by the RS engine as "back me up with any matching Proactive script; legitimised because I have this public key stored in my pubkey.dat." It's almost the same solution as you propose, but with three small differences and one big one. The small ones are "no faffing with the client when changes need to be made", things being obvious to the Admin (no digging for secret strings), and the fact that adding Remote clients to scripts is exactly the same as adding any other tagged client (consistency is good). The big one is that you could apply multiple Remote tags to the client record so they can be backed up with multiple scripts (doable with multiple secret strings, but nowhere near as easy or obvious to the Admin).

I think they've assumed you'd have to be a real idiot to want more than one remote-enabled script -- they've got me down to a T 🙂 From my playing, it alternates between the two available scripts -- but that's probably because there's no contention, due to low client numbers and use of Storage Groups on both. I would hope it follows the "usual rules" -- if more than one destination is available for backup, pick the least-recently (yuck! Pardon my English, but I can't think of a better way of putting it) used.
  8. SMB is using the Windows short name because the name isn't being encoded for compatibility by the OS X SMB daemon -- and that's because the data is being put on the server by AFP clients. The permanent fix for this is:

  • Lock out all your users
  • On the Mac Pro, mount the share using "afp://IP_address/sharename"
  • Still on the Pro, mount the share again using "smb://IP_address/sharename"
  • Move the troublesome data from the AFP-share window to the SMB-share window
  • Turn off AFP on the Synology and force your users to SMB from now on, and maybe look at how to turn off signing in the SMB config

Step 4's the tricky bit -- set the windows to different views so you can keep track of which is which, and decide how you are going to manage the transfer wrt clashing names, deletion of "AFP versions" when complete, etc. I'd probably do one folder at a time: copy it into a folder at the same level named "Transfer", delete the original, then move the contents of "Transfer" to where the original was. (All the above just tested and confirmed with Catalina, an up-to-date Synology, and a folder on the Mac named as yours with contents created by echo "Here's some text in a file" | tee "Animation (smaller)?"{0..9}.txt > /dev/null ...with an additional "testFile.txt" file to make sure "normal" files were also OK, and then copied to the Syn using AFP.) Or you could insist that everyone stays using, and only using, AFP, and pray it remains supported for as long as you need it...

I may have given you a bum steer -- it looks like (for AFP and SMB at least) RS is using the system APIs to mount shares. If I turn the RS engine off and restart, my server shows no network mounts in /Volumes. If I turn the RS engine on, the shares connected to via RS's Sources then appear in /Volumes, and if I add another share there it appears in /Volumes too. RS adds them as root/wheel while mounting via the Finder (non-root account) adds them as admin/staff -- entirely reasonable.

Both work as expected from the above -- the SMB-mounted share shows AFP-added files by short name and SMB-added files by full name, complete with non-standard characters. The AFP-mounted share shows both versions with their full names. Both mount methods back up all files without error, so it doesn't look like the filenames are causing your errors in and of themselves. I still think you are losing connection with the server for some (unrelated to filenames) reason. As said before, an RS backup-in-progress is much more sensitive to temporary disconnects than "normal" file sharing is, so it may be that your other users simply don't notice when it happens. It sounds like you have some time for testing -- if so I'd suggest you go back to scratch. Static IP on just one Syn interface, static on the Mac Pro, direct cable connection between the two: does the problem still occur? If not, slowly add complexity until it does.
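For anyone who wants to reproduce the filename test mentioned above, here's a runnable sketch. The directory path is an assumption for illustration; the file names are the ones used in my testing -- legal on the Mac, but containing characters ("?", parentheses, spaces) that an SMB server may need to encode.

```shell
#!/bin/bash
# Create ten "funky"-named files plus one plain control file, as in
# the test above. /tmp/afp_name_test is an arbitrary scratch path.
set -e
DIR="/tmp/afp_name_test"
mkdir -p "$DIR" && cd "$DIR"

# Brace expansion needs bash; tee writes the same line to all ten files:
echo "Here's some text in a file" | tee "Animation (smaller)?"{0..9}.txt > /dev/null

# A "normal" name as the control:
echo "Here's some text in a file" > testFile.txt

ls -1    # should list 11 files
```

Copy the resulting folder to the share over AFP, then view it over SMB to see whether the funky names survive or fall back to short names.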
  9. It'll still show as a share (because it is!) but you should see a difference in the "Name" and "Machine" columns of the Shares list -- a share mounted via the server's OS will show the volume name in "Name" and the server's name in "Machine", a share mounted by RS will show "user@IP_or_FQDN/share" and either IP, FQDN or name.local depending on how you addressed it. We're trying to make sure that if you are using SMB you are doing so via OS X's smbd -- I don't know if RS uses that or its own routines, but I do know that OS X's smbd does lots of trickery to encode/decode funky Mac filenames for use on less supportive SMB shares. Have you got any examples of filenames that are failing? I can try and replicate your problem, albeit using RS17 rather than 16, if you think it might help.
  10. As excuses go, that's definitely in the top three! I hope all the important things -- the people! -- were undamaged. Have you tried mounting the share on the Mac (still using AFP) and treating it as a local volume, rather than letting RS do it? Also this thread throws up an interesting suggestion -- use the IP address rather than relying on Bonjour resolution (static IP on the server an advantage, obv).
  11. My testing setup was as close as I could get to our proposed "production" use -- no point in doing otherwise in this case. So... No -- I took the laptops into work, wiped them, got them on the ethernet, installed the RS client, registered them with RS Server and created Favourites. I then took them home. From home I VPNed from my desktop into the work RS Server and set up the test scripts and tags. I then turned on the laptops, got them connected to my home wireless network, and let things happen -- the laptops were at no time connected via VPN.

Sidenote: If it was created using port forwarding, the client record's "Interface" would necessarily be "Direct IP" and it would be my router's public address that was shown. That's because there's no way (at least that I can find, correct me if I'm wrong) to change from "Direct IP" to "Multicast" or "Subnet Broadcast" if you can't actually contact then select the client using the changed method.

Sidenote 2: Apologies for any confusion, but the RS Server I'm testing with is the latest update to v17. "RS_16_Test" is the previous v16 test server which now has the engine turned off, so is client-only. I was short on time and so grabbed an available machine -- I really should have thought ahead and renamed it!

Again no -- I've shown that the list of clients that Proactive is waiting to access is generated on the fly using the script's Source list, without reference to the clients' locations or availability. And that the Source list is updated, without need to start/stop the script, either periodically or in response to tag changes. You can try this yourself by setting up a new Proactive script using a new tag that hasn't been applied to any clients, starting it running, then applying that tag to a client -- after a few moments the client will appear in the ProactiveAI list.
And if you can't do that yourself -- here's a video: https://youtu.be/LiZ8b6KZ3OY And inB4 "but Remote Backup Client tags are different": https://youtu.be/4Ok_z2Lrn9A

And again no -- though I'm much less certain on this one 😉 -- the ProactiveAI list doesn't contain IP addresses; they're held in the Sources record for the client. I'd guess that Proactive signals the Engine which client is to be backed up and the Engine refers to the Sources record for the client to determine connection type etc. I don't know how that works with Remote clients. But it isn't really important to the discussion.

Agreed. But... The current kludgey perversion binds all so-tagged scripts to all so-tagged clients. I want to limit that binding so that a client can be bound to one, some, or all scripts as I desire. It can't be beyond the wit of man to have multiple Remote Backup Clients tags, each with a unique identifying attribute, so that matching-tagged scripts bind to matching-tagged clients -- after all, that's exactly how all the other tags work, and there's no messing around with the client installation required for those.
  12. Sorry, but that's irrelevant. "Automatically add clients" is for onboarding (note "check network for new clients" [my emphasis] in your quote) so Joe Bloggs can do his own client install and the server will automatically register the new client -- rather than the usual install client, go to server, manually discover client on network, add to server. The tag is preserved so you can use it in scripts later on. It's neat, but not so useful to Admins like me who always define Favourites on clients -- since the client must be online to do that, you may as well add them manually. I can see it would be good for anyone who always backs up whole clients (you can also apply Rules, obviously), since it removes the requirement for both client and Admin to be available when onboarding.

Anyways... Take another look at my first screenshot. That's the list of clients the SG_Test Proactive script is waiting to back up, straight after a server restart. The MacBook Air is asleep, and has been for the previous 3 hours -- it isn't online, it can't have sent any message to the server since the restart, yet the server knows it is to be backed up to SG_Test. So the list must be generated regardless of any client's availability.

Note: To be even clearer -- the server, "Luggage", and "RS_16_Test" are at work, and all on the same network. "admin2's MacBook" and "Admin's MacBook Air" are at home, behind a NATing router with no port forwarding, not using a VPN -- they cannot be contacted by the server, and any backup network session must be initiated by them.

And that's why, as you point out, it says what it says on p228 of the UG. The server generates the Proactive script's client list from the script's Source list, without reference to availability; the script then iterates through the list in order of "need". (I don't know what it does for "unavailable" clients, whether it skips them or tries a multicast/subnet broadcast before moving on.
That "last seen at home" Remote client may now be back on the work ethernet -- does the now-local client report its presence to the server or does the server discover it?)

Seriously -- try it for yourself. Add a new client to the server, but don't tag it. Turn the client computer off. Make a new Proactive script but don't run it, create a tag for the client, add the tag to the script's Sources, then start the script and look at what it is waiting to back up -- you'll see your turned-off client in the list. I don't have enough test machines to find out what happens when a Remote client that is due for backup advertises its availability but the script has other, more "in need" clients available. Perhaps another good reason to use Storage Groups, to avoid that very problem...

No, it's a different approach to the same end -- you talk of "storing a suffix string on the client"; I'm doing it wholly with tags on the server.

Analogy time! (Yes, I'm twiddling my thumbs while a web server rebuilds -- apologies...) Acme Explosives are a careful company -- as you'd expect, given the business they're in. Every day each manager (Proactive script) phones each of their members of staff (RS client) to ask for a report on what had happened (backup) since they last spoke. Then Covid-19 hit. Acme furloughed most of their staff, set a few to work from home (Remote clients), and kept one manager in the office. The manager didn't know anyone's home number so, instead, the home-workers phoned in their daily reports to that manager, and everything was good. As the pandemic dragged on, more and more workers were taken off furlough and set to work from home. The single manager couldn't handle all the reports so Acme had to put more managers on site.
Although they each had a phone, they all shared the same number, so there was no control over who collected a worker's report -- each worker phoned in again and again until they'd all recorded it (roughly the current situation in RS with multiple Remote-enabled Proactive scripts), or each one listened to the report and either recorded it or said "Sorry, not my responsibility, please call back and try again" (my work-round using Rules).

Being smart businessmen, Acme called in the consultants. One proposed that each manager have an extension number and that each home-worker's phone be programmed to speed-dial the appropriate extension. Easy for both the worker and Acme once set up, but if the home-worker's manager changed, a technician would have to go to their house and reprogram the speed-dial. The other also suggested extensions, but that all calls come in to a central number and then be routed by an operator to the appropriate manager, as shown by tags on the switchboard. A bit more work on every call -- but easily changed if required, without the operator leaving their desk.

History has yet to record which solution they eventually went with. But it must have been successful because, since then, business at Acme Explosives is booming... Sorry -- just couldn't resist 🙂
  13. I believe you're wrong. I can't be absolutely sure of how it works, I'm not an RS engineer, but here's a couple of screenshots immediately following a full restart of the server: Apologies for the fuzziness -- remote desktopping in a remote desktop session can play havoc with screenshots 🙂

On the same network as the server, "Luggage" is a Direct IP client and "RS_16_Test" is subnet broadcast. At home, ie Remote clients, "admin2's MacBook" finished a backup to "SG_Test" one minute before the server restart and is still awake/available, while "Admin's MacBook Air" has been asleep since its last backup more than four hours before the restart. As you can see, they are all in the Proactive list, even though two can't be contacted. RS might, of course, be including anything from the source list with a "known" IP address, as shown in the second screenie, but that also shows that the "known" IP address persists even across restarts -- so a client will always have a "last known" IP address even if that isn't its current one. (Note that a Remote Backup isn't just "here's my IP, come back to me when you're ready". I deliberately turned off all port forwarding on my router for this test -- the backup session must be initiated client-side, hence the private IP address.) So each Proactive script does appear to have a list of clients, generated from the script's Source list.

This isn't points-scoring on my part -- it helps refine our ideas on how to move forward because: ...is more complicated than needed. If RS interpreted any tag name starting "Remote Backup Clients" as it does now, but also allowed you to create eg "Remote Backup Clients - Group 1" and "Remote Backup Clients - Group 2" so you could add different "remote groups" to different scripts, everything would work just as with other tags -- no messing around with the client and no problem with your China example: just update the tag selection on the server and you're done. If that's a bit clunky, you could have a special class of tags, "Remote Backup tags", that could have any name and yet still be understood as Remote tags.

Thanks David -- this is really helpful in trying to work out a way forward for our setup. Even if nothing comes of it, it's had me poking around and trying different ways of using what's already available. Including -- gulp -- Storage Groups!
  14. Agreed -- and, IMHO, that's the wrong place to do it anyway. Proactive scripts, the only ones we're concerned with, generate then periodically refresh a list of clients to watch out for. That's where the logic should be implemented -- list generation. Wrong problem. You can have multiple Proactive scripts for Remote Backup clients -- the issue is that all Remote Backup clients are included in all Proactive/Remote scripts.

I do like your solution of multiple user-definable Remote Backup tags. Obvious to the Admin, in keeping with what is already there, and presumably easiest to implement (always easy to say that when on the outside!). Another option would be to keep the single tag but add AND logic to client-list generation from tags. They could even go further and replicate Rules within source-selection-by-tag -- think how powerful that would be, with the advantage of using the same structure as Rules. A lot more work for the programmers though, and probably of little use to the majority of users. A third would be to divorce Remote Backups from tags altogether. It could instead be considered as both a script attribute (this script accepts Remote clients as sources) and a client attribute (this client advertises its IP to the server) and so be moved to being "Options" for both. That makes sense in that Remote Backups are a functional attribute whereas tags are organisational but, again, it would be a lot more work for the programmers and could also throw up client compatibility issues.

If I was in charge of road-mapping I'd put the first solution as a priority, the third as something to work towards, and dismiss the second as nice-but-unnecessary. Meanwhile I'll carry on looking for a way to work with what we've got, whether that means a single Remote Backup set with a stupidly large catalog or taking the hit of the Rules-based solution in an attempt to keep catalogs a manageable size.
  15. Finally got around to having a play with this. While RS17 still treats tags as "OR" when choosing which clients to back up in a script, and you can't use tags within a rule, you can use "Source Host" as part of a rule to determine whether or not a client's data is backed up by a particular Remote-enabled script. It involves more management, since you'd have to build and update the "Source Host" rules for each script, but there's a bigger problem: Omitting by Rule is not the same as not backing up the client. That's worth shouting -- the client is still scanned, every directory and file on the client's volume(s) or Favourite Folder(s) will be matched, a snapshot will be stored, and the event will be recorded as a successful backup. It's just that no data will be copied from client to server. (TBH, that's the behaviour I should have expected from my own answers in other threads about how path/directory matching is applied in Rules.)

So if you have 6 Proactive scripts, each backing up 1 of 6 groups of clients daily to 1 of 6 backup sets, every client will be "backed up" 6 times, with just 1 of those resulting in data being copied. That's a lot of overhead, and may not be worth it for the resulting reduced (individual) catalog size. Also note: a failed media set or script will not be as obvious, since it won't result in clients going into the "No backup in 7 days" report -- the "no data" backups from the other scripts are considered to be successful.

For me, at least, Remote Backups is functionality that promises much but delivers little. Which is a shame -- if Remote Backup was a script or client option rather than a tag/group attribute, or if tag/group attributes could be evaluated with AND as well as OR logic, I think it would work really well.