
Nigel Smith

Everything posted by Nigel Smith

  1. Manual ipsave has been working for years, well before Remote Clients were even thought of. Particularly useful for dual-NIC machines where eg the primary interface (which RS Client will generally try to bind to) is the general network or "outside world" while the secondary (which you want RS Client to bind to) is connected to a backup or "internal" network. It's most useful for machines with static IPs since you just do it the once. As said before, it's also a good work-round for the "confused client" issue for those who don't allow their users to turn the Client off-and-on-again. But it is more involved since the user (or a script) has to determine the IP address to be used. Note that pretty much anything done server-side will not help because the "confused client" is rarely bound to an address that the server can reach.
  2. His contention is that it's a lot easier for his users to simply turn RS off and on than it is to find the new IP address and use that (somehow!) in an ipsave command -- and I have total sympathy with his view! That would be my default fix too -- as long as (as you've also said) admins haven't disallowed turning off RS.
  3. I'm a bit lost as to how far you have (or haven't!) got. Do you know the version of Retrospect used to create these backups? If it's *really* old you may be in trouble -- I don't think current versions can handle pre-v6 backup sets. Anyway, try Rebuild, select "Disk", "Add Member", navigate to and choose the folder containing your RDB files and you should get a list -- that'll also tell you if the files are encrypted. Pick the "earliest" RDB file -- and hope the files are undamaged after all this time. Choosing the correct directory in "Add Member" isn't always obvious, maybe post a screenshot of the directory structure of the external drive if you are getting stuck.
  4. The problem isn't with the network being joined. When you switch between networks on Windows there can be a "temporary bind" to the self-assigned IP because there's a period when no external network is available. Think of an old-fashioned A/B rotary switch -- as you turn it from A to B there's a moment when neither A nor B are connected. The problem is that when the new network is joined, either Windows doesn't tell RS Client or RS Client refuses to listen (or both! Or something else -- this is Windows, after all 😉). We can nudge things with stop/start, ipsave, or your suggested automation, but these are just workarounds until the engineers (from whichever company) fix the problem.
  5. Neither of which will work, because it is an RS client/OS problem. You can see how it should work with your Mac. Have the Client Preferences open while you are on your ethernet network, then unplug the ethernet and join a wireless network. RS Prefs will read "Client_name Multicast unavailable" in the "Client name:" section for a while (still bound to the unplugged ethernet) and then switch to the new IP address and read "Client_name (wirelessIPAddress)". (Going from memory, exact messages may be different, but you can see a delay then a re-bind to the new primary IP.) But in the same situation, Windows RS Client will go from the ethernet-bind to self-assigned-IP-bind but not then switch to the new wireless primary IP -- it gets stuck on the self-assigned address. Whether that's RS Client or Windows "doing it wrong" is something they can argue about amongst themselves... It does suggest another solution, though. That self-assigned IP is always in the 169.254.*.* subnet. If you are in a single-subnet situation and can configure your DHCP server appropriately, you could have your network only use addresses in the 169.254.*.* range -- then both DHCP- and self-assigned addresses will be in the same subnet and the client will always be available.
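The "stuck on a self-assigned address" state is easy to spot programmatically, since APIPA/link-local addresses always fall in 169.254.0.0/16 (per RFC 3927). A minimal, hedged sketch of a check that a watchdog script could use -- the address is an example value, and what you do when the check fires (restart the client, run ipsave, alert the user) is up to your setup:

```shell
#!/bin/sh
# Minimal sketch: detect whether an address is a self-assigned (APIPA /
# link-local) address in 169.254.0.0/16, per RFC 3927.
is_link_local() {
  case "$1" in
    169.254.*) return 0 ;;   # self-assigned -- client is probably "stuck"
    *)         return 1 ;;
  esac
}

# Example value -- on a real machine you'd feed in the address the RS
# client is currently bound to.
if is_link_local "169.254.98.7"; then
  echo "stuck on a self-assigned address -- nudge the RS client"
fi
```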
  6. I'm constantly repeating similar advice to our Mac users. FileVault (macOS's similar feature) may be great for securing their files, but it makes frequent usable backups even more important because a failed drive usually means the loss of the data on it. So we're on the fence about whether to use it -- we have way more failed disks than lost/stolen laptops and work data isn't particularly sensitive/valuable so, in what is virtually a BYOD environment, it's up to the user whether they want the extra security for their personal stuff and if so they can take on extra responsibility for their backups.
  7. Try the following: Ctrl-click the "top level" item of the list. If your version of RS behaves like mine that will "open" all the sub-folders with just one click. You may have to try other modifier keys -- I'm currently using a Mac which is Remote Desktopping into another Mac which is Microsoft Remote Desktopping into the RS Server PC. So there's some confusion as to which keys are doing what and where!
  8. Most setups, unlike yours, use DHCP for the majority of their machines and/or have wireless enabled. While static addresses are a fix for the binding problem, that isn't always practical (eg a workplace with more potentially-connected devices than available IP addresses) or a complete solution (eg my laptop may be static on "my" ethernet, but if I take it to another department I'll have to use wireless). Unfortunately my post was solving a different problem, which is probably specific to our setup, and won't help here. So currently the only solutions appear to be:
Static IPs on clients (may not be practical), or
Turn RS client off and on again (only possible where you allow this), or
Use the "ipsave" command
Note that rebooting the computer often doesn't solve the problem -- whatever glitch resulted in RS Client being bound to the Windows Private IP before the OS registered the NIC's "new" DHCP-assigned address is usually repeated. I'd use the "ipsave" command since we don't let users turn off RS Client -- ideally this would be scripted, since I don't trust users to notice there's a problem, and it'll be something I'll have to look at when we redeploy RS across our estate if this is a frequent problem for us.
Shouldn't be an issue, since you should be extending your network and so using the same router for all DHCP assignments, rather than adding a second network complete with second DHCP server and gateway connecting back to your original network etc. Extension will give you far fewer configuration headaches -- unless you need that second, segregated, network for some reason.
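The awkward part of scripting ipsave is working out which of the machine's addresses to save. A hedged sketch of that selection step: given the prefix of the backup network and the machine's current addresses, pick the matching one. The addresses and the "10.10." prefix are made-up examples, and the actual ipsave invocation is left as a comment because its exact syntax depends on your client version -- check the Retrospect client documentation before wiring this up:

```shell
#!/bin/sh
# Hypothetical sketch: pick the local address that belongs to the backup
# subnet, so a script can hand it to the Retrospect client's ipsave facility.
# pick_ip PREFIX "IP1 IP2 ..." -> echoes the first address matching PREFIX
pick_ip() {
  prefix="$1"; shift
  for addr in $1; do
    case "$addr" in
      "$prefix"*) echo "$addr"; return 0 ;;
    esac
  done
  return 1
}

# On a real machine you'd gather the addresses from the OS, e.g.:
#   addrs=$(ifconfig | awk '/inet /{print $2}')
# then run the client's ipsave command with the result (exact invocation
# varies by platform and client version -- see the vendor docs).
addrs="192.168.1.23 10.10.0.5 169.254.12.34"   # example values only
backup_ip=$(pick_ip "10.10." "$addrs") && echo "Would ipsave: $backup_ip"
```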
  9. Setting up the VPN(s) is easy -- using them can be a pain in the butt if you are frequently changing. Leave the current app, go to Settings, go to the VPN section, turn off the one you're using, find the one you want to use, turn it on, go back to the app, wait while Retrospect iOS resyncs with the now-available server (yawn...) and errors for all the others... Not a problem for me, with one VPN for RS. But for Malcolm, with maybe a dozen clients each with their own server on their own VPN -- that's a lot of swiping and waiting just to check that things are OK. And the above is exacerbated by the lack of iOS Shortcut support for VPN settings. On my Mac I just have to mouse up to the menu bar and select the VPN I want to use from my "Scripts" icon and the script does the rest -- closing anything particular to the current VPN, switching VPNs, opening anything particular to the new VPN, etc. "Single pane of glass" makes monitoring multiple RS instances much easier. Unfortunately, if those instances are on networks requiring different VPNs, that means RMC or a roll-your-own monitoring solution.
  10. Subnet detection works, and always has. The problem isn't detection by the server, but that the RS client stops listening. It doesn't "rebind" when the client machine's IP address changes -- both MrPete's stop/start and the "ipsave" command David mentions above will solve this, but both mean that the user has to know that there's a problem (unless you can automate it, triggered by the network change). This isn't just a problem with Retrospect and Windows, I've seen it with other "listener" daemons and other OSs.
WRT Drobos -- to (again!) echo David, one of the reasons we started using them when they first came to the UK was the ability to use different size disks. We could buy an enclosure, put a couple of disks in and then later upgrade it by adding bigger-but-now-cheaper disks, without having to copy all the data off, swap disks, reformat the RAID, and copy the data back on... Now that big disks are (relatively) cheap and other NASs allow "volume expansion" on the fly (with limitations) it's not such a selling point to us -- but it might be for me at home. Another good point was that you could move all the disks from one enclosure to another and the Drobo volume(s) would just work as before -- useful if eg a PSU failed. A downside at the time was their (lack of) speed, probably because of the overheads of their proprietary "RAID" system on "consumerish" hardware -- I daresay things are much better now, but I haven't tried any of the current models.
  11. It would appear that "sloppily written" really means "Goes against my belief that..." ...so the official documentation stating that you can switch seamlessly between local and remote backups -- including an example using the industry standard method of onboarding work computers that are to be used remotely -- and the way Retrospect has been shown to work in practice are, quite simply, wrong. They must be wrong, because you say they are. So there really is no point continuing...
  12. You might want to edit your post following the results of my last test in the post before, where an "AlwaysRemote" client was indeed added without user intervention to the server and can therefore be seen in the server's Sources list -- what I'm calling a "client record" because that's where you set things like client options and tags.
  13. And the bad news is -- it does... "But Nige," I hear you say, "surely that's a good thing, allowing us to onboard Remote clients without making them come to the office?" I don't think so, because Remote Clients are automatically added:
...without the "Automatically Added Clients" tag, so there's no easy way to find them
...with the "Remote Backup Clients" tag, which means they will be automatically added to all Remote-enabled Proactive scripts
...with the client's backup "Options" defaulting to "All Volumes", so external drives etc will also be included
I let things run and, without any intervention on my part, the new client started backing up via the first available Remotely-enabled script.
Edit to add: I didn't notice this last night, but it appears that the client was added with the interface "Default/Direct IP", in contrast to the clients automatically added from the server's network, which were "Default/Subnet". I don't know what this means if my home router's IP changes or I take the laptop to a different location (will the server consider it to now be on the "wrong" IP and refuse to back it up?) or if I now take it into work and plug in to the network (will the server not subnet-scan for the client, since it is "DirectIP"?). End edit
Given the above I'd suggest everyone think really carefully before enabling both "Automatically add..." and "Remote Client Backups" unless they retain control of client installation (eg over a remote-control Zoom session) -- especially since I've yet to find out what happens if you have a duplicate client name (the next test, which I'm struggling to work out how to do).
  14. Jon, did you ever get anywhere with this? Just had a silly thought -- if the problem was that your Proactive scripts were running but none of your remote clients were getting backed up, make sure you didn't install Retrospect client on your server (see the very last point in the Remote Backup KB article).
  15. The trite answer is "Easily!". Note that tags are not applied to "client machines" -- they never have been, and the "Remote Backup Clients" tag is no different. They are applied to the "client record" on the server. They effectively say "allow incoming calls from this client" (when in the client record) and "allow incoming calls to this script" (when used in the script's Source). So I'm proposing a "class" of Remote tags, each instance having a user-defined attribute -- they all say "allow incoming" but finesse it with "and direct them to scripts with the matching attribute". Perhaps it is easier to think of them the other way round -- they would behave exactly the same as all other tags and, additionally, allow remote access.
Are you sure? 😉 You've actually raised the next issue I want to test -- does automatic onboarding work with Remote clients? I haven't bothered with that yet, because our use of Favourites mandates local addition, but now I've time to scrub a machine and start a client from scratch again.
Sorry David, but that's just plain wrong. From the KB article: "Clients can seamlessly switch between network backup on a local network to remote backup over the internet. You do not need to set up the remote backup initially. You can transition to it and back again" -- true as far back as Nov 2018.
  16. Briefly stepping away from the "David & Nigel Show" and back to OP's original question -- Fred, I hope you're still with us! At the moment, no. But rebuilding is parallelised, and once any client is complete it can be backed up again even while others are rebuilding. But I don't fancy doing multi-terabyte rebuilds for just one corrupt catalog either, and I can't (yet!) find a way of using backed up catalogs and the Repair function to shorten the process. I suspect that the "gold standard" work-around would be to:
Remove "broken" client from original set's Source list (either machine entry or tags)
Create a new Storage Group media set
Move the folder of the backup files of the "failed" client to the new Storage Group, putting it in the same place in the SG hierarchy as it was
Rebuild the new Storage Group media set
Check you can restore from the rebuilt set
Delete backup files and client catalog from original set
Do a Transfer of the now-rebuilt backups from the new set to the original
Check that original set's version of client is now working again
Re-add client to original set's Source list
Delete new Storage Group data and catalogs
I haven't tested this, but it seems like it should work. Try it yourself on test data before using in anger! This shortcut, however, I have tried:
Remove "broken" client from original set's Source list (either machine entry or tags)
Create a new Storage Group media set
Copy the folder of the backup files of the "failed" client to the new Storage Group, putting it in the same place in the SG hierarchy as it was
Rebuild the new Storage Group media set
Check you can restore from the rebuilt set
Replace original catalog with the rebuilt one
Check that original set's version of client is now working again
Re-add client to original set's Source list
Delete new Storage Group data and catalogs
And it works! But I'd urge you to give it a thorough thrashing in test mode before entrusting any of your precious data to this method...
Both methods will do what we want -- rebuild just one client catalog out of many. The first requires an extra Transfer operation, the second the extra disk space for the data copy. Neither comes with any official stamp of approval 🙂 Try them both and see what you think.
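The file-shuffling part of the "copy" shortcut above can be sketched in shell. Everything here is hypothetical -- the paths and folder names are demo stand-ins, not real Retrospect Storage Group locations, and the Retrospect-side steps (rebuild, verify, swap catalogs) are left as comments. As said, test on scratch data first:

```shell
#!/bin/sh
# Hedged sketch of the "copy" shortcut, using throwaway demo paths --
# substitute your real Storage Group locations before using in anger.
set -e
BASE="${BASE:-/tmp/rs-sg-demo}"       # demo root (hypothetical)
ORIG_SG="$BASE/OriginalSG"            # original Storage Group
NEW_SG="$BASE/RebuildSG"              # freshly created Storage Group
CLIENT="BrokenClient"                 # failed client's folder name

# Demo stand-in for real backup data:
mkdir -p "$ORIG_SG/$CLIENT"
echo "demo" > "$ORIG_SG/$CLIENT/AA000001.rdb"

# Recreate the client's position in the new Storage Group hierarchy, then
# copy (not move) its backup files across, leaving the original untouched.
mkdir -p "$NEW_SG"
cp -Rp "$ORIG_SG/$CLIENT" "$NEW_SG/"
# ...now rebuild the new set in Retrospect, verify a restore from it, and
# only then replace the original catalog with the rebuilt one.
ls "$NEW_SG/$CLIENT"
```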
  17. The videos don't show that (although that's how the clients were installed, registered, and Favourites defined) -- as you can see, the tag changes were done to a machine that couldn't be contacted by the server (because it was at home and turned off!). The videos show that the list of machines ProactiveAI is going to try to back up is based solely on server-set tags, without reference to or need of contact with the client itself. So it's pretty obvious that I'm not forgetting that tags don't appear on client machines.
Yes, but it isn't a huge leap to have multiple Remote tags that can be assigned server-side to both script and client and that message interpreted by the RS engine as "back me up with any matching Proactive script; legitimised because I have this public key stored in my pubkey.dat." It's almost the same solution as you propose but with three small and one big difference. The small ones are "no faffing with the client when changes need to be made", things are obvious to the Admin (no digging for secret strings), and the fact that adding Remote clients to scripts is exactly the same as adding any other tagged client (consistency is good). The big one is that you could apply multiple Remote tags to the client record so they can be backed up with multiple scripts (doable with multiple secret strings but nowhere near as easy or obvious to the Admin).
I think they've assumed you'd have to be a real idiot to want more than one remote-enabled script -- they've got me down to a T 🙂 From my playing, it alternates between the two available scripts -- but that's probably because there's no contention due to low client numbers and use of Storage Groups on both. I would hope it follows the "usual rules" -- if more than one destination is available for backup, pick the least-recently (yuck! Pardon my English, but I can't think of a better way of putting it) used.
  18. SMB is using the Windows short name because the name isn't being encoded for compatibility by the OS X SMB daemon -- and that's because the data is being put on the server by AFP clients. The permanent fix to this is:
1. Lock out all your users
2. On the Mac Pro, mount the share using "afp://IP_address/sharename"
3. Still on the Pro, mount the share again using "smb://IP_address/sharename"
4. Move the troublesome data from the AFP-share window to the SMB-share window
5. Turn off AFP on the Synology and force your users to SMB from now on, and maybe look at how to turn off signing in SMB config
Step 4's the tricky bit -- set the windows to different views so you can keep track of which is which and decide how you are going to manage the transfer wrt clashing names, deletion of "AFP versions" when complete, etc. I'd probably do one folder at a time: copy that into a folder at the same level named "Transfer", delete the original, move the contents of "Transfer" to where the original was.
(All the above just tested and confirmed with Catalina, an up-to-date Synology, a folder on the Mac named as yours with contents created by
echo "Here's some text in a file" | tee "Animation (smaller)?"{0..9}.txt > /dev/null
...with an additional "testFile.txt" file to make sure "normal" files were also OK, and then copied to the Syn using AFP.)
Or you could insist that everyone stays using, and only using, AFP, and pray it remains supported for as long as you need it...
I may have given you a bum steer -- it looks like (for AFP and SMB at least) RS is using the system APIs to mount shares. If I turn the RS engine off and restart, my server shows no network mounts in /Volumes. If I turn on the RS engine the shares connected to via RS's Sources then appear in /Volumes, and if I add another share there it appears in /Volumes too. RS adds them as root/wheel while mounting via the Finder (non-root account) adds them as admin/staff -- entirely reasonable.
Both work as expected from above -- SMB-mounted shows AFP-added files by short name and SMB-added files by full name complete with non-standard characters. AFP-mounted shows both versions with their full names. Both mount methods back up all files without error, so it doesn't look like the filenames are causing your errors in and of themselves. I still think you are losing connection with the server for some (unrelated to filenames) reason. As said before, RS backup-in-progress is much more sensitive to temporary disconnects than "normal" file sharing is, so it may be that your other users simply don't notice when it happens. It sounds like you have some time for testing -- if so I'd suggest you go back to scratch. Static IP on just one Syn interface, static on Mac Pro, direct cable connection between the two, does the problem still occur? If not, slowly add complexity until it does.
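The folder-at-a-time shuffle described in the fix above (copy into a sibling "Transfer" folder, delete the original, move the contents back) can be sketched like this. The paths are demo stand-ins -- on the real server the two windows would be the AFP and SMB mounts of the same share under /Volumes, and it's the SMB daemon doing the name re-encoding, which a plain local copy obviously doesn't reproduce:

```shell
#!/bin/sh
# Hedged sketch of the folder-at-a-time shuffle, on throwaway demo paths.
# In real use the copy goes from the AFP-mounted window to the SMB-mounted
# window of the same share; here both "sides" are one local directory.
set -e
SHARE="${SHARE:-/tmp/share-demo}"   # hypothetical stand-in for the share
mkdir -p "$SHARE/Projects"
echo "x" > "$SHARE/Projects/Animation (smaller)?1.txt"

# 1. Copy the folder into a sibling "Transfer" folder (in real use, via the
#    SMB window, so names get re-encoded on the way in).
mkdir -p "$SHARE/Transfer"
cp -Rp "$SHARE/Projects" "$SHARE/Transfer/"
# 2. Delete the original ("AFP version").
rm -rf "$SHARE/Projects"
# 3. Move the re-encoded copy back to where the original was.
mv "$SHARE/Transfer/Projects" "$SHARE/Projects"
rmdir "$SHARE/Transfer"
ls "$SHARE/Projects"
```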
  19. It'll still show as a share (because it is!) but you should see a difference in the "Name" and "Machine" columns of the Shares list -- a share mounted via the server's OS will show the volume name in "Name" and the server's name in "Machine", a share mounted by RS will show "user@IP_or_FQDN/share" and either IP, FQDN or name.local depending on how you addressed it. We're trying to make sure that if you are using SMB you are doing so via OS X's smbd -- I don't know if RS uses that or its own routines, but I do know that OS X's smbd does lots of trickery to encode/decode funky Mac filenames for use on less supportive SMB shares. Have you got any examples of filenames that are failing? I can try and replicate your problem, albeit using RS17 rather than 16, if you think it might help.
  20. As excuses go, that's definitely in the top three! I hope all the important things -- the people! -- were undamaged. Have you tried mounting the share on the Mac (still using AFP) and treating it as a local volume, rather than letting RS do it? Also this thread throws up an interesting suggestion -- use the IP address rather than relying on Bonjour resolution (static IP on the server an advantage, obv).
  21. My testing setup was as close as I could get to our proposed "production" use -- no point in doing otherwise in this case. So... No -- I took the laptops into work, wiped them, got them on the ethernet, installed RS client, registered them with RS Server and created Favourites. I then took them home. From home I VPNed from my desktop into the work RS Server and set up the test scripts and tags. I then turned on the laptops, got them connected to my home wireless network, and let things happen -- the laptops were at no time connected via VPN.
Sidenote: If it was created using port forwarding, the client record "Interface" would necessarily be "Direct IP" and it would be my router's public address that was shown. That's because there's no way (at least that I can find, correct me if I'm wrong) to change from "Direct IP" to "Multicast" or "Subnet Broadcast" if you can't actually contact then select the client using the changed method.
Sidenote 2: Apologies for any confusion, but the RS Server I'm testing with is the latest update to v17. "RS_16_Test" is the previous v16 test server which now has the engine turned off so is client-only. I was short on time and so grabbed an available machine -- I really should have thought ahead and renamed it!
Again no -- I've shown that the list of clients that Proactive is waiting to access is generated on the fly using the script's Source list, without reference to the clients' locations or availability. And that the Source list is updated, without need to start/stop the script, either periodically or in response to tag changes. You can try this yourself by setting up a new Proactive script using a new tag that hasn't been applied to any clients, start it running, then applying that tag to a client -- after a few moments the client will appear in the ProactiveAI list.
And if you can't do that yourself -- here's a video: https://youtu.be/LiZ8b6KZ3OY And inB4 "but Remote Backup Client tags are different": https://youtu.be/4Ok_z2Lrn9A
And again no -- though I'm much less certain on this one 😉 -- the ProactiveAI list doesn't contain IP addresses, they're held in the Sources record for the client. I'd guess that Proactive signals the Engine which client is to be backed up and the Engine refers to the Sources record for the client to determine connection type etc. I don't know how that works with Remote clients. But it isn't really important to the discussion.
Agreed. But... The current kludgey perversion binds all so-tagged scripts to all so-tagged clients. I want to limit that binding so that a client can be bound to one, some, or all scripts as I desire. It can't be beyond the wit of man to have multiple Remote Backup Clients tags, each with a unique identifying attribute, so that matching-tagged scripts bind to matching-tagged clients -- after all, that's exactly how all the other tags work, and there's no messing around with the client installation required for those.
  22. Sorry, but that's irrelevant. "Automatically add clients" is for onboarding (note "check network for new clients" [my emphasis] in your quote) so Joe Bloggs can do his own client install and the server will automatically register the new client -- rather than the usual install client, go to server, manually discover client on network, add to server. The tag is preserved so you can use it in scripts later on. It's neat, but not so useful to Admins like me who always define Favourites on clients -- since the client must be online to do that, you may as well add them manually. I can see it would be good for anyone who always backs up whole clients (you can also apply Rules, obviously), since it removes the requirement for both client and Admin to be available when onboarding. Anyways...
Take another look at my first screenshot. That's the list of clients the SG_Test Proactive script is waiting to back up, straight after a server restart. The MacBook Air is asleep, and has been for the previous 3 hours -- it isn't online, it can't have sent any message to the server since the restart, yet the server knows it is to be backed up to SG_Test. So the list must be generated regardless of any client's availability.
Note: To be even clearer -- the server, "Luggage", and "RS_16_Test" are at work, and all on the same network. "admin2's MacBook" and "Admin's MacBook Air" are at home, behind a NATing router with no port forwarding, not using a VPN -- they cannot be contacted by the server, and any backup network session must be initiated by them.
And that's why, as you point out, it says what it says on p228 of the UG. The server generates the Proactive script's client list from the script's source list, without reference to availability; the script then iterates through the list in order of "need". (I don't know what it does for "unavailable" clients, whether it skips them or tries a multicast/subnet broadcast before moving on.
That "last seen at home" Remote client may now be back on the work ethernet -- does the now-local client report its presence to the server or does the server discover it?) Seriously -- try it for yourself. Add a new client to the server, but don't tag it. Turn the client computer off. Make a new Proactive script but don't run it, create a tag for the client, add the tag to the script's Sources, then start the script and look at what it is waiting to back up -- you'll see your turned-off client in the list. I don't have enough test machines to find out what happens when a Remote client that is due for backup advertises its availability but the script has other, more "in need" clients available. Perhaps another good reason to use Storage Groups, to avoid that very problem... No, it's a different approach to the same end -- you talk of "storing a suffix string on the client", I'm doing it wholly with tags on the server. Analogy time! (Yes, I'm twiddling my thumbs while a web server rebuilds -- apologies...) Acme Explosives are a careful company -- as you'd expect, given the business they're in. Every day each manager (Proactive script) phones each of their members of staff (RS client) to ask for a report on what had happened (backup) since they last spoke. Then Covid-19 hit. Acme furloughed most of their staff, set a few to work from home (Remote clients), and kept one manager in the office. The manager didn't know anyone's home number so, instead, the home-workers phoned in their daily reports to that manager, and everything was good. As the pandemic dragged on, more and more workers were taken off furlough and set to work from home. The single manager couldn't handle all the reports so Acme had to put more managers on site. 
Although they each had a phone, they all shared the same number, so there was no control over who collected a worker's report -- each worker phoned in again and again until they'd all recorded it (roughly the current situation in RS with multiple Remote-enabled Proactive scripts) or each one listened to the report and either recorded it or said "Sorry, not my responsibility, please call back and try again" (my work-round using Rules). Being smart businessmen, Acme called in the consultants. One proposed that each manager had an extension number and that each home-worker's phone be programmed to speed-dial the appropriate extension. Easy for both the worker and Acme once set up, but if the home-worker's manager changed a technician would have to go to their house and reprogram the speed-dial. The other also suggested extensions, but that all calls in were to a central number and then routed by an operator to the appropriate manager, as shown by tags on the switchboard. A bit more work on every call -- but easily changed if required, without the operator leaving their desk. History has yet to record which solution they eventually went with. But it must have been successful because, since then, business at Acme Explosives is booming... Sorry -- just couldn't resist 🙂
  23. I believe you're wrong. I can't be absolutely sure of how it works, I'm not an RS engineer, but here's a couple of screenshots immediately following a full restart of the server: Apologies for the fuzziness -- remote desktopping in a remote desktop session can play havoc with screenshots 🙂 On the same network as the server, "Luggage" is a Direct IP client and "RS_16_Test" is subnet broadcast. At home, ie Remote clients, "admin2's MacBook" finished a backup to "SG_Test" one minute before the server restart and is still awake/available, "Admin's MacBook Air" has been asleep since its last backup more than four hours before the restart. As you can see, they are all in the Proactive list, even though two can't be contacted. RS might, of course, be including anything from the source list with a "known" IP address, as shown in the second screenie, but that also shows that the "known" IP address persists even across restarts -- so a client will always have a "last known" IP address even if that isn't its current one. (Note that a Remote Backup isn't just "here's my IP, come back to me when you're ready". I deliberately turned off all port forwarding on my router for this test -- the backup session must be initiated client-side, hence the private IP address.) So each Proactive script does appear to have a list of clients, generated from the script's Source list. This isn't points-scoring on my part -- it helps refine our ideas on how to move forward because: ...is more complicated than needed. If RS interpreted any tag name starting "Remote Backup Clients" as it does now, but also allowed you to create eg "Remote Backup Clients - Group 1" and "Remote Backup Clients - Group 2" so you could add different "remote groups" to different scripts, everything would work just as with other tags -- no messing around with the client and no problem with your China example, just update the tag selection on the server and you're done. 
If that's a bit clunky you could have a special class of tags, "Remote Backup tags" that could have any name and yet still be understood as Remote tags. Thanks David -- this is really helpful in trying to work out a way forward for our setup. Even if nothing comes of it, it's had me poking around and trying different ways of using what's already available. Including -- gulp -- Storage Groups!
  24. Agreed -- and, IMHO, that's the wrong place to do it anyway. Proactive scripts, the only ones we're concerned with, generate then periodically refresh a list of clients to watch out for. That's where the logic should be implemented -- list generation. Wrong problem. You can have multiple Proactive scripts for Remote Backup clients -- the issue is that all Remote Backup clients are included in all Proactive/Remote scripts. I do like your solution of multiple, user-definable, Remote Backup tags. Obvious to the Admin, in keeping with what is already there and presumably easiest to implement (always easy to say that when on the outside!). Another option would be to keep the single tag but to add AND logic to client list generation from tags. They could even go further and replicate Rules within source-selection-by-tag -- think how powerful that would be, with the advantage of using the same structure as Rules. A lot more work for the programmers though, and probably of little use to the majority of users. A third would be to divorce Remote Backups from tags altogether. It can instead be considered as both a script attribute (this script accepts Remote clients as sources) and a client attribute (this client advertises its IP to the server) and so be moved to being "Options" for both. That makes sense in that Remote Backups are a functional attribute whereas tags are organisational but, again, would be a lot more work for the programmers and could also throw up client compatibility issues. If I was in charge of road-mapping I'd put the first solution as a priority, the third as something to work towards, and dismiss the second as nice-but-unnecessary. Meanwhile I'll carry on looking for a way to work with what we've got, whether that means single Remote Backup set with a stupidly large catalog or taking the hit of the Rules-based solution in an attempt to keep catalogs a manageable size.
  25. Finally got around to having a play with this. While RS17 still treats tags as "OR" when choosing which clients to back up in a script, and you can't use tags within a rule, you can use "Source Host" as part of a rule to determine whether or not a client's data is backed up by a particular Remote-enabled script. It involves more management, since you'd have to build and update the "Source Host" rules for each script, but there's a bigger problem: Omitting by Rule is not the same as not backing up the client.
That's worth shouting -- the client is still scanned, every directory and file on the client's volume(s) or Favourite Folder(s) will be matched, a snapshot will be stored, and the event will be recorded as a successful backup. It's just that no data will be copied from client to server. (TBH that's the behaviour I should have expected from my own answers in other threads about how path/directory matching is applied in Rules.) So if you have 6 Proactive scripts, each backing up 1 of 6 groups of clients daily to 1 of 6 backup sets, every client will be "backed up" 6 times with just 1 resulting in data being copied. That's a lot of overhead, and may not be worth it for the resulting reduced (individual) catalog size.
Also note: a failed media set or script will not be as obvious, since it won't result in clients going into the "No backup in 7 days" report -- the "no data" backups from the other scripts are considered to be successful.
For me, at least, Remote Backups is functionality that promises much but delivers little. Which is a shame -- if Remote Backup was a script or client option rather than a tag/group attribute, or if tag/group attributes could be evaluated with AND as well as OR logic, I think it would work really well.
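The OR-vs-AND point above can be made concrete with a toy sketch. This is not Retrospect code -- just an illustration, with clients represented as "name:tag,tag" lines, of how OR matching (pick a client if any script tag matches, roughly the current behaviour) differs from the hypothetical AND matching (all script tags must match):

```shell
#!/bin/sh
# Toy illustration of OR vs AND tag matching (not Retrospect code).
clients="laptop1:Remote,Group1
laptop2:Remote,Group2
desktop1:Group1"

matches_or() {   # args: client-tags script-tags; true if ANY script tag matches
  for t in $(echo "$2" | tr ',' ' '); do
    case ",$1," in *",$t,"*) return 0 ;; esac
  done
  return 1
}
matches_and() {  # args: client-tags script-tags; true if ALL script tags match
  for t in $(echo "$2" | tr ',' ' '); do
    case ",$1," in *",$t,"*) ;; *) return 1 ;; esac
  done
  return 0
}

# A script tagged "Remote,Group1": OR sweeps in every Remote client,
# AND narrows it to the Remote clients that are also in Group1.
echo "$clients" | while IFS=: read -r name tags; do
  matches_or  "$tags" "Remote,Group1" && echo "OR  selects $name"
  matches_and "$tags" "Remote,Group1" && echo "AND selects $name"
done
```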