Everything posted by Nigel Smith

  1. You might want to edit your post following the results of my last test in the post before, where an "AlwaysRemote" client was indeed added without user intervention to the server and can therefore be seen in the server's Sources list -- what I'm calling a "client record" because that's where you set things like client options and tags.
  2. And the bad news is -- it does... "But Nige," I hear you say, "surely that's a good thing, allowing us to onboard Remote clients without making them come to the office?" I don't think so, because Remote Clients are automatically added:
     • without the "Automatically Added Clients" tag, so there's no easy way to find them
     • with the "Remote Backup Clients" tag, which means they will be automatically added to all Remote-enabled Proactive scripts
     • with the client's backup "Options" defaulting to "All Volumes", so external drives etc will also be included
     I let things run and, without any intervention on my part, the new client started backing up via the first available Remote-enabled script. Given the above I'd suggest everyone think really carefully before enabling both "Automatically add..." and "Remote Client Backups" unless they retain control of client installation (eg over a remote-control Zoom session) -- especially since I've yet to find out what happens if you have a duplicate client name (the next test, which I'm struggling to work out how to do).
  3. No Proactive scripts running

    Jon, did you ever get anywhere with this? Just had a silly thought -- if the problem was that your Proactive scripts were running but none of your remote clients were getting backed up, make sure you didn't install Retrospect client on your server (see the very last point in the Remote Backup KB article).
  4. The trite answer is "Easily!". Note that tags are not applied to "client machines" -- they never have been, and the "Remote Backup Clients" tag is no different. They are applied to the "client record" on the server. They effectively say "allow incoming calls from this client" (when in the client record) and "allow incoming calls to this script" (when used in the script's Source). So I'm proposing a "class" of Remote tags, each instance having a user-defined attribute -- they all say "allow incoming" but finesse it with "and direct them to scripts with the matching attribute". Perhaps it is easier to think of them the other way round -- they would behave exactly the same as all other tags and, additionally, allow remote access. Are you sure? 😉 You've actually raised the next issue I want to test -- does automatic onboarding work with Remote clients? I haven't bothered with that yet, because our use of Favourites mandates local addition, but now I've time to scrub a machine and start a client from scratch again. Sorry David, but that's just plain wrong. From the KB article: "Clients can seamlessly switch between network backup on a local network to remote backup over the internet. You do not need to set up the remote backup initially. You can transition to it and back again" -- true as far back as Nov 2018.
  5. Briefly stepping away from the "David & Nigel Show" and back to OP's original question -- Fred, I hope you're still with us! At the moment, no. But rebuilding is parallelised, and once any client is complete it can be backed up again even while others are rebuilding. But I don't fancy doing multi-terabyte rebuilds for just one corrupt catalog either, and I can't (yet!) find a way of using backed-up catalogs and the Repair function to shorten the process. I suspect that the "gold standard" work-around would be to:
     1. Remove the "broken" client from the original set's Source list (either machine entry or tags)
     2. Create a new Storage Group media set
     3. Move the folder of the backup files of the "failed" client to the new Storage Group, putting it in the same place in the SG hierarchy as it was
     4. Rebuild the new Storage Group media set
     5. Check you can restore from the rebuilt set
     6. Delete the backup files and client catalog from the original set
     7. Do a Transfer of the now-rebuilt backups from the new set to the original
     8. Check that the original set's version of the client is now working again
     9. Re-add the client to the original set's Source list
     10. Delete the new Storage Group data and catalogs
     I haven't tested this, but it seems like it should work. Try it yourself on test data before using it in anger! This shortcut, however, I have tried:
     1. Remove the "broken" client from the original set's Source list (either machine entry or tags)
     2. Create a new Storage Group media set
     3. Copy the folder of the backup files of the "failed" client to the new Storage Group, putting it in the same place in the SG hierarchy as it was (see the sketch below)
     4. Rebuild the new Storage Group media set
     5. Check you can restore from the rebuilt set
     6. Replace the original catalog with the rebuilt one
     7. Check that the original set's version of the client is now working again
     8. Re-add the client to the original set's Source list
     9. Delete the new Storage Group data and catalogs
     And it works! But I'd urge you to give it a thorough thrashing in test mode before entrusting any of your precious data to this method... Both methods will do what we want -- rebuild just one client catalog out of many. The first requires an extra Transfer operation, the second the extra disk space for the data copy. Neither comes with any official stamp of approval 🙂 Try them both and see what you think.
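     A minimal sketch of that folder copy, assuming a disk-based Storage Group where each client gets its own folder inside the set's member folder -- the paths below are hypothetical, so check your actual SG hierarchy before running anything:

       # Copy the "failed" client's backup folder into the new Storage Group,
       # keeping its position in the hierarchy the same (paths are hypothetical)
       SRC="/Volumes/Backups/Retrospect/OriginalSG/1-OriginalSG/FailedClient"
       DST="/Volumes/Backups/Retrospect/NewSG/1-NewSG/FailedClient"
       mkdir -p "$(dirname "$DST")"
       # -a preserves times/permissions; the trailing slash copies folder contents
       rsync -a "$SRC/" "$DST/"

     Swap the rsync for a mv if you're doing the first, "gold standard", method and want to move rather than copy.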
  6. The videos don't show that (although that's how the clients were installed, registered, and Favourites defined) -- as you can see, the tag changes were done to a machine that couldn't be contacted by the server (because it was at home and turned off!). The videos show that the list of machines ProactiveAI is going to try to back up is based solely on server-set tags, without reference to or need of contact with the client itself. So it's pretty obvious that I'm not forgetting that tags don't appear on client machines. Yes, but it isn't a huge leap to have multiple Remote tags that can be assigned server-side to both script and client, with that message interpreted by the RS engine as "back me up with any matching Proactive script; legitimised because I have this public key stored in my pubkey.dat." It's almost the same solution as you propose but with three small differences and one big one. The small ones are "no faffing with the client when changes need to be made", things being obvious to the Admin (no digging for secret strings), and the fact that adding Remote clients to scripts is exactly the same as adding any other tagged client (consistency is good). The big one is that you could apply multiple Remote tags to the client record so they can be backed up with multiple scripts (doable with multiple secret strings but nowhere near as easy or obvious to the Admin). I think they've assumed you'd have to be a real idiot to want more than one remote-enabled script -- they've got me down to a T 🙂 From my playing, it alternates between the two available scripts -- but that's probably because there's no contention, due to low client numbers and use of Storage Groups on both. I would hope it follows the "usual rules" -- if more than one destination is available for backup, pick the least-recently (yuck! Pardon my English, but I can't think of a better way of putting it) used.
  7. SMB is using the Windows short name because the name isn't being encoded for compatibility by the OS X SMB daemon -- and that's because the data is being put on the server by AFP clients. The permanent fix to this is:
     1. Lock out all your users
     2. On the Mac Pro, mount the share using "afp://IP_address/sharename"
     3. Still on the Pro, mount the share again using "smb://IP_address/sharename"
     4. Move the troublesome data from the AFP-share window to the SMB-share window (see the sketch below)
     5. Turn off AFP on the Synology and force your users to SMB from now on, and maybe look at how to turn off signing in the SMB config
     Step 4's the tricky bit -- set the windows to different views so you can keep track of which is which, and decide how you are going to manage the transfer wrt clashing names, deletion of "AFP versions" when complete, etc. I'd probably do one folder at a time: copy it into a folder at the same level named "Transfer", delete the original, then move the contents of "Transfer" to where the original was. (All the above just tested and confirmed with Catalina, an up-to-date Synology, and a folder on the Mac named as yours with contents created by
     echo "Here's some text in a file" | tee "Animation (smaller)?"{0..9}.txt > /dev/null
     ...with an additional "testFile.txt" file to make sure "normal" files were also OK, and then copied to the Syn using AFP.) Or you could insist that everyone stays using, and only using, AFP, and pray it remains supported for as long as you need it...
     I may have given you a bum steer -- it looks like (for AFP and SMB at least) RS is using the system APIs to mount shares. If I turn the RS engine off and restart, my server shows no network mounts in /Volumes. If I turn on the RS engine, the shares connected to via RS's Sources then appear in /Volumes, and if I add another share there it appears in /Volumes too. RS adds them as root/wheel while mounting via the Finder (non-root account) adds them as admin/staff -- entirely reasonable. Both work as expected from above -- the SMB-mounted share shows AFP-added files by short name and SMB-added files by full name, complete with non-standard characters; the AFP-mounted share shows both versions with their full names. Both mount methods back up all files without error, so it doesn't look like the filenames are causing your errors in and of themselves. I still think you are losing connection with the server for some (unrelated to filenames) reason. As said before, an RS backup-in-progress is much more sensitive to temporary disconnects than "normal" file sharing is, so it may be that your other users simply don't notice when it happens. It sounds like you have some time for testing -- if so, I'd suggest you go back to scratch. Static IP on just one Syn interface, static on the Mac Pro, direct cable connection between the two -- does the problem still occur? If not, slowly add complexity until it does.
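     A minimal sketch of steps 2-4 from Terminal, if you'd rather script the transfer than drag between Finder windows -- the IP, share name, and folder are hypothetical, mount_afp and mount_smbfs both ship with macOS, and (as ever) test on scratch data first:

       # Mount the same Synology share twice: once over AFP, once over SMB
       mkdir -p /Volumes/Media-afp /Volumes/Media-smb
       mount_afp "afp://user:pass@192.168.1.20/Media" /Volumes/Media-afp
       mount_smbfs "//user@192.168.1.20/Media" /Volumes/Media-smb
       # Move one folder at a time via a "Transfer" staging folder, so the
       # SMB daemon re-encodes the filenames on write
       mkdir -p "/Volumes/Media-smb/Transfer"
       cp -Rp "/Volumes/Media-afp/Animation" "/Volumes/Media-smb/Transfer/"
       rm -r "/Volumes/Media-afp/Animation"
       mv "/Volumes/Media-smb/Transfer/Animation" "/Volumes/Media-smb/Animation"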
  8. It'll still show as a share (because it is!) but you should see a difference in the "Name" and "Machine" columns of the Shares list -- a share mounted via the server's OS will show the volume name in "Name" and the server's name in "Machine", a share mounted by RS will show "user@IP_or_FQDN/share" and either IP, FQDN or name.local depending on how you addressed it. We're trying to make sure that if you are using SMB you are doing so via OS X's smbd -- I don't know if RS uses that or its own routines, but I do know that OS X's smbd does lots of trickery to encode/decode funky Mac filenames for use on less supportive SMB shares. Have you got any examples of filenames that are failing? I can try and replicate your problem, albeit using RS17 rather than 16, if you think it might help.
  9. As excuses go, that's definitely in the top three! I hope all the important things -- the people! -- were undamaged. Have you tried mounting the share on the Mac (still using AFP) and treating it as a local volume, rather than letting RS do it? Also this thread throws up an interesting suggestion -- use the IP address rather than relying on Bonjour resolution (static IP on the server an advantage, obv).
  10. My testing setup was as close as I could get to our proposed "production" use -- no point in doing otherwise in this case. So... No -- I took the laptops into work, wiped them, got them on the ethernet, installed RS client, registered them with RS Server and created Favourites. I then took them home. From home I VPNed from my desktop into the work RS Server and set up the test scripts and tags. I then turned on the laptops, got them connected to my home wireless network, and let things happen -- the laptops were at no time connected via VPN. Sidenote: If it was created using port forwarding, the client record "Interface" would necessarily be "Direct IP" and it would be my router's public address that was shown. That's because there's no way (at least that I can find, correct me if I'm wrong) to change from "Direct IP" to "Multicast" or "Subnet Broadcast" if you can't actually contact, then select, the client using the changed method. Sidenote 2: Apologies for any confusion, but the RS Server I'm testing with is the latest update to v17. "RS_16_Test" is the previous v16 test server, which now has the engine turned off so is client-only. I was short on time and so grabbed an available machine -- I really should have thought ahead and renamed it! Again no -- I've shown that the list of clients that Proactive is waiting to access is generated on the fly using the script's Source list, without reference to the clients' locations or availability. And that the Source list is updated, without need to start/stop the script, either periodically or in response to tag changes. You can try this yourself by setting up a new Proactive script using a new tag that hasn't been applied to any clients, starting it running, then applying that tag to a client -- after a few moments the client will appear in the ProactiveAI list. And if you can't do that yourself -- here's a video: https://youtu.be/LiZ8b6KZ3OY And inB4 "but Remote Backup Client tags are different": https://youtu.be/4Ok_z2Lrn9A And again no -- though I'm much less certain on this one 😉 -- the ProactiveAI list doesn't contain IP addresses, they're held in the Sources record for the client. I'd guess that Proactive signals the Engine which client is to be backed up and the Engine refers to the Sources record for the client to determine connection type etc. I don't know how that works with Remote clients. But it isn't really important to the discussion. Agreed. But... The current kludgey perversion binds all so-tagged scripts to all so-tagged clients. I want to limit that binding so that a client can be bound to one, some, or all scripts as I desire. It can't be beyond the wit of man to have multiple Remote Backup Clients tags, each with a unique identifying attribute, so that matching-tagged scripts bind to matching-tagged clients -- after all, that's exactly how all the other tags work, and there's no messing around with the client installation required for those.
  11. Sorry, but that's irrelevant. "Automatically add clients" is for onboarding (note "check network for new clients" [my emphasis] in your quote) so Joe Bloggs can do his own client install and the server will automatically register the new client -- rather than the usual routine of install client, go to server, manually discover client on network, add to server. The tag is preserved so you can use it in scripts later on. It's neat, but not so useful to Admins like me who always define Favourites on clients -- since the client must be online to do that, you may as well add them manually. I can see it would be good for anyone who always backs up whole clients (you can also apply Rules, obviously), since it removes the requirement for both client and Admin to be available when onboarding. Anyways... Take another look at my first screenshot. That's the list of clients the SG_Test Proactive script is waiting to back up, straight after a server restart. The MacBook Air is asleep, and has been for the previous 3 hours -- it isn't online, it can't have sent any message to the server since the restart, yet the server knows it is to be backed up to SG_Test. So the list must be generated regardless of any client's availability. Note: To be even clearer -- the server, "Luggage", and "RS_16_Test" are at work, and all on the same network. "admin2's MacBook" and "Admin's MacBook Air" are at home, behind a NATing router with no port forwarding, not using a VPN -- they cannot be contacted by the server, and any backup network session must be initiated by them. And that's why, as you point out, it says what it says on p228 of the UG. The server generates the Proactive script's client list from the script's source list, without reference to availability, and the script then iterates through the list in order of "need". (I don't know what it does for "unavailable" clients, whether it skips them or tries a multicast/subnet broadcast before moving on. That "last seen at home" Remote client may now be back on the work ethernet -- does the now-local client report its presence to the server or does the server discover it?) Seriously -- try it for yourself. Add a new client to the server, but don't tag it. Turn the client computer off. Make a new Proactive script but don't run it, create a tag for the client, add the tag to the script's Sources, then start the script and look at what it is waiting to back up -- you'll see your turned-off client in the list. I don't have enough test machines to find out what happens when a Remote client that is due for backup advertises its availability but the script has other, more "in need" clients available. Perhaps another good reason to use Storage Groups, to avoid that very problem... No, it's a different approach to the same end -- you talk of "storing a suffix string on the client", I'm doing it wholly with tags on the server. Analogy time! (Yes, I'm twiddling my thumbs while a web server rebuilds -- apologies...) Acme Explosives are a careful company -- as you'd expect, given the business they're in. Every day each manager (Proactive script) phones each of their members of staff (RS client) to ask for a report on what had happened (backup) since they last spoke. Then Covid-19 hit. Acme furloughed most of their staff, set a few to work from home (Remote clients), and kept one manager in the office. The manager didn't know anyone's home number so, instead, the home-workers phoned in their daily reports to that manager, and everything was good.
As the pandemic dragged on, more and more workers were taken off furlough and set to work from home. The single manager couldn't handle all the reports so Acme had to put more managers on site. Although they each had a phone, they all shared the same number, so there was no control over who collected a worker's report -- each worker phoned in again and again until they'd all recorded it (roughly the current situation in RS with multiple Remote-enabled Proactive scripts), or each one listened to the report and either recorded it or said "Sorry, not my responsibility, please call back and try again" (my work-round using Rules). Being smart businessmen, Acme called in the consultants. One proposed that each manager have an extension number and that each home-worker's phone be programmed to speed-dial the appropriate extension. Easy for both the worker and Acme once set up, but if the home-worker's manager changed, a technician would have to go to their house and reprogram the speed-dial. The other also suggested extensions, but that all calls come in to a central number and then be routed by an operator to the appropriate manager, as shown by tags on the switchboard. A bit more work on every call -- but easily changed if required, without the operator leaving their desk. History has yet to record which solution they eventually went with. But it must have been successful because, since then, business at Acme Explosives is booming... Sorry -- just couldn't resist 🙂
  12. I believe you're wrong. I can't be absolutely sure of how it works, I'm not an RS engineer, but here are a couple of screenshots immediately following a full restart of the server: Apologies for the fuzziness -- remote desktopping in a remote desktop session can play havoc with screenshots 🙂 On the same network as the server, "Luggage" is a Direct IP client and "RS_16_Test" is subnet broadcast. At home, ie Remote clients, "admin2's MacBook" finished a backup to "SG_Test" one minute before the server restart and is still awake/available, "Admin's MacBook Air" has been asleep since its last backup more than four hours before the restart. As you can see, they are all in the Proactive list, even though two can't be contacted. RS might, of course, be including anything from the source list with a "known" IP address, as shown in the second screenie, but that also shows that the "known" IP address persists even across restarts -- so a client will always have a "last known" IP address even if that isn't its current one. (Note that a Remote Backup isn't just "here's my IP, come back to me when you're ready". I deliberately turned off all port forwarding on my router for this test -- the backup session must be initiated client-side, hence the private IP address.) So each Proactive script does appear to have a list of clients, generated from the script's Source list. This isn't points-scoring on my part -- it helps refine our ideas on how to move forward because: ...is more complicated than needed. If RS interpreted any tag name starting with "Remote Backup Clients" as it does now, but also allowed you to create eg "Remote Backup Clients - Group 1" and "Remote Backup Clients - Group 2" so you could add different "remote groups" to different scripts, everything would work just as with other tags -- no messing around with the client and no problem with your China example, just update the tag selection on the server and you're done. If that's a bit clunky you could have a special class of tags, "Remote Backup tags", that could have any name and yet still be understood as Remote tags. Thanks David -- this is really helpful in trying to work out a way forward for our setup. Even if nothing comes of it, it's had me poking around and trying different ways of using what's already available. Including -- gulp -- Storage Groups!
  13. Agreed -- and, IMHO, that's the wrong place to do it anyway. Proactive scripts, the only ones we're concerned with, generate, then periodically refresh, a list of clients to watch out for. That's where the logic should be implemented -- list generation. Wrong problem. You can have multiple Proactive scripts for Remote Backup clients -- the issue is that all Remote Backup clients are included in all Proactive/Remote scripts. I do like your solution of multiple, user-definable, Remote Backup tags. Obvious to the Admin, in keeping with what is already there and presumably easiest to implement (always easy to say that when on the outside!). Another option would be to keep the single tag but to add AND logic to client list generation from tags. They could even go further and replicate Rules within source-selection-by-tag -- think how powerful that would be, with the advantage of using the same structure as Rules. A lot more work for the programmers though, and probably of little use to the majority of users. A third would be to divorce Remote Backups from tags altogether. Remote Backup could instead be considered as both a script attribute (this script accepts Remote clients as sources) and a client attribute (this client advertises its IP to the server) and so be moved to being "Options" for both. That makes sense in that Remote Backups are a functional attribute whereas tags are organisational but, again, would be a lot more work for the programmers and could also throw up client compatibility issues. If I was in charge of road-mapping I'd put the first solution as a priority, the third as something to work towards, and dismiss the second as nice-but-unnecessary. Meanwhile I'll carry on looking for a way to work with what we've got, whether that means a single Remote Backup set with a stupidly large catalog or taking the hit of the Rules-based solution in an attempt to keep catalogs a manageable size.
  14. Finally got around to having a play with this. While RS17 still treats tags as "OR" when choosing which clients to back up in a script, and you can't use tags within a rule, you can use "Source Host" as part of a rule to determine whether or not a client's data is backed up by a particular Remote-enabled script. It involves more management, since you'd have to build and update the "Source Host" rules for each script, but there's a bigger problem: Omitting by Rule is not the same as not backing up the client. That's worth shouting -- the client is still scanned, every directory and file on the client's volume(s) or Favourite Folder(s) will be matched, a snapshot will be stored, and the event will be recorded as a successful backup. It's just that no data will be copied from client to server. (TBH that's the behaviour I should have expected from my own answers in other threads about how path/directory matching is applied in Rules.) So if you have 6 Proactive scripts, each backing up 1 of 6 groups of clients daily to 1 of 6 backup sets, every client will be "backed up" 6 times with just 1 resulting in data being copied. That's a lot of overhead, and may not be worth it for the resulting reduced (individual) catalog size. Also note: a failed media set or script will not be as obvious, because it won't put clients into the "No backup in 7 days" report -- the "no data" backups from the other scripts are considered successful. For me, at least, Remote Backups is functionality that promises much but delivers little. Which is a shame -- if Remote Backup was a script or client option rather than a tag/group attribute, or if tag/group attributes could be evaluated with AND as well as OR logic, I think it would work really well.
  15. Retrospect Management Console

    I don't think that would help -- you'd still have to VPN in separately to each "organisation" to check/monitor, which is just what Malcolm was trying to avoid by using RMC. Doubly annoying since, last time I looked, iOS VPN settings weren't available to Shortcuts. Assuming Malcolm has "control" of the remote RS servers and routers he might be able to do something with a proxy server at his end which ssh-tunnelled to the remote servers. Quite how you'd set that up is well beyond me though 😞
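     For what it's worth, the shape of it might be something like this -- untested, hostnames hypothetical, and assuming both that the remote engines listen on Retrospect's usual console port (22024, I believe) and that the console will accept a localhost address:

       # Forward a local port through an ssh box at each site to that site's RS engine
       ssh -N -L 22024:rs-server.site1.internal:22024 admin@gateway.site1.example.com &
       ssh -N -L 22025:rs-server.site2.internal:22024 admin@gateway.site2.example.com &
       # Then point the console at localhost:22024, localhost:22025, etc.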
  16. Retrospect Management Console

    ? My point is that if the only requirement is monitoring, you could use Script Hooks to generate your own data to parse into the on-prem engine of your choice and not use RMC at all. I can't speak for Malcolm, but the vast majority of my RS interactions are for monitoring so having to VPN into each site when active management is required wouldn't be a hardship.
  17. Retrospect Management Console

    Late to the party, since RMC holds no interest for me. So I won't comment on most of the thread. However... I can see how RMC would be very appealing, especially if it presents as a "single pane of glass" rather than you having to go to one site, then the next, then the next. But if your main requirement is monitoring, maybe look at using Script Hooks to send data to your Zabbix instance so it's of a piece with your other stats reports/alerts. I don't know how much you can do within Zabbix, eg to show machines which haven't been backed up in X days. But you've got FileMaker available -- it wouldn't take much to parse the backup reports or any other info sent via Script Hooks into a custom database with any functionality you wanted, including a dashboard, automated email alerting, etc. (I used to do that back when v6 had AppleScript support, so I could monitor things like weekly churn on individual machines, generate summaries of Group backup storage usage for cross-charging, and similar.)
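     As a taste of what I mean, here's a minimal sketch of a hook script -- the event name, argument order, and endpoint are all assumptions on my part (check the Script Hooks KB article for the real interface before using):

       #!/bin/bash
       # Hypothetical Retrospect script hook: forward end-of-script events
       # to your own monitoring endpoint. Argument order is an assumption --
       # see the Script Hooks KB article for the actual events and arguments.
       EVENT="$1"; SCRIPT_NAME="$2"; ERRORS="$3"
       if [ "$EVENT" = "EndScript" ]; then
         curl -s -X POST "https://monitor.example.com/api/backup-events" \
              -d "script=$SCRIPT_NAME&errors=$ERRORS&ts=$(date +%s)" > /dev/null
       fi

     Once the data's flowing, the "no backup in X days" logic can live wherever suits -- a Zabbix trigger, a FileMaker script, whatever.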
  18. Backup set located on site to site VPN

    Never one to resist sticking an oar in... Again, details are important -- different OS X versions have different SMB implementations so, if you do file a bug report, you'll need to include that info. But I wouldn't bother -- as Lennart says, performance when using a NAS-stored catalog (especially when you add the VPN in too!) will be absolutely dire, and you'll save yourself a lot of grief simply storing it locally. To follow on from David's last point -- it's nice to store catalog and set "together", especially for disaster recovery of the RS server, but in cases like this it isn't really practical. Perhaps a better solution is to store the backup set on the NAS, the catalog locally, and to also back up the catalog to the NAS (doesn't need to be a full-on incremental RS backup, you could just copy it over every night). That way you'll have little or nothing to do in the way of catalog rebuilding in a DR scenario, just picking up those few (if any) sessions that happened between the last catalog backup and the disaster.
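     That nightly copy needn't be anything fancy -- a minimal sketch, assuming the catalogs are in the default Mac location and the NAS is mounted at /Volumes/NAS (both assumptions, adjust to your setup):

       # Run from cron or launchd each night, after backup scripts have finished, eg:
       # 30 2 * * * /usr/local/bin/catalog-copy.sh
       SRC="/Library/Application Support/Retrospect/Catalogs/"
       DST="/Volumes/NAS/RetroCatalogs/"
       /usr/bin/rsync -a --delete "$SRC" "$DST"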
  19. Scanning incomplete, error -1101 (file/directory not found)

    Retrospect doesn't do a UNIXy tree-walk, where it could simply skip "/backup/FileMaker/Progressive/" and everything below it. Instead it scans *every* file on a volume and applies its selectors to decide what to do. I'd assume from the errors that it is getting partway through scanning those directories' contents when, suddenly, they vanish. Whilst annoying in a simple case like you describe, it's also part of what makes the selectors so powerful -- for example, being able to exclude files on a path *unless* they were modified in the last 2 hours -- and why all the metadata needs to be collected via the scan before a decision can be made. Two ways round this. If you want to exclude most paths, define the ones you want as volumes and only back those up -- we only back up "/Users" so that's what we do, which also greatly reduces scan time. If you want to back up most but not all, which I guess is what you're after, use the "Privacy" pane in the client to designate those paths to exclude.
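     By analogy (and it's only an analogy -- this isn't RS's actual code), the difference is roughly:

       # Pruned tree-walk -- never descends into the excluded directory
       # (this is what Retrospect does NOT do):
       find /backup -path /backup/FileMaker/Progressive -prune -o -type f -print
       # Scan everything, filter afterwards -- every file is visited and the
       # rules applied to the collected metadata (closer to RS's model):
       find /backup -type f | grep -v '^/backup/FileMaker/Progressive/'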
  20. 'Restore --> Find Files' yields 0 files found

    Only thing left I can think of is an indexing issue with the set's database -- and that's assuming that there's a database, that the database is indexed, and that the index is used when searching... I'm guessing all the "missing" .nef files were backed up in the same session? You could always try a catalog rebuild and see if the problem persists -- but make sure your original catalog is safe and you don't overwrite it in the process!
  21. 'Restore --> Find Files' yields 0 files found

    So "Watercolor" is on your D:\ drive, contains "*.nef" files, but those files don't appear when you search your backups for them. Almost sounds as if they haven't been backed up -- have you checked for any exclusions in your backup scripts? You can also browse the entire Backup set by doing a search with no criteria -- IIRC, on Windows it defaults to "Include everything" and "Exclude nothing" -- then browsing the results. It'll probably be a long list, you'll get every backed version of every document, but you'll at least be able to drill down to "2016-04-01 - Watercolor" and see what's in there. If you still don't see the .nef files that strongly suggests they were never backed up for some reason. I'd be inclined to do a quick test. Duplicate the backup script you've been using, define "2016-04-01 - Watercolor" as a volume, change the duplicate script to back up only that volume and ideally the destination to be somewhere new (stick, HD, share, cloud -- doesn't matter, it just pays to play safe and keep it separate from your "real" backups). Run it and see what happens -- do the .nef files get backed up?
  22. 'Restore --> Find Files' yields 0 files found

    If I were you, I'd start again from scratch, but coming from the opposite direction. And remember that it pays to be as explicit as possible with selectors -- so files don't end with "nef", they end in ".nef". So start with only the "filename ends with .nef" selector. If that picks up all you expect, add "and Windows file or folder path starts with D:\Greg\" (remember -- explicit! Include the trailing backslash). Then, maybe, a "Windows path of folder contains..." to get just your subfolder. But you may not need to even go that far if you can manually remove/select what you want from the results of the first filter. Selectors can be tricky beasts, which don't always behave the way you'd expect -- or the way selectors anywhere else would! But they work well once you master their own particular logic. If they then don't show what you expect, it's usually a wrongly-chosen snapshot or similar, so search the whole set.
  23. 'Restore --> Find Files' yields 0 files found

    Change that first match to "path starts with D:\Greg\" <- note the trailing backslash, and see if that helps. Previous testing showed that exact folder matching required that terminating backslash, and I suspect that "path starts with" does too, as implied in the "Tip" under where you type the path for the selector. (It's a good habit anyway -- without it, "starts with D:\Greg" would also match paths like "D:\Gregory\".)
  24. Proactive Backups and Background Running

    My apologies, I'd assumed that: ...was a detail, in that RS was installed on a machine that was just lying around in your office rather than a secured server room. In such situations we've used lockable security enclosures and hard-wired power so cleaners/users/random passers-by can't "accidentally" power cycle the machine after "accidentally" plugging in a bootable USB, etc. And I agree, requiring a login is a major minus for RS on Windows.
  25. Proactive Backups and Background Running

    Totally agree with all you wrote -- which is why I asked, since our Win RS server is in a locked and alarmed server room to which access is tightly controlled, so it isn't such an issue. We've also been burnt enough times by "auto-restarts" (on both Win and Mac) that we stop them wherever possible -- we'll control when updates are applied thank-you-very-much, and if a machine gets shut down because of power loss we both want to know why and want to make sure it has come back up cleanly -- so having to log in isn't an issue, we're doing it anyway. I would add that there are plenty of ways to physically secure a machine in a more open situation such as yours, and that having RS run as a background process wouldn't solve any of the many other security issues that arise from physical access to a computer. It's obvious from the length of time this has been an issue that Windows's security features make switching the RS Engine to a background daemon a non-trivial exercise, else it would have been done already. Until it does happen we'll just have to find workarounds -- and, being a Mac guy, I'm particularly partial to your idea of repurposing that old Mac Mini 🙂