
Everything posted by Nigel Smith

  1. CHKDSK with what flags? And you may get better results using a less... rudimentary tool. Most drive manufacturers offer better ones, eg SeaTools for Seagate disks. I'd also try using the same filter as you use for the cloud backups, then slowly expanding it to include other files -- if you can find the problem file(s) you can get as good a backup as possible before things become terminal.
  2. I assume it's the "!"s that are the problem? If you click on one and check the "Status" in the "Summary" tab, what does it tell you? Might help to see the "Source" and "Destination" columns too, if that isn't too secret-squirrel to post.
  3. I don't think that's what is being said. If I set up a bunch of scheduled scripts at the same time, the first kicks off and the others go into the waiting queue. If I set up a bunch of Proactive scripts (note: scripts) they don't go into the queue. The note doesn't make it clear, but observed behaviour is that P-AI uses the source lists in all the scripts to generate the P-AI source list -- I don't know what happens if you are already running on all available units and another script starts (do the sources get added straight away, or do you have to wait for a unit to finish before the P-AI list is regenerated [and, if so, does that take priority over skipping to the next client?]). There's obviously something hinky in what Jan's seeing, but I think we're second-guessing at this point.
  4. Jan's running a script to back up Macs and another to back up PCs, so he can separate the different rules for each. With his setup, we'd expect RS to start backing up the first 8-10 clients and, when the first of those is finished, to go looking for the next available client and start on that. It should maintain 8-10 parallel executions for as long as there are clients in need and available. He appears to be seeing RS start to back up the first (say) 8 clients and, when the first of those is finished, carry on with just the remaining 7. Then 6, 5...1, and only when that last one finishes does it go looking for the next batch of available clients. That sounds like a bug, and it would be a useful data point to know if that is happening across the whole Engine (Mac script is waiting because PC script is still backing up a client) or is per script (Mac script reloads the next 6 when it finishes the last Mac, even while PC script is still backing up its last scheduled client). I suspect it's the former, both from Jan's description and my own vague sense of how things work, but if it's the latter that does suggest an easy workaround until the problem is fixed.
  5. My v17 tests wouldn't have been severe enough to bring about this situation (testing a work server and simultaneous multiple Remote clients all at your house is a sure way to cripple your home broadband, so I had to limit the time it ran!). Previous versions did what you want, but that was using multiple scripts/sets rather than a Storage Group. In your setup -- if, say, the Mac script is backing up a Mac, does the PC script continue after it's finished a client? IE, is the Engine waiting until all execution threads have completed, or all execution threads for a script? And you may get some clues by setting the Engine Log Level to 5 (see p49 of the manual and this KB article). Agreed that it's annoying if it isn't behaving optimally. OTOH, with the resources you've got, non-optimal performance shouldn't be an issue once you're doing incrementals (unless you've Remote Clients with significant amounts of churn). AFAIK that was used in the context of backups long before the word gained its current "social" connotation. And it's a good word -- "pruning" is chopping off the oldest data, while "grooming" is cleverer and can use complex rules to remove unwanted data from any place in the set.
  6. Thanks David -- I hadn't kept up with the changes, and have edited my post accordingly. Feel free to laugh -- "That Nigel, he's so version 16..." 🙂
  7. I think I've actually mis-represented things, although it's the behaviour most people will observe most of the time. Really it's that "a catalog or media set can only be written to by one process at a time". So if you had a single Proactive script targeting two or more media sets (sometimes done so you can rotate sets by alternating which is online) or multiple scripts using the same catalog, only one client could be backed up at a time. Again, Storage Group catalogs behave like one catalog per client/volume (which is why there's no file level de-dup across clients [or volumes on the same client? I've not checked that]) which are presented as a single catalog for search, UI interaction, etc. Even if the catalog is just one file, internally it's probably a database with a table per client/volume.
     The observed behaviour... Edit: Oh dear, that'll teach me to keep up with the latest changes! See David's posts and links for the correct description of how ProactiveAI now works. But I'll leave this here, both as a monument to my own stupidity and because of the bit about single script/media set blocking. ...is that the ProactiveAI builds a list of clients to back up, based on the sources in currently active Proactive scripts, ordered by "least recently backed up". It works its way down that list, and starts to back up the first available client. If you only have one Proactive script and that doesn't use a Storage Group, it'll wait for the backup to finish before trying the next client in the list. But if you have multiple Proactive scripts using different media sets, or your single script targets a Storage Group, a second process will start and try the next client in the list, and so on. Clients "bubble down" the list after they are backed up so, with each iteration, the list remains ordered by "least recently backed up". So it does what you want, if you have multiple (and correctly set up) Proactive scripts or use a single script and Storage Groups. Perhaps the only thing that is missing (at least, I've never observed it) is that if ProactiveAI is half-way down the list it doesn't jump back up to a higher-listed client if it becomes available -- the client will have to wait for the next loop round to get to it.
     Complete backups should be a pretty rare event (and, ideally, done over the local network) -- Retrospect is very good at restoring from even a long series of incrementals in one go (unlike some other software where you do a restore, then overlay incremental 1, then incrementals 2, 3, 4...100). If you want to do it as part of "set management", have a look at the various transfer options -- instead of doing a (slow) complete backup you can copy the latest snapshot and data from the old set to the new, then start doing incrementals to that new media set.
     But, in these days of remote working, getting a first complete backup can be a problem. You could use a separate script/media set for that and then transfer that backup to your "working" media set, so you didn't block other clients for the duration. Or you could use the same media set but a different script with an "only backup data newer than..." filter which you steadily roll back over a couple of weeks -- so you always back up the latest data and, after a few days, you'll have the rest too, with much-reduced impact on your other clients. Retrospect is very flexible, with plenty of options from which you can choose what will work best for your situation.
  8. To expand on David's comment, because I think he's hit the nail on the head... You can only run one backup at a time to a Media Set. If you want to parallelise you must have multiple Media Sets and distribute your clients across them. I do this, putting each "department's" computers into their own Group, making a Disk Media Set for each "department", then making a Proactive script for each with the appropriate Group as Source and Media Set as target. A Storage Group is, in essence, multiple Media Sets (one per client/volume) in a wrapper -- similar to the above, but with Retrospect doing the hard work and presenting you with a single UI element to use in your operations. So a single Proactive script can back up as many as 16 clients in parallel to a Storage Group. There are pros and cons to both approaches -- which you should use depends on your situation. You can even use both, for example multiple sets/scripts for local desktops and Storage Groups for Remote clients.
  9. The confounding issue is that Macs behave "properly" -- when a "new" primary interface becomes "live" the client will, eventually, switch to it. On Windows the client often gets stuck -- I most commonly see it when users have started/woken their laptop up (client binds to wireless) then connect to the ethernet (which takes precedence for network traffic, but the client is still on wireless), but I've also seen what MrPete describes (client binding to internal IP during network change, and not releasing). That you are using Macs, plus the relatively simple nature of your home network, means that your suggested automated work-rounds will (probably) work. In more complicated situations, with Windows clients, that's far less likely. While I'm sure it could be made to work, the real solution is to fix the problem (which, hopefully, that in-progress-but-delayed bug fix will do).
  10. Go to your Applications folder. Control-click the "Retrospect" app and select "Show Package Contents". Open the "Contents" directory, then "Resources". Scroll down to "Uninstall Retrospect". If that sounds like too much trouble, then launch the "Retrospect" app, select "Preferences..." from the "Retrospect" menu, select the "Console" pane (far right) and then "Export server installer and uninstaller" to the destination of your choice.
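     For anyone doing this over SSH or in bulk, the same steps can be driven from the command line by opening that uninstaller path directly. A minimal Python sketch, assuming the bundle layout described above -- check the app name and path on your own install:

        # Launch the uninstaller that ships inside the Retrospect app bundle.
        # The path below simply follows the layout described above; verify it
        # exists on your machine before relying on this.
        import os
        import subprocess

        UNINSTALLER = "/Applications/Retrospect.app/Contents/Resources/Uninstall Retrospect"

        if os.path.exists(UNINSTALLER):
            subprocess.run(["open", UNINSTALLER], check=True)
        else:
            print("Uninstaller not found -- check the app name/path in /Applications")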
  11. Manual ipsave has been working for years, well before Remote Clients were even thought of. Particularly useful for dual-NIC machines where eg the primary interface (which RS Client will generally try to bind to) is the general network or "outside world" while the secondary (which you want RS Client to bind to) is connected to a backup or "internal" network. It's most useful for machines with static IPs since you just do it the once. As said before, it's also a good work-round for the "confused client" issue for those who don't allow their users to turn the Client off-and-on-again. But it is more involved since the user (or a script) has to determine the IP address to be used. Note that pretty much anything done server-side will not help because the "confused client" is rarely bound to an address that the server can reach.
  12. His contention is that it's a lot easier for his users to simply turn RS off and on than it is to find the new IP address and use that (somehow!) in an ipsave command -- and I have total sympathy with his view! That would be my default fix too -- as long as (as you've also said) admins haven't disallowed turning off RS.
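     For those who do want to script the ipsave route rather than toggle the client, here's a rough Python sketch of the "find the new IP address and feed it to ipsave" part. The retroclient.exe install path and the exact "-ipsave" invocation shown are assumptions -- check them against the ipsave KB article for your platform before using anything like this.

        # Rough sketch: find the machine's current primary IPv4 address and
        # hand it to the Retrospect client's ipsave command.
        import socket
        import subprocess

        # Assumed install path and flag syntax -- verify against the KB article.
        RETRO_CLIENT = r"C:\Program Files\Retrospect\Retrospect Client\retroclient.exe"

        def current_ipv4() -> str:
            """Return the IPv4 address the OS would use for outbound traffic."""
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                s.connect(("192.0.2.1", 80))   # route lookup only; no packet is sent
                return s.getsockname()[0]
            finally:
                s.close()

        if __name__ == "__main__":
            ip = current_ipv4()
            print("Binding Retrospect client to", ip)
            subprocess.run([RETRO_CLIENT, "-ipsave", ip], check=True)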
  13. I'm a bit lost as to how far you have (or haven't!) got. Do you know the version of Retrospect used to create these backups? If it's *really* old you may be in trouble -- I don't think current versions can handle pre-v6 backup sets. Anyway, try Rebuild, select "Disk", "Add Member", navigate to and choose the folder containing your RDB files and you should get a list -- that'll also tell you if the files are encrypted. Pick the "earliest" RDB file -- and hope the files are undamaged after all this time. Choosing the correct directory in "Add Member" isn't always obvious, maybe post a screenshot of the directory structure of the external drive if you are getting stuck.
  14. The problem isn't with the network being joined. When you switch between networks on Windows there can be a "temporary bind" to the self-assigned IP because there's a period when no external network is available. Think of an old-fashioned A/B rotary switch -- as you turn it from A to B there's a moment when neither A nor B are connected. The problem is that when the new network is joined, either Windows doesn't tell RS Client or RS Client refuses to listen (or both! Or something else -- this is Windows, after all 😉). We can nudge things with stop/start, ipsave, or your suggested automation, but these are just workarounds until the engineers (from whichever company) fix the problem.
  15. Neither of which will work, because it is an RS client/OS problem. You can see how it should work with your Mac. Have the Client Preferences open while you are on your ethernet network, then unplug the ethernet and join a wireless network. RS Prefs will read "Client_name Multicast unavailable" in the "Client name:" section for a while (still bound to the unplugged ethernet) and then switch to the new IP address and read "Client_name (wirelessIPAddress)". (Going from memory, exact messages may be different, but you can see a delay then a re-bind to the new primary IP.) But in the same situation, Windows RS Client will go from the ethernet-bind to self-assigned-IP-bind but not then switch to the new wireless primary IP -- it gets stuck on the self-assigned address. Whether that's RS Client or Windows "doing it wrong" is something they can argue about amongst themselves... It does suggest another solution, though. That self-assigned IP is always in the 169.254.*.* subnet. If you are in a single-subnet situation and can configure your DHCP server appropriately, you could have your network only use addresses in the 169.254.*.* range, and then both DHCP- and self-assigned addresses will be in the same subnet and the client will always be available.
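     If you'd rather detect the stuck state than redesign your addressing, the standard Python ipaddress module makes the check trivial -- a minimal sketch, where the address you pass in is whatever your own tooling says the client is currently bound to:

        # Classify an address as self-assigned (APIPA, 169.254.0.0/16) or not,
        # to spot the "stuck on a self-assigned IP" state described above.
        import ipaddress
        import sys

        def is_self_assigned(addr: str) -> bool:
            """True if addr is a link-local/self-assigned address (169.254.0.0/16 for IPv4)."""
            return ipaddress.ip_address(addr).is_link_local

        if __name__ == "__main__":
            addr = sys.argv[1] if len(sys.argv) > 1 else "169.254.10.20"
            print(addr, "-> self-assigned" if is_self_assigned(addr) else "-> DHCP/static")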
  16. I'm constantly repeating similar advice to our Mac users. FileVault (macOS's similar feature) may be great for securing their files, but it makes frequent usable backups even more important, because a failed drive usually means the loss of the data on it. So we're on the fence about whether to use it -- we have way more failed disks than lost/stolen laptops, and work data isn't particularly sensitive/valuable, so, in what is virtually a BYOD environment, it's up to the user whether they want the extra security for their personal stuff and, if so, they take on extra responsibility for their backups.
  17. Try the following: Ctrl-click the "top level" item of the list. If your version of RS behaves like mine that will "open" all the sub-folders with just one click. You may have to try other modifier keys -- I'm currently using a Mac which is Remote Desktopping into another Mac which is Microsoft Remote Desktopping into the RS Server PC. So there's some confusion as to which keys are doing what and where!
  18. Most setups, unlike yours, use DHCP for the majority of their machines and/or have wireless enabled. While static addresses are a fix for the binding problem, that isn't always practical (eg a workplace with more potentially-connected devices than available IP addresses) or a complete solution (eg my laptop may be static on "my" ethernet, but if I take it to another department I'll have to use wireless). Unfortunately my post was solving a different problem, which is probably specific to our setup, and won't help here. So currently the only solutions appear to be:
       • Static IPs on clients (may not be practical), or
       • Turn RS client off and on again (only possible where you allow this), or
       • Use the "ipsave" command
     Note that rebooting the computer often doesn't solve the problem -- whatever glitch resulted in RS Client being bound to the Windows Private IP before the OS registered the NIC's "new" DHCP-assigned address is usually repeated. I'd use the "ipsave" command since we don't let users turn off RS Client -- ideally this would be scripted (see the sketch below), since I don't trust users to notice there's a problem, and it'll be something I'll have to look at when we redeploy RS across our estate if this is a frequent problem for us.
     Shouldn't be an issue, since you should be extending your network and so using the same router for all DHCP assignments, rather than adding a second network complete with a second DHCP server and gateway connecting back to your original network etc. Extension will give you far fewer configuration headaches -- unless you need that second, segregated, network for some reason.
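     As a sketch of that "scripted ipsave" idea: a small watcher that polls the machine's address and re-runs the rebind whenever it changes to something routable. As before, the retroclient path and "-ipsave" syntax are assumptions to verify against the KB article, and in a real deployment you'd wrap this in a service or scheduled task rather than a bare loop.

        # Poll the current address and re-run the client rebind whenever it
        # changes, so users never need to notice there's a problem.
        import socket
        import subprocess
        import time

        RETRO_CLIENT = r"C:\Program Files\Retrospect\Retrospect Client\retroclient.exe"  # assumed path
        POLL_SECONDS = 60

        def current_ipv4() -> str:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                s.connect(("192.0.2.1", 80))   # route lookup only; nothing is sent
                return s.getsockname()[0]
            finally:
                s.close()

        def watch() -> None:
            last = None
            while True:
                ip = current_ipv4()
                if ip != last and not ip.startswith("169.254."):
                    # Address changed to something routable -- rebind the client.
                    subprocess.run([RETRO_CLIENT, "-ipsave", ip], check=False)
                    last = ip
                time.sleep(POLL_SECONDS)

        if __name__ == "__main__":
            watch()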
  19. Setting up the VPN(s) is easy -- using them can be a pain in the butt if you are frequently changing. Leave the current app, go to Settings, go to the VPN section, turn off the one you're using, find the one you want to use, turn it on, go back to the app, wait while Retrospect iOS resyncs with the now-available server (yawn...) and errors for all the others... Not a problem for me, with one VPN for RS. But for Malcolm, with maybe a dozen clients each with their own server on their own VPN -- that's a lot of swiping and waiting just to check that things are OK. And the above is exacerbated by the lack of iOS Shortcut support for VPN settings. On my Mac I just have to mouse up to the menu bar and select the VPN I want to use from my "Scripts" icon and the script does the rest -- closing anything particular to the current VPN, switching VPNs, opening anything particular to the new VPN, etc. "Single pane of glass" makes monitoring multiple RS instances much easier. Unfortunately, if those instances are on networks requiring different VPNs, that means RMC or a roll-your-own monitoring solution.
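     For what it's worth, the Mac-side VPN switching can be wrapped in a few lines of Python around macOS's scutil utility. This only works for VPN configurations that appear in "scutil --nc list" (ie ones macOS itself manages, not third-party VPN apps), and the service names below are placeholders for your own:

        # Switch from one configured macOS VPN service to another via scutil.
        import subprocess

        CURRENT_VPN = "Office-A VPN"   # placeholder service names -- use your own
        TARGET_VPN = "Office-B VPN"

        def vpn(action: str, name: str) -> None:
            """Run 'scutil --nc start|stop <service>' for a configured VPN."""
            subprocess.run(["scutil", "--nc", action, name], check=True)

        if __name__ == "__main__":
            vpn("stop", CURRENT_VPN)   # drop the VPN you're currently on
            vpn("start", TARGET_VPN)   # bring up the one for the server you want
            # ...then open the Retrospect console/RMC page for that site.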
  20. Subnet detection works, and always has. The problem isn't detection by the server, but that the RS client stops listening. It doesn't "rebind" when the client machine's IP address changes -- both MrPete's stop/start and the "ipsave" command David mentions above will solve this, but both mean that the user has to know that there's a problem (unless you can automate it, triggered by the network change). This isn't just a problem with Retrospect and Windows, I've seen it with other "listener" daemons and other OSs.
     WRT Drobos -- to (again!) echo David, one of the reasons we started using them when they first came to the UK was the ability to use different size disks. We could buy an enclosure, put a couple of disks in and then later upgrade it by adding bigger-but-now-cheaper disks, without having to copy all the data off, swap disks, reformat the RAID and copy the data back on... Now that big disks are (relatively) cheap and other NASs now allow "volume expansion" on the fly (with limitations) it's not such a selling point to us -- but it might be for me at home. Another good point was that you could move all the disks from one enclosure to another and the Drobo volume(s) would just work as before -- useful if eg a PSU failed. A downside at the time was their (lack of) speed, probably because of the overheads of their proprietary "RAID" system on "consumerish" hardware -- I daresay things are much better now, but I haven't tried any of the current models.
  21. It would appear that "sloppily written" really means "Goes against my belief that..." ...so the official documentation stating that you can switch seamlessly between local and remote backups -- including an example using the industry standard method of onboarding work computers that are to be used remotely -- and the way Retrospect has been shown to work in practice are, quite simply, wrong. They must be wrong, because you say they are. So there really is no point continuing...
  22. You might want to edit your post following the results of my last test in the post before, where an "AlwaysRemote" client was indeed added without user intervention to the server and can therefore be seen in the server's Sources list -- what I'm calling a "client record" because that's where you set things like client options and tags.
  23. And the bad news is -- it does... "But Nige," I hear you say, "surely that's a good thing, allowing us to onboard Remote clients without making them come to the office?" I don't think so, because Remote Clients are automatically added:
       • without the "Automatically Added Clients" tag, so there's no easy way to find them
       • with the "Remote Backup Clients" tag, which means they will be automatically added to all Remote-enabled Proactive scripts
       • with the client's backup "Options" defaulting to "All Volumes", so external drives etc will also be included
     I let things run and, without any intervention on my part, the new client started backing up via the first available Remotely-enabled script.
     Edit to add: I didn't notice this last night, but it appears that the client was added with the interface "Default/Direct IP", in contrast to the clients automatically added from the server's network, which were "Default/Subnet". I don't know what this means if my home router's IP changes or I take the laptop to a different location (will the server consider it to now be on the "wrong" IP and refuse to back it up?) or if I now take it into work and plug in to the network (will the server not subnet-scan for the client, since it is "DirectIP"?). End edit
     Given the above I'd suggest everyone think really carefully before enabling both "Automatically add.." and "Remote Client Backups" unless they retain control of client installation (eg over a remote-control Zoom session) -- especially since I've yet to find out what happens if you have a duplicate client name (the next test, which I'm struggling to work out how to do).
  24. Jon, did you ever get anywhere with this? Just had a silly thought -- if the problem was that your Proactive scripts were running but none of your remote clients were getting backed up, make sure you didn't install Retrospect client on your server (see the very last point in the Remote Backup KB article).
  25. The trite answer is "Easily!". Note that tags are not applied to "client machines" -- they never have been, and the "Remote Backup Clients" tag is no different. They are applied to the "client record" on the server. They effectively say "allow incoming calls from this client" (when in the client record) and "allow incoming calls to this script" (when used in the script's Source). So I'm proposing a "class" of Remote tags, each instance having a user-defined attribute -- they all say "allow incoming" but finesse it with "and direct them to scripts with the matching attribute". Perhaps it is easier to think of them the other way round -- they would behave exactly the same as all other tags and, additionally, allow remote access.
     Are you sure? 😉 You've actually raised the next issue I want to test -- does automatic onboarding work with Remote clients? I haven't bothered with that yet, because our use of Favourites mandates local addition, but now I've time to scrub a machine and start a client from scratch again.
     Sorry David, but that's just plain wrong. From the KB article: "Clients can seamlessly switch between network backup on a local network to remote backup over the internet. You do not need to set up the remote backup initially. You can transition to it and back again" -- true as far back as Nov 2018.