Posts posted by Nigel Smith


  1. 21 hours ago, DavidHertzberg said:

    That would mean that as of March or August of 2020 rbratton's "backup server" machine would  have to either be constantly running the GUI-less Retrospect Engine whenever it is booted, or rbratton would have to remember to start the Engine each midnight

    Or, since he knows the time the script needs to run, he could use Windows Task Scheduler to launch RS when appropriate (a rough schtasks sketch is at the end of this post).

    Or even, if his BIOS supports it and he doesn't mind the security implications: set the PC to boot at a certain time and auto-login, set Windows Task Scheduler to fire up RS, use script hooks to monitor and shut down both PCs when complete!

    These are computers -- we should be getting them to do things, instead of having to remember to do them ourselves! 😉 
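
    If he goes the Task Scheduler route, here's a minimal sketch using the built-in schtasks command from an admin prompt -- the task name, start time and install path are only illustrations, so adjust them to match the real setup:

        REM Create a daily task that launches Retrospect shortly before the backup script's start time.
        REM The path is an assumption -- point it at wherever Retrospect.exe actually lives.
        schtasks /Create /TN "Launch Retrospect" /SC DAILY /ST 23:55 /TR "\"C:\Program Files\Retrospect\Retrospect.exe\""

    Pair that with the script-hook shutdown idea above and the whole cycle runs hands-off.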


  2. On 1/19/2020 at 7:30 AM, DavidHertzberg said:

    Moreover I'm not sure his second-paragraph suggestion of Proactive is applicable, because Proactive (which I don't use) assumes a dedicated "backup server" machine that is "always here" whenever the "client" machine is "sometimes here"

    Not at all -- you can, for example, schedule Proactive to run only for certain hours of the day.

    So OP could set Proactive to run from 2am-6am every day, with a 20-hour interval. If the backup server is running during that window it will be backed up, and the client will also be backed up if it's available. No client is a "graceful fail"; no server and nothing happens 😉

    What you can't do with a single Proactive script is set the order in which clients should be backed up, so it's no good if that's important. You also can't shut down the backup server, as part of the script, when it's finished. And using a schedule as above would mean you couldn't use the "Early backup" request to get a daytime backup, so you'd have to make another script for that -- you *might* be able to set up a second Proactive script, running from 6am-2am with a ridiculously large interval setting, that allows early backups, but I haven't tried that myself...

    Proactive is very flexible -- which is sometimes a boon, sometimes a pain -- and is always worth considering in any situation where backup routines can vary (presence of clients, volumes, target sets, etc).


  3. Assuming incremental backups, there's no need to delete -- it'll just make the "proper" backup run faster because most of the work has already been done.

    And consider using Proactive (unless standard scripts do something you need that Proactive doesn't), which is made for exactly this "sometimes here, sometimes not" situation.

    Re-reading your OP, it sounds like both computers get shut down and one is the RS server while the other is the client. Have a play with the "Look ahead time" in the general (rather than script) Schedule Preferences. I'm starting to think it's *because* you shut down the server that you are getting the "catchups" -- look ahead sees you've got something scheduled within the next 12 hours so makes sure it runs at the next opportunity (I'd assumed that you had the server running 24/7 and it was two clients you were restarting). It may be that setting "Look ahead" to 0 solves your problem, but that might require you to leave RS running on the server rather than quitting/autolaunching for the next scheduled run.


  4. On 1/10/2020 at 7:08 PM, rbratton said:

    How do I tell Retrospect that if a scheduled backup is missed because resources aren't available, just wait until the next scheduled backup time to try again?  I don't want to block off specific times when backups are allowed as I might need to run a manual backup any time of the day.

    Have you tried the "Options" section of your script? There are also scheduling options there, which only apply to that script (though the defaults reflect the Schedule settings in General Prefs, which might make you think otherwise...) and so would have no impact on manual backups. Set your "Start", "Wrap up" and "Stop" times to suit your working practices and required backup window and you should be good.


  5. Interesting...

    Most "transient" files are "here today, gone tomorrow" -- think cache files etc. But, for whatever reason, Windows doesn't seem to delete these update packages after they have been used. All I can think of (aside from clumsiness by MS!) is that they are also used when you uninstall System updates.

    So, to be safe, I'd probably exclude them from backups but would only delete them from disk once I was happy that I wouldn't need to uninstall.


  6. 1 hour ago, oleksiak said:

    I am not sure what tick-box you are referring to... In my backup I have only one specific volume selected as a source.

    That's the information I was looking for...

    So *if* you are only backing up one volume *and* that volume is backing up/verifying successfully *and* you can restore from the backup *and* you get the un-named volume error *and* Retrospect carries on regardless -- I'd just ignore it. If the error is causing other problems, eg killing the script while there are still other machines to process, re-arrange things so the erring machine is the last to be done.

    If the error is truly killing the system, eg popping a dialog that must be dismissed before continuing, I'd look into script triggers and a GUI-targeted AppleScript to automatically dismiss the dialog so RS can continue (rough sketch below).

    Some things are easier to work round than to fix 😉
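
    Very much an untested sketch of that AppleScript idea -- the process name, window and button title are guesses, so check them against the actual dialog, and whatever runs this will need Accessibility permission:

        # Dismiss a blocking Retrospect dialog via GUI scripting. Names are assumptions.
        osascript -e 'tell application "System Events" to tell process "Retrospect" to click button "OK" of window 1'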


  7. 5 hours ago, oleksiak said:

    Well, if by "name" you mean that the volume should have a label - then it has.

    No -- I'm suggesting that it is successfully scanning, backing up, and verifying the storage volume, and is *then* failing to scan a nameless volume. What's the output from "lsblk --fs", without the device selector?

    I'm assuming that, as with a Mac client, you can set things to back up the complete Linux box, only selected volumes, etc. Perhaps a previous "only the storage volume" tick-box was forgotten in the transfer to the new server.
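
    To save a round trip on the lsblk question above, this is the sort of thing I mean (run on the Linux client; the exact output columns vary between util-linux versions):

        # List every block device, its filesystem type and its label -- no device argument this time.
        lsblk --fs
        # A device with an empty LABEL column (and no mountpoint you recognise) is the
        # likely culprit for the "un-named volume" error.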


  8. On 12/22/2019 at 11:25 PM, kolohe280 said:

    I have a NAS with 2 RAID5 arrays.  I backup to one of them on a share called Online_Backup.  I wanted to add some more space to that backup set, so I created a second share on the other RAID array and called it Online_Backup_2.

    Crazy suggestion -- try naming the new share "2nd_Online_Backup" instead, and see if that solves it.

    Reason being, different implementations of Samba have different limits on "valid" name lengths, and using the same first 13 characters in each may be confusing something (eg if it only parses the first 8 characters).

    Otherwise, knowing your NAS make/model might help.


  9. Mac screenie, but repeated mentions of Windows -- I'll assume you've got Windows client problems...

    See if you can find your current client version. Uninstall it, restart the client machine, re-install using a fresh download from here -- personally, I'd start with the most recent (16.5.1.109) and, if that was still problematic, work my way back version by version. If you don't want to re-register the client with the server you could take a punt and simply re-install over the top of the old client.

    I've just installed the latest client on a clean, up-to-date, Windows VM without issues, so it looks like something specific to this instance rather than a generic Windows problem (but I don't deal with Windows much, so I'm probably wrong...).


  10. To be clear -- I don't think his client "went bad", I think it just got stuck in a reserved state and simply deleting the retroclient.state file and restarting the Mac would have cleared it. I often see "-505s" here, sometimes because the server borked and, more often, because the client was deliberately disconnected (eg laptop lid closed or network plug pulled) partway through a backup.

    It used to be a simple command-click on the client control panel's "Off" button (a simple click turns the client off but leaves the process running and the .state file untouched, cmd-click shuts down the process and removes the .state file) -- I don't know if the same works with "modern" versions, but we lock ours down anyway so the user can no longer solve this themselves and it's easier for me to visit than explain Terminal commands to them 🙂

    What I don't trust is a re-install, especially without a full and complete un-install first -- though that may just be me and the fact that I'm dealing with clients of various vintages (because it is easier to sort things out when these rare problems crop up, rather than proactively updating).


  11. On 11/26/2019 at 4:14 PM, redleader said:

    It says they are offline ... these drives are not*

    It says they are offline to the Retrospect client on FS Server (which is different from being offline to FS Server's OS).

    As David says, check FS Server's Privacy settings and make sure the Client has Full Disk Access. You could also get clever in the Terminal with "lsof" to see if any process is blocking access to those volumes (see the sketch at the end of this post), but I'd start with a restart of FS Server to clear everything and turn to the Terminal if the problem comes back.

    Are you actually using FS Server as a RS backup server? If not, stop the Retrospect Engine via its Preference pane -- the Engine *shouldn't* interfere with the Client's access to those volumes, but if you don't need it why take the chance?
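
    If you do end up reaching for lsof, something like this is all I had in mind (the volume name is just an example):

        # List any processes holding files open on the volume in question.
        sudo lsof +D "/Volumes/YourVolumeName"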


  12. 8 hours ago, cgtyoder said:

    I tried reinstalling the client on the Mac

    Did you uninstall first, or just re-install over the top?

    My go-to for a manual Mac uninstall is still Der Flounder's instructions for 6.x -- I just do each line of his script manually in the Mac Terminal app, ignoring any that don't autocomplete/exist.

    In this case, if you want to save time you could probably just get away with killing the client, killing pitond, deleting retroclient.state (the "I am reserved"-causing file) and restarting the Mac.
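
    From memory -- so treat the process names and path as assumptions, and check them against Der Flounder's list before running anything -- the quick version is roughly:

        # Stop the client processes (the exact names vary by client version).
        sudo killall retroclient pitond 2>/dev/null
        # Remove the state file that holds the "reserved" flag -- path per the old 6.x-era instructions.
        sudo rm /Library/Preferences/retroclient.state
        # Then restart the Mac and the client should come back un-reserved.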


  13. I'm not sure what you're trying to achieve here. Retrospect won't quit after a Proactive script has finished (see p253 of the manual), and you have no finish time on an always-on schedule anyway. So why not just launch Retrospect and leave it running?

    6 hours ago, meld51 said:

    I have notification emails set up and I can see that Retrospect is automatically launched now and then but it closes down in the same minute that it runs. Why does it do this?

    I'm not sure why it's starting up, since always-on has no start time. Do you have another script that would trigger the launch? As to why it exits again straight away -- since always-on has no start time, there's no script set to run in the next 12 hours so RS does what you've told it to -- exit.

    If you are trying to minimise the amount of time Retrospect is running for some reason, "normal" backup scripts would be a better approach. But if you must use Proactive, eg for automatic media rotation, you might be able to do what you want by scheduling Proactive to specific start and stop times, then scheduling a "spoof" normal backup script to run just after your Proactive stop-time, with the "Exit" startup preference set.


  14. On 11/21/2019 at 5:02 PM, billbobdole said:

    Nothing has changed recently except the macOS & Retro updates.

    Are you saying that "it worked before the OS and RS updates, but hasn't since"? Or has it worked since the updates but is now failing? If it's the former, try Support. If it's the latter:

    Do a thorough disk check on your SMB server -- we've seen on these forums before where failing drives/corrupt volumes cause this error.

    Are you manually mounting the SMB share, or is it done from RS as per your last screenie? I suggest you pick one or the other, to prevent conflicts.

    Assuming the disk check passes, this feels like a permissions issue, RS being able to read but not write, so check your SMB server's logs for errors. And check you've granted RS "Full Disk Access" in System Preferences->Security & Privacy->Privacy.

    While personally I'd prefer to use Disk Media, I can understand why the "self-contained" File Media format could be advantageous in some situations. And you should be OK capacity-wise, given that you've got larger ones -- both in size and file count -- already listed.


  15. On 11/20/2019 at 6:44 AM, DavidHertzberg said:

    -515 and -530 errors have to do with connecting via the Piton Protocol

    -515 is "data becoming corrupt while being transferred" -- ie the client is still connected (else it would be a -519).
    -519 is "network communication failed" -- ie the client was found, then disappeared.
    -530 is "client not found" and can be thrown for all sorts of reasons, from the client not being powered on through hubs/switches "invisibly" dropping multicast packets -- including my oft-mentioned "client binding to the wrong interface".

    -530 is the easiest to differentiate, since the client was never there 🙂  -515 and -519 are more of a grey area -- IME, a "fluttering" switch port or NIC can give either a -515 (brief, occasional, drops cause data problems without the client dropping for long enough to trigger the "disappeared" time-out) or a -519 (more prolonged failures where RS registers a disconnect though, client-side, everything seems OK because more usual network operations -- file sharing, browsing, etc -- are less sensitive).

    FWIW -- my understanding is that all the above involve the piton protocol, regardless of connection type, since that's how RS adds clients, accesses them, and transfers data to/from them.

    On 11/22/2019 at 7:18 PM, twickland said:

    We have solved the problem.

    Hurrah! Though, in our case, it won't have been caused by RS's auto-update -- because we don't use it! Perhaps a Windows update changed a linked library, a security fingerprint, or something -- or, my particular fave, a Defender update did its usual "reset the firewall" nonsense 🙂 I'll try and get some reinstalls done today and see what happens...


  16. On 11/24/2019 at 5:36 AM, x509 said:

    I think all these files represent true "transient data," where I would need only the latest version or maybe 2 versions for restore purposes.

    If you need to restore it (ie can't just copy it from elsewhere or easily regenerate it from other sources) then it isn't transient data -- and you've already assigned it a "value" of "I need the last two versions". IMO, David's nailed it -- separate script with its own grooming strategy.

    Because of the way we back up, I've never used grooming beyond a quick play. Is it possible to specify one or more strategies that apply to the same backup set to, for example, keep x versions of files in directory /foo, y versions of everything in /bar, and z versions of everything else?

    As for your overall backup/management strategy, I can only echo David -- awesome! Would that I were as conscientious...


  17. On 11/19/2019 at 10:49 PM, bookcent said:

    Thanks for the reply, I do have more than one connection thought the ethernet is top prioriy and has a fixed IP address

    That doesn't matter. The client will bind to the first interface to become available, regardless of your preferences. So if your ethernet is slower to come up than your wireless -- because, for example, your hub is a bit flakey! -- the client binds to the wireless interface instead of the ethernet.

    You can solve that by turning off your wireless and restarting the client but, since you're on a static IP for your wired connection, it's even easier to use "ipsave" (since that persists across restarts etc) to force the client to bind to the ethernet interface.

    And yes -- if you suspect your hub for any reason, get another. It could save you a lot of troubleshooting in the future!


  18. 9 hours ago, bookcent said:

    If I reboot the remote mac it sometimes works and does a backup

    Do you have more than one network connection on the client, eg you're using ethernet and wireless is also turned on? You mentioned a "multicast port unavailable" message in another thread, which is usually seen after the RS client sees a network change -- either a different IP on the same interface, or a different interface becoming primary. Make sure ethernet is above wireless in your service order (System Preferences->Network, click the cog-wheel under the connection list, "Set Service Order...", drag into the order you want) and/or turn off your wireless.

    You can often get multicast working again (assuming your IP is stable!) by turning the client off and on. If you go direct, as David suggests above, you may still have to sort this out -- if the client binds to the wireless but you've entered the ethernet IP you'll have the same problem -- or make use of static IPs and Retroclient's "ipsave" command, documented here.
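
    For the service-order check above, the Terminal equivalent is networksetup -- the service names are whatever your Mac calls them, so "Ethernet" and "Wi-Fi" below are just examples:

        # Show the current service order...
        networksetup -listnetworkserviceorder
        # ...then put the wired interface first (you must list every service, in the order you want).
        sudo networksetup -ordernetworkservices "Ethernet" "Wi-Fi"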


  19. 4 hours ago, DavidHertzberg said:

    BTW, could you tell us why you are backing up the same "clients" with two "backup servers—and what version of Retrospect Mac you are using?

    Simple enough -- we've been using the Mac version for years, currently on 13.5. With Apple dropping its server and enterprise offerings I was looking at moving to Windows (now at 15.6.1), if only for 10GbE, so we've been running both in parallel. But the "evaluation" has gone on rather longer than expected -- partly because the goalposts keep moving (eg Apple introducing 10GbE on the Mac Mini) but mainly because I really, really, really don't want to permanently move to Windows 🙂

    I'll have to make a decision soon, at which time version numbers will jump to current. But, at the moment, all clients are being backed up at least once -- and if it ain't broke...

    5 hours ago, DavidHertzberg said:

    Is it possible that your "client" machines are defined to your Retrospect Windows "backup server" by the Direct Access Method

    The clients are "Direct IP" (the terminology RS uses in the Console's Sources summary when you've used an IP address to "Add source directly...") on both servers (since they're statics on a private subnet), so it isn't that. No changes to the Mac server, so it isn't that. Which leaves either network hardware (unlikely, since both servers are on the same switch), the router/IPS security settings (possible, but unlikely since both servers are on the same subnet so are subject to the same policies etc), or Windows. All I need to do is test with another Windows 10 client on the same subnet as the server to start narrowing things down. But, as above -- meh, Windows 😉 

    In my experience, there's a commonality between -515 and -519 errors, a brief "ceased to communicate" usually being reported as a -519 but sometimes as a -515 -- especially when, as in my case, the software isn't as up-to-date as it should be. Since the troubleshooting is similar for both we may as well consider them the same.


  20. On 11/15/2019 at 9:38 PM, twickland said:

    We have run into a peculiar issue where most of our Windows 10 client machines can no longer be backed up.

    FWIW, my Mac RS server has been showing "error -515 (Piton protocol violation)" errors from our Win10 PCs since the beginning of this month -- no successful backups at all.

    I don't think it's a network hardware issue, since they *are* being successfully backed up by the Windows RS server!

    I was keeping quiet, assuming it was my rather old Mac RS software playing badly with a Windows update, but I'll have a closer look tomorrow.


  21. On 11/17/2019 at 3:47 AM, x509 said:

    True enough, but there is no report that identifies those files that have been backed up N times in the last N days/weeks

    True enough 😉, but is one really necessary?

    "Transient data", in amounts that matter to disk sizing/grooming policies, is usually pretty obvious and a result of your workflow (rather than some background OS thing). Think video capture which you then upload to a server -- no need to back that up on the client too. Or data that you download, process, then throw away -- back up the result of the processing, not the data itself. Home-wise, RS already has a "caches" filter amongst others, and why back up your downloads folder when you can just re-download, etc, etc.

    OP ran out of space on an 8TB drive with only a 1 month retention policy. That's either a woefully under-specced drive or a huge amount of churn -- and it's probably the latter:

    On 10/31/2019 at 12:54 PM, NoelC said:

    given the amount of data my system crunches through

    ...rather than "given the amount of data on my machine".

    Like David, I'd be reluctant to let RS do this choosing for me -- "transient" is very much in the eye of the beholder and, ultimately, requires a value to be placed on that data on that machine before a reasoned decision can be made.


  22. On 11/14/2019 at 6:13 PM, kidziti said:

    Nigel - Windows scripting does sound intriguing. I also wonder if increasing scripting power within Retrospect is something that Storcentric is considering. That's a high hope, of course. I could probably figure Windows scripting out, but my obstacle at the moment is finding the time to do so.

    The obvious question is -- why would they bother?

    Building in application scripting ability to work with an OS is a lot of work, especially with both Windows and OS X constantly moving the goalposts! Better, IMO, for RS to concentrate on their core functionality -- difficult enough because of the aforementioned moving goalposts -- while providing ways to interact with RS from "outside" with whatever scripting language a user is comfortable with. There's currently a bunch of events revealed by RS via Script Hooks and you can start a script by using a Run Document (in Windows -- I think us Mac users may have slipped behind here, but haven't checked). Most other things -- adding clients, creating sets, and so on -- are pretty much edge cases which few users would ever need, so not worth the development time.

    OS scripting can be remarkably easy. There's enough info in the two pages I linked above that you could do what you want without any other knowledge of Windows scripting. You could then think of other features (maybe take the scheduling outside of RS? Initiate the whole process based on some other event? Voice controlled via Alexa/Siri/whatever!) and learn as you add them. I think that's how most scripters find their feet, one small "utility" at a time, learning what they need as they go -- so jump in and give it a try!

    (And if anyone from Storcentric is reading -- ignore me! RS would be *so* much better with a fully-revealed Applescript dictionary and the ability to both send and receive Apple Events. Go on, you know you want to... And think of the sales demo -- "Hey Siri, activate script Important Machines, add client MD's Mac to it, then run it" and the Managing Director's computer is being backed up!)


  23. 14 hours ago, DavidHertzberg said:

    which is why I assume he wants to establish these alternating-between-two-Backup-Sets scripted Backups—probably to protect against ransomware as he says in his OP.

    As you've since realised, that wasn't the problem. The ultimate aim is to limit a drive's exposure to ransomware attacks by minimising the amount of time it is connected to the system -- in an ideal world you'd have a scripted "mount disk, run backup, unmount disk on completion" sequence which would run without human intervention. That would have been easy on the Mac in the "old days", when RS had OK AppleScript support; now you should probably use Script Hooks, which is something I've not really played with. Run files, as described above, would also work if you prefer to do your scheduling from outside RS.

    kidziti -- you'll find more about Script Hooks here. You'll see there are both StartScript and EndScript events, and a quick google gets me this page with Windows batch scripts for mounting and unmounting a volume. So I'm thinking you'd set up the script, plug in the drive, unmount it via Windows Explorer, walk away. Then, every time the backup script runs, it would be script start -> StartScript hook -> mountBatchScript -> backup -> script ends -> EndScript hook -> unmountBatchScript.

    I'm not a Windows scripter, so there are some questions you'll have to answer for yourself, but they should be easy enough to test. I don't know if RS waits for the hooked scripts to finish, though that shouldn't be a problem in this case as the BU script will re-try until the media is available (within timeout limits, obv). I also don't know what privileges RS would run the script with -- Windows privileges as a whole are a mystery to me! -- but would optimistically assume that you could get round any problems by creating the correct local user and using RS's "Run as user" setting (as discussed in your "Privileges" thread). There's a very rough mountvol sketch at the end of this post.

    But this is all theoretical for me -- and I, for one, would love to hear how you get on!
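
    And, as promised, a very rough sketch of those mount/unmount batch scripts using Windows' built-in mountvol command -- the drive letter and volume GUID are placeholders, so run mountvol with no arguments first to find the real GUID, and test by hand before wiring anything into the hooks:

        REM --- mountBackupDrive.bat (run from the StartScript hook) ---
        REM Re-attach the backup drive at X: using its volume GUID (placeholder below).
        mountvol X: \\?\Volume{00000000-0000-0000-0000-000000000000}\

        REM --- unmountBackupDrive.bat (run from the EndScript hook) ---
        REM Dismount the volume and remove the drive letter once the backup has finished.
        mountvol X: /P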
