
Posts posted by Nigel Smith


  1. Mac screenie, but repeated mentions of Windows -- I'll assume you've got Windows client problems...

    See if you can find your current client version. Uninstall it, restart the client machine, re-install using a fresh download from here -- personally, I'd start with the most recent (16.5.1.109) and, if that was still problematic, work my way back version by version. If you don't want to re-register the client with the server you could take a punt and simply re-install over the top of the old client.

    I've just installed the latest client on a clean, up-to-date, Windows VM without issues, so it looks like something specific to this instance rather than a generic Windows problem (but I don't deal with Windows much, so I'm probably wrong...).


  2. To be clear -- I don't think his client "went bad", I think it just got stuck in a reserved state and simply deleting the retroclient.state file and restarting the Mac would have cleared it. I often see "-505s" here, sometimes because the server borked and, more often, because the client was deliberately disconnected (eg laptop lid closed or network plug pulled) partway through a backup.

    It used to be a simple command-click on the client control panel's "Off" button (a simple click turns the client off but leaves the process running and the .state file untouched, cmd-click shuts down the process and removes the .state file) -- I don't know if the same works with "modern" versions, but we lock ours down anyway so the user can no longer solve this themselves and it's easier for me to visit than explain Terminal commands to them 🙂

    What I don't trust is a re-install, especially without a full and complete un-install first -- though that may just be me and the fact that I'm dealing with clients of various vintages (because it is easier to sort things out when these rare problems rear up, rather than proactively update).


  3. On 11/26/2019 at 4:14 PM, redleader said:

    It says they are offline ... these drives are not*

    It says they are offline to the Retrospect client on FS Server (which is different from offline to FS Server's OS).

    As David says, check FS Server's Privacy settings and make sure Client has Full Disk Access. You could also get clever in the Terminal with "lsof" to see if any process is blocking access to those volumes, but I'd start with a restart of FS Server to clear everything and turn to Terminal if the problem comes back.
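    For example, something like this (volume path is hypothetical -- substitute your own mount point):

    sudo lsof /Volumes/YourVolume    # list every process with files open on that volume

    ...which is usually enough to spot a blocking process, if there is one.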

    Are you actually using FS Server as a RS backup server? If not, stop the Retrospect Engine via its Preference pane -- the Engine *shouldn't* interfere with the Client's access to those volumes, but if you don't need it why take the chance?


  4. 8 hours ago, cgtyoder said:

    I tried reinstalling the client on the Mac

    Did you uninstall first, or just re-install over the top?

    My go-to for a manual Mac uninstall is still Der Flounder's instructions for 6.x -- I just do each line of his script manually in the Mac Terminal app, ignoring any that don't autocomplete/exist.

    In this case, if you want to save time you could probably just get away with killing the client, killing pitond, deleting retroclient.state (the "I am reserved"-causing file) and restarting the Mac.
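    A minimal sketch of that in Terminal -- process names and the .state file's path can vary between client versions, so check before deleting anything:

    sudo killall "Retrospect Client"                 # the client app, if running
    sudo killall pitond                              # the client daemon
    sudo rm /Library/Preferences/retroclient.state   # the path on my installs; yours may differ

    ...then restart the Mac and the client should come back un-reserved.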


  5. I'm not sure what you are trying to achieve here. Retrospect won't quit after a Proactive script has finished (see p253 of the manual), and you have no finish time on an always-on schedule anyway. So why not just launch Retrospect and leave it running?

    6 hours ago, meld51 said:

    I have notification emails set up and I can see that Retrospect is automatically launched now and then but it closes down in the same minute that it runs. Why does it do this?

    I'm not sure why it's starting up, since always-on has no start time. Do you have another script that would trigger the launch? As to why it exits again straight away -- since always-on has no start time, there's no script set to run in the next 12 hours so RS does what you've told it to -- exit.

    If you are trying to minimise the amount of time Retrospect is running for some reason, "normal" backup scripts would be a better approach. But if you must use Proactive, eg for automatic media rotation, you might be able to do what you want by scheduling Proactive to specific start and stop times, then scheduling a "spoof" normal backup script to run just after your Proactive stop-time, with the "Exit" startup preference set.


  6. On 11/21/2019 at 5:02 PM, billbobdole said:

    Nothing has changed recently except the macOS & Retro updates.

    Are you saying that "it worked before the OS and RS updates, but hasn't since"? Or has it worked since the updates but is now failing? If it's the former, try Support. If it's the latter:

    Do a thorough disk check on your SMB server -- we've seen before on these forums that failing drives/corrupt volumes cause this error.

    Are you manually mounting the SMB share, or is it done from RS as per your last screenie? I suggest you pick one or the other, to prevent conflicts.
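    If you're not sure what's mounted, or how, macOS's smbutil will show you:

    smbutil statshares -a    # lists every mounted SMB share with its connection attributes

    ...so a share mounted twice, or over an unexpected protocol version, will stand out.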

    Assuming the disk check passes, this feels like a permissions issue, RS being able to read but not write, so check your SMB server's logs for errors. And check you've granted RS "Full Disk Access" in System Preferences->Security & Privacy->Privacy.

    While personally I'd prefer to use Disk Media, I can understand why the "self-contained" File Media format could be advantageous in some situations. And you should be OK capacity-wise, given that you've got larger -- both in size and file count -- listed.


  7. On 11/20/2019 at 6:44 AM, DavidHertzberg said:

    -515 and -530 errors have to do with connecting via the Piton Protocol

    -515 is "data becoming corrupt while being transferred" -- ie the client is still connected (else it would be a -519).
    -519 is "network communication failed" -- ie the client was found, then disappeared.
    -530 is "client not found" and can be thrown for all sorts of reasons, from the client not being powered on through hubs/switches "invisibly" dropping multicast packets -- including my oft-mentioned "client binding to the wrong interface".

    -530 is the easiest to differentiate, since the client was never there 🙂 -515 and -519 are more of a grey area -- IME, a "fluttering" switch port or NIC can give either a -515 (brief, occasional drops cause data problems without the client dropping for long enough to trigger the "disappeared" time-out) or a -519 (more prolonged failures where RS registers a disconnect though, client-side, everything seems OK because more usual network operations -- file sharing, browsing, etc -- are less sensitive).

    FWIW -- my understanding is that all the above involve the piton protocol, regardless of connection type, since that's how RS adds clients, accesses them, and transfers data to/from them.

    On 11/22/2019 at 7:18 PM, twickland said:

    We have solved the problem.

    Hurrah! Though, in our case, it won't have been caused by RS's auto-update -- because we don't use it! Perhaps a Windows update changed a linked library, a security fingerprint, or something -- or, my particular fave, a Defender update did its usual "reset the firewall" nonsense 🙂 I'll try and get some reinstalls done today and see what happens...


  8. On 11/24/2019 at 5:36 AM, x509 said:

    I think all these files represent true "transient data," where I would need only the latest version or maybe 2 versions for restore purposes.

    If you need to restore it (ie can't just copy it from elsewhere or easily regenerate it from other sources) then it isn't transient data -- and you've already assigned it a "value" of "I need the last two versions". IMO, David's nailed it -- separate script with its own grooming strategy.

    Because of the way we back up, I've never used grooming beyond a quick play. Is it possible to specify one or more strategies that apply to the same backup set to, for example, keep x versions of files in directory /foo, y versions of everything in /bar, and z versions of everything else?

    As for your overall backup/management strategy, I can only echo David -- awesome! Would that I were as conscientious...


  9. On 11/19/2019 at 10:49 PM, bookcent said:

    Thanks for the reply, I do have more than one connection though the ethernet is top priority and has a fixed IP address

    That doesn't matter. The client will bind to the first interface to become available, regardless of your preferences. So if your ethernet is slower to come up than your wireless -- because, for example, your hub is a bit flakey! -- the client binds to the wireless interface instead of the ethernet.

    You can solve that by turning off your wireless and restarting the client but, since you're on a static IP for your wired connection, it's even easier to use "ipsave" (since that persists across restarts etc) to force the client to bind to the ethernet interface.
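    The exact invocation varies by client version, and the binary's location below is an assumption on my part (find yours first), but it's along the lines of:

    sudo /Applications/Retrospect\ Client.app/Contents/Resources/retroclient -ipsave 192.168.1.50    # your ethernet's static IP

    ...after which the client should bind to that address across restarts until you clear the setting.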

    And yes -- if you suspect your hub for any reason, get another. It could save you a lot of troubleshooting in the future!


  10. 9 hours ago, bookcent said:

    If I reboot the remote mac it sometimes works and does a backup

    Do you have more than one network connection on the client, eg you're using ethernet and wireless is also turned on? You mentioned a "multicast port unavailable" message in another thread, which is usually seen after the RS client sees a network change -- either a different IP on the same interface, or a different interface becoming primary. Make sure ethernet is above wireless in your service order (System Preferences->Network, click the cog-wheel under the connection list, "Set Service Order...", drag into the order you want) and/or turn off your wireless.
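    If you'd rather script the service order than click through System Preferences, networksetup can do it -- note that you must list *every* service, in the order you want, and the names below are assumptions (check with the first command):

    networksetup -listnetworkserviceorder
    sudo networksetup -ordernetworkservices "Ethernet" "Wi-Fi" "Bluetooth PAN"

    ...which does the same thing as the drag-into-order dance above.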

    You can often get multicast working again (assuming your IP is stable!) by turning the client off and on. If going direct as David suggests above you may still have to sort this out -- if the client has bound to the wireless but you entered the ethernet IP, you'll have the same problem -- or make use of static IPs and Retroclient's "ipsave" command, documented here.


  11. 4 hours ago, DavidHertzberg said:

    BTW, could you tell us why you are backing up the same "clients" with two "backup servers‚ÄĒand what version of Retrospect Mac you are using?

    Simple enough -- we've been using the Mac version for years, currently on 13.5. With Apple dropping server and enterprise I was looking at moving to Windows (now at 15.6.1), if only for 10GbE, so we've been running both in parallel. But the "evaluation" has gone on rather longer than expected -- partly because the goalposts keep moving (eg Apple introducing 10GbE on the Mac Mini) but mainly because I really, really, really don't want to permanently move to Windows 🙂

    I'll have to make a decision soon, at which time version numbers will jump to current. But, at the moment, all clients are being backed up at least once -- and if it ain't broke...

    5 hours ago, DavidHertzberg said:

    Is it possible that your "client" machines are defined to your Retrospect Windows "backup server" by the Direct Access Method

    The clients are "Direct IP" (the terminology RS uses in the Console's Sources summary when you've used an IP address to "Add source directly...") on both servers (since they're statics on a private subnet), so it isn't that. No changes to the Mac server, so it isn't that. Which leaves either network hardware (unlikely, since both servers are on the same switch), the router/IPS security settings (possible, but unlikely since both servers are on the same subnet so are subject to the same policies etc), or Windows. All I need to do is test with another Windows 10 client on the same subnet as the server to start narrowing things down. But, as above -- meh, Windows ūüėȬ†

    In my experience, there's a commonality between -515 and -519 errors, a brief "ceased to communicate" usually being reported as a -519 but sometimes as a -515 -- especially when, as in my case, the software isn't as up-to-date as it should be. Since the troubleshooting is similar for both we may as well consider them the same.


  12. On 11/15/2019 at 9:38 PM, twickland said:

    We have run into a peculiar issue where most of our Windows 10 client machines can no longer be backed up.

    FWIW, my Mac RS server has been showing "error -515 (Piton protocol violation)" errors from our Win10 PCs since the beginning of this month -- no successful backups at all.

    I don't think it's a network hardware issue, since they *are* being successfully backed up by the Windows RS server!

    I was keeping quiet, assuming it was my rather old Mac RS software playing badly with a Windows update, but I'll have a closer look tomorrow.


  13. On 11/17/2019 at 3:47 AM, x509 said:

    True enough, but there is no report that identifies those files that have been backed up N times in the last N days/weeks

    True enough 😉, but is one really necessary?

    "Transient data", in amounts that matter to disk sizing/grooming policies, is usually pretty obvious and a result of your workflow (rather than some background OS thing). Think video capture which you then upload to a server -- no need to back that up on the client too. Or data that you download, process, then throw away -- back up the result of the processing, not the data itself. Home-wise, RS already has a "caches" filter amongst others, and why back up your downloads folder when you can just re-download, etc, etc.

    OP ran out of space on an 8TB drive with only a 1 month retention policy. That's either a woefully under-specced drive or a huge amount of churn -- and it's probably the latter:

    On 10/31/2019 at 12:54 PM, NoelC said:

    given the amount of data my system crunches through

    ...rather than "given the amount of data on my machine".

    Like David, I'd be reluctant to let RS do this choosing for me -- "transient" is very much in the eye of the beholder and, ultimately, requires a value to be placed on that data on that machine before a reasoned decision can be made.


  14. On 11/14/2019 at 6:13 PM, kidziti said:

    Nigel - Windows scripting does sound intriguing. I also wonder if increasing scripting power within Retrospect is something that Storcentric is considering. That's a high hope, of course. I could probably figure Windows scripting out, but my obstacle at the moment is finding the time to do so.

    The obvious question is -- why would they bother?

    Building scripting support into an application, for each OS it runs on, is a lot of work, especially with both Windows and OS X constantly moving the goalposts! Better, IMO, for RS to concentrate on their core functionality -- difficult enough because of the aforementioned moving goalposts -- while providing ways to interact with RS from "outside" with whatever scripting language a user is comfortable with. There's currently a bunch of events revealed by RS via Script Hooks and you can start a script by using a Run Document (in Windows -- I think us Mac users may have slipped behind here, but haven't checked). Most other things -- adding clients, creating sets, and so on -- are pretty much edge cases which few users would ever need, so not worth the development time.

    OS scripting can be remarkably easy. There's enough info in the two pages I linked above that you could do what you want without any other knowledge of Windows scripting. You could then think of other features (maybe take the scheduling outside of RS? Initiate the whole process based on some other event? Voice controlled via Alexa/Siri/whatever!) and learn as you add them. I think that's how most scripters find their feet, one small "utility" at a time, learning what they need as they go -- so jump in and give it a try!

    (And if anyone from Storcentric is reading -- ignore me! RS would be *so* much better with a fully-revealed AppleScript dictionary and the ability to both send and receive Apple Events. Go on, you know you want to... And think of the sales demo -- "Hey Siri, activate script Important Machines, add client MD's Mac to it, then run it" and the Managing Director's computer is being backed up!)


  15. 14 hours ago, DavidHertzberg said:

    which is why I assume he wants to establish these alternating-between-two-Backup-Sets scripted Backups‚ÄĒprobably to protect against ransomware as he says in his OP.

    As you've since realised, that wasn't the problem. The ultimate aim is to limit a drive's exposure to ransomware attacks by minimising the amount of time it is connected to the system -- in an ideal world you'd have a scripted "mount disk, run backup, unmount disk on completion" which would run without human intervention. That would have been easy on the Mac in the "old days", when RS had OK AppleScript support; now you should probably use Script Hooks, which is something I've not really played with. Run files as described above would also work, if you prefer to do your scheduling from outside RS.
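    The mount/unmount halves are trivial on the Mac, at least -- one-liners round diskutil (disk identifiers below are made up; check yours with "diskutil list" first):

    diskutil mount disk2s2        # run before the backup starts
    diskutil unmountDisk disk2    # run after the backup finishes

    ...it's the wiring of those into Script Hooks that I've not tested, so treat this as a sketch rather than gospel.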

    kidziti -- you'll find more about Script Hooks here. You'll see there's both StartScript and EndScript events, and a quick google gets me this page with Windows batch scripts for mounting and unmounting a volume. So I'm thinking you'd set up the script, plug in the drive, unmount it via Windows Explorer, walk away. Then, every time the backup script runs, it would be script start -> StartScript hook -> mountBatchScript -> backup -> script ends -> EndScript hook -> unmountBatchScript.

    I'm not a Windows scripter, so there are some questions you'll have to answer for yourself, though they should be easy enough to test. I don't know if RS waits for the hooked scripts to finish, though that shouldn't be a problem in this case as the BU script will re-try until the media is available (within timeout limits, obv). I also don't know what privileges RS would run the script with -- Windows privileges as a whole are a mystery to me! -- but would optimistically assume that you could get round any problems by creating the correct local user and using RS's "Run as user" setting (as discussed in your "Privileges" thread).

    But this is all theoretical for me -- and I, for one, would love to hear how you get on!


  16. 18 hours ago, kidziti said:

    I'd like a situation such that when I plug in the destination drive for this backup, Retrospect sees it and runs the backup script that I designed for that destination. I suspect that is not possible but figured I would ask anyways.

    You could just schedule the script as normal, with a short "media timeout" window, so that if the disk is attached the script runs but, if it isn't, it waits a bit, errors, then carries on with whatever is next.

     If you want to get a bit more nerdy, what you need is a Windows script/utility that regularly polls mounted volumes for the drive and, if it is there, executes the appropriate Retrospect "Run Document"  -- see the "Automated Operations" section of the RS manual for more about these but, basically, when you create a schedule you have the option to save it as a Run Document that can be launched by eg double-clicking in Windows Explorer. Extra credits if you then use a script trigger at the end of the schedule to run another Windows script/utility that unmounts the drive for you...

    ObDisclaimer: Certainly doable on a Mac, and I'd say *probably* doable on Windows, but you'll have to wait for one of the Windows gurus to chime in if you've any scripting questions.
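    FWIW, a Mac take on the polling idea might look something like this -- the volume name and Run Document path are made up, and I haven't checked whether current Mac versions still support run documents, so very much a sketch:

    #!/bin/bash
    # Hypothetical: wait for a backup volume to appear, then launch a saved Run Document
    VOLUME="/Volumes/BackupDrive"          # made-up name -- substitute your drive's
    while [ ! -d "$VOLUME" ]; do
      sleep 60                             # poll once a minute
    done
    open "$HOME/Documents/MyBackup.run"    # made-up path to the Run Document

    ...with the equivalent on Windows left, as I say, to the gurus.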

     


  17. 13 hours ago, kidziti said:

    Thanks, Nigel. I was thinking of making an account specifically for Retrospect but unless I can figure out how to give it the same broad access as Administrator, I'm not sure it will work and don't see any advantage of that over using the logged-in user account.

    To be clear -- the "retrospect" account is on your NAS. All you need to do is set up another account on the NAS with full access to all the NAS's contents, then enter those details in the "Log in as..." dialog after right-clicking the NAS volume in RS's "Volumes" window. How you set up the account will depend on the NAS's OS -- some come preconfigured with a "backup" group, most (home) ones don't. The nerd in me always advises against giving the backup account the same privs as the "admin" account on the NAS -- if nothing else, find a way to prevent the backup account being used to administer the NAS via the web interface, ie give it access to file sharing only. Not really necessary in a home environment, but restricting accounts to what is necessary and no more than that is a good general habit to get into (which is a case of "do as I say, not as I do", I'm afraid ūüė쬆).

    There are many other advantages. In this case the two that first spring to mind are a clear differentiation between "Administrator" (the account you are running Retrospect under on the PC) and "backup" (the account RS uses to access the NAS shares) and the ability to go through the NAS's logs looking for backup related events without having to manually filter out all the "Administrator" entries created simply by you trying to look at the logs!


  18. On 11/3/2019 at 12:55 PM, kidziti said:

    So I hit the curve again - figured that while Windows default when enabling the Administrator account is no password, I needed to add a password because Retrospect cannot have a "blank" in the password box

    Blame Windows for that -- allowing you to create an admin-level account with a blank password is beyond stupid in this day and age.

    On 11/3/2019 at 12:55 PM, kidziti said:

    It took me some extra time to understand what to put in the log-on prompt - and realized it was the computer name.

    I'm no Windows guru (mbennett?) but I've a feeling that's the "domain" field. Since you aren't running under Active Directory or similar then yes, you should use the local computer name. But that probably won't work as the login for your NAS, since "T1650\Administrator" and "NAS\Administrator" aren't the same user. So I'd do as you did and add the auto-login via the "Volumes" pane.

    What I'd suggest is you create a new user on the NAS -- 'retrospect', 'backups', or similar -- and give that user full access to everything you want to back up. Then use *that* account rather than Administrator as the auto-login account in RS's "Volumes" pane. If nothing else it'll make troubleshooting easier later, being able to refer to different accounts for different operations! It'll certainly make it easier to check the NAS's logs to see which account your PC is using to try and access the shares, and why you are being denied.

    But as mbennett says -- if you've just bought RS then you're entitled to support. Worth it if only to find out about the user display in the title bar...


  19. On 11/1/2019 at 5:20 PM, NoelC said:

    And how would this be different than choosing a fixed horizon of N backups, or following even the complex grooming policy that's default?

    Simple example -- you've a system with enough space to account for your expected 5% churn daily, so you set up a grooming policy that keeps things for 14 days to give you some wiggle room. You expect to always be able to restore a file version from 2 weeks ago.

    You find out about this whizzy new grooming feature which clears enough space for your latest backups every session, and enable it.

    Couple of nights later a client's process (or a typical user!) runs amok and unexpectedly dumps a shedload of data to disk. RS does exactly as asked and, to make space for that data, grooms out 50% of your backups. And suddenly, unexpectedly, that file version from 2 weeks ago is no longer restorable...

    But I agree with you -- backups need to be reliable, dependable, and behave as expected. Which brings us to...

    21 hours ago, NoelC said:

    Y'know what, forget it.

    To be honest, I don't blame you! If you can't get software to reliably work how you want it to -- particularly, perhaps, backup software -- you should cut your losses and look elsewhere. While I'd love you to continue using RS, your situation isn't mine, your requirements aren't mine, so your best solution may not be mine.


  20. On 10/8/2019 at 6:14 PM, Xenomorph said:

    Does anyone else do backups over 10 Gbps networks?

    Yes, but only with 1Gbps clients in the main. I use it more so I've the network capacity to parallelise operations rather than to speed up a single op like you. And I run on a Mac -- not a particularly well specced Mac, either... That said, my single-op speeds are comparable to yours and Retrospect transfers at less than half the speed of a Finder copy.

    But...

    If I set up a second share on the same NAS as another source, script that to back up to a different set, and run both that and the test above at the same time, the transfers are almost as fast on each as they were on the single (ie I'm now hitting constraints on the NAS-read and/or RS server).

    My totally-pulled-out-of-a-hat theory is that each RS activity thread has a bottleneck which limits op speed to what you are seeing, probably server hardware dependent. Think something like "all of an activity thread's ops happen on a single processor core". So a server with a pair of 4-core processors would only be reporting maybe 20% usage, but that is 7 cores barely ticking away while the RS activity thread is running at 100%, and constrained, on the eighth. But it could equally involve the buffer (as I understand it, an RS backup is repeated "read data from client into buffer til full, write data to disk from buffer til empty, do housekeeping, repeat") or any number of things I'm not qualified to even guess at!

    If you can, try splitting your multiple TBs into separate "clients" backed up to separate sets, and see if it makes a difference. Otherwise you may just have to accept that you've outgrown RS, at least in the way you're currently using it, and will have to think again.


  21. I always thought those options applied to sources with the RS Client installed, rather than non-client shares -- or does it consider the share to be mounted on a client which also happens to be the server? I'd certainly never think of changing options in the "Macintosh" section when backing up a Synology, however it was attached!

    Every day's a learning day 🙂

     


  22. 11 hours ago, kidziti said:

    I have searched all over the site to see what could have caused this but am at a loss.

    "error -1116 (can't access network volume)".

    Since it happened part-way through, it looks like the network connection dropped rather than a permissions thing. Check both PC and NAS for energy-saver-type settings, logs for restarts/sleep events, switches/hubs for reboots, etc. What else was on the network, and busy, during the backup period?

    Looks like a Netgear NAS, so 1GE unless it's a pretty old model, but we're only seeing 100Mb/s across the network -- something just doesn't feel right. Perhaps try a direct ethernet connection between the PC and NAS, if only to get the initial backup completed cleanly.


  23. The problem with having a "nightly" script and a "pro-active" script backing up to the same set is that only one can write to that set at a time, blocking the other while it is running. While David has some suggestions above, may I offer another?

    Move *all* your systems onto the "pro-active" script!

    Schedule it to run during your overnight window. Set it to back up sources every 22 hours or so (roughly 24 hours minus the time taken to back up all systems) so it only backs up each once. When it kicks off it will start polling the network for client availability, starting with the one least-recently backed up. Each system in turn will be backed up if present, or skipped for a while (I think the default is 30 minutes) then checked for again -- meanwhile the script continues with the other clients.

    It's not good if you need to back things up in a certain order, or if you need to hit a certain time (eg quiescing databases first), but it's great to make sure that "irregular" clients get backed up if available and that those "most in need" get priority.

    AFAIK, with two backup sets listed *and available* the above would alternate between them nightly, but things may have changed in more recent RS versions.


  24. That would be complex, fraught with error, and have huge potential for unexpected data loss.

    It sounds like you've either under-specced your target drive, have too long a retention period, or have a huge amount of data churn. First two are easy enough to sort and, for the last, do you really need to back up all that data?

    We have a policy here that if data is transient or can easily be regenerated it should not be backed up. Maybe you could do the same, either by storing it all in a directory that doesn't get backed up or by using rules that match your workflows and requirements to automatically reduce the amount of data you're scraping.

    Whilst it would be nice to back up everything and keep it forever, resources often dictate otherwise. So you'll have to find a balance that you (and/or any Compliance Officer you may answer to!) can live with.


  25. 17 hours ago, Lennart_T said:

    Now I have a Synology DS218j as a NAS server. The audio and video files were copied over to the NAS, 2.1 TB in total.

    What's the Synology volume formatted as? (btrfs can do funky things with metadata, as mentioned in a previous thread.) Did you make any service changes on the Synology between the backups? Are the Synology and Retrospect clocks in sync and in the same time zone (only thinking of that because we in the UK have just switched from BST, which often caused problems in the past 😉).

    17 hours ago, Lennart_T said:

    Is there a way to get Retrospect to tell me why it thinks it needs to backup all those files and what can I do about it?

    AFAIK, we don't have the equivalent of RS Windows's "Preview" when backing up. You might be able to tell after the fact by restoring the same file from both snapshots and comparing them and their metadata. Or winding the log levels up to max and sorting through all the dross for some information gems -- you'll want to define a "Favo(u)rite" with just a few files for that, though!
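    For the metadata comparison, Terminal again -- restore a copy of the same file from each snapshot (paths below are made up) and compare:

    stat -x old-restore/track01.aiff new-restore/track01.aiff    # verbose times, permissions, ownership

    ...any difference in the modification/change times or ownership is a likely candidate for what's triggering the re-backup.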

    17 hours ago, Lennart_T said:

    Bonus question: Is there a utility that can check and correct the file dates (for instance) so they are not somewhere in the future? I have TechTool Pro 11, but that does not seem to do the trick.

    Terminal?

    find . -ctime -1s

    ...will find "future modified" files (I think -- tricky to test!) in the current working directory and deeper. "-ctime" is the most inclusive "change", including file modification, permissions and ownership.

    What do you then want to do? If it's just "set modification time to now" then

    find . -ctime -1s -exec touch {} \;

    ...should do the job.

    That's working with the Synology mounted on your Mac. If you want to do it directly on the Synology, which is probably faster/better but assumes you can ssh in, then this should work with its "idiosyncratic" version of *nix:

    touch timeStamp.txt;find . -newer timeStamp.txt -exec touch {} \;

    ...where we're creating a file with a mod time of "now", finding all files newer than that (ie timestamped in the future), and setting their mod times to "now".

    All the above works on my setup here but, as always, you should test on something you can easily restore if it all goes pear-shaped...
