Nigel Smith

  1. Nigel Smith

    Automating a removable disk backup

    As you've since realised, that wasn't the problem. The ultimate aim is to limit a drive's exposure to ransomware attacks by minimising the amount of time it is connected to the system -- in an ideal world you'd have a scripted "mount disk, run backup, unmount disk on completion" which would run without human intervention. That would have been easy on the Mac in the "old days", when RS had OK AppleScript support; now you should probably use Script Hooks, which is something I've not really played with. Run documents, as described above, would also work if you prefer to do your scheduling from outside RS.

    kidziti -- you'll find more about Script Hooks here. You'll see there are both StartScript and EndScript events, and a quick google gets me this page with Windows batch scripts for mounting and unmounting a volume. So I'm thinking you'd set up the script, plug in the drive, unmount it via Windows Explorer, and walk away. Then, every time the backup script runs, it would be: script start -> StartScript hook -> mountBatchScript -> backup -> script ends -> EndScript hook -> unmountBatchScript. Something like the sketch below.

    I'm not a Windows scripter, so there are some questions you'll have to answer for yourself, but they should be easy enough to test. I don't know if RS waits for the hooked scripts to finish, though that shouldn't be a problem in this case as the BU script will re-try until the media is available (within timeout limits, obv). I also don't know what privileges RS would run the script with -- Windows privileges as a whole are a mystery to me! -- but would optimistically assume that you could get round any problems by creating the correct local user and using RS's "Run as user" setting (as discussed in your "Privileges" thread).

    But this is all theoretical for me -- and I, for one, would love to hear how you get on!
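    As a purely illustrative, untested sketch -- the drive letter and volume GUID are placeholder assumptions, so substitute your own -- the two hooked batch scripts could be as minimal as:

        rem mount.bat -- run from the StartScript hook to attach the backup volume.
        rem The GUID is a placeholder: run "mountvol" with no arguments to list your volume's real one.
        mountvol E: \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

        rem unmount.bat -- run from the EndScript hook to dismount the volume and take it offline.
        mountvol E: /p

    Note that mountvol's /p switch dismounts the volume as well as removing the drive letter, which is what you want here -- /d only removes the mount point and leaves the volume online.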
  2. Nigel Smith

    Automating a removable disk backup

    You could just schedule the script as normal, with a short "media timeout" window, so that if the disk is attached the script runs but, if it isn't, it waits a bit, errors, then carries on with whatever is next.

    If you want to get a bit more nerdy, what you need is a Windows script/utility that regularly polls mounted volumes for the drive and, if it is there, executes the appropriate Retrospect "Run Document" -- see the "Automated Operations" section of the RS manual for more about these but, basically, when you create a schedule you have the option to save it as a Run Document that can be launched by eg double-clicking in Windows Explorer. There's a rough sketch of the polling idea below. Extra credits if you then use a script trigger at the end of the schedule to run another Windows script/utility that unmounts the drive for you...

    ObDisclaimer: Certainly doable on a Mac, and I'd say *probably* doable on Windows, but you'll have to wait for one of the Windows gurus to chime in if you've any scripting questions.
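    For what it's worth, here's that polling idea as a rough, untested batch sketch -- the drive letter, the polling interval, and the Run Document's name and location are all assumptions (check the actual extension RS gives your saved Run Document, too):

        rem pollAndRun.bat -- wait for the backup drive to appear, then launch the Run Document.
        :poll
        if not exist E:\ (
            rem Drive not mounted yet -- wait 60 seconds and check again.
            timeout /t 60 /nobreak >nul
            goto poll
        )
        rem Opening the saved Run Document kicks off the scheduled backup in Retrospect.
        start "" "C:\Scripts\NightlyBackup.rrr"

    A proper utility would add an overall timeout so it doesn't poll forever, but that's the basic loop.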
  3. Nigel Smith

    Administrator account has insufficient permissions

    To be clear -- the "retrospect" account is on your NAS. All you need to do is set up another account on the NAS with full access to all the NAS's contents, then enter those details in the "Log in as..." dialog after right-clicking the NAS volume in RS's "Volumes" window. How you set up the account will depend on the NAS's OS -- some come preconfigured with a "backup" group, most (home) ones don't.

    The nerd in me always advises against giving the backup account the same privs as the "admin" account on the NAS -- if nothing else, find a way to prevent the backup account being used to administer the NAS via the web interface, ie give it access to file sharing only. Not really necessary in a home environment, but restricting accounts to what is necessary and no more than that is a good general habit to get into (which is a case of "do as I say, not as I do", I'm afraid 😞 ).

    There are many other advantages. In this case the two that first spring to mind are a clear differentiation between "Administrator" (the account you are running Retrospect under on the PC) and "backup" (the account RS uses to access the NAS shares), and the ability to go through the NAS's logs looking for backup-related events without having to manually filter out all the "Administrator" entries created simply by you trying to look at the logs!
  4. Nigel Smith

    Administrator account has insufficient permissions

    Blame Windows for that -- allowing you to create an admin-level account with a blank password is beyond stupid in this day and age.

    I'm no Windows guru (mbennett?) but I've a feeling that's the "domain" field. Since you aren't running under Active Directory or similar then yes, you should use the local computer name. But that probably won't work as the login for your NAS, since "T1650\Administrator" and "NAS\Administrator" aren't the same user. So I'd do as you did and add the auto-login via the "Volumes" pane.

    What I'd suggest is that you create a new user on the NAS -- 'retrospect', 'backups', or similar -- and give that user full access to everything you want to back up. Then use *that* account, rather than Administrator, as the auto-login account in RS's "Volumes" pane. If nothing else it'll make troubleshooting easier later, being able to refer to different accounts for different operations! It'll certainly make it easier to check the NAS's logs to see which account your PC is using to try and access the shares, and why you are being denied.

    But as mbennett says -- if you've just bought RS then you're entitled to support. Worth it if only to find out about the user display in the title bar...
  5. Nigel Smith

    Grooming Policy Too Simplistic

    Simple example -- you've a system with enough space to account for your expected 5% daily churn, so you set up a grooming policy that keeps things for 14 days to give you some wiggle room. You expect to always be able to restore a file version from 2 weeks ago. You find out about this whizzy new grooming feature which clears enough space for your latest backups every session, and enable it. A couple of nights later a client's process (or a typical user!) runs amok and unexpectedly dumps a shedload of data to disk. RS does exactly as asked and, to make space for that data, grooms out 50% of your backups. And suddenly, unexpectedly, that file version from 2 weeks ago is no longer restorable...

    But I agree with you -- backups need to be reliable, dependable, and behave as expected. Which brings us to...

    To be honest, I don't blame you! If you can't get software to reliably work how you want it to -- particularly, perhaps, backup software -- you should cut your losses and look elsewhere. While I'd love you to continue using RS, your situation isn't mine and your requirements aren't mine, so your best solution may not be mine.
  6. Nigel Smith

    Does anyone else backup over 10 Gbps?

    Yes, but mainly with 1Gbps clients. I use it more so I've the network capacity to parallelise operations, rather than to speed up a single op like you. And I run on a Mac -- not a particularly well-specced Mac, either... That said, my single-op speeds are comparable to yours, and Retrospect transfers at less than half the speed of a Finder copy.

    But... If I set up a second share on the same NAS as another source, script that to back up to a different set, and run both that and the test above at the same time, the transfers are almost as fast on each as they were on the single (ie I'm now hitting constraints on the NAS-read and/or RS server).

    My totally-pulled-out-of-a-hat theory is that each RS activity thread has a bottleneck which limits op speed to what you are seeing, probably dependent on server hardware. Think something like "all of an activity thread's ops happen on a single processor core": a server with a pair of 4-core processors would only be reporting maybe 20% usage, but that is 7 cores barely ticking away while the RS activity thread is running at 100%, and constrained, on the eighth. But it could equally involve the buffer (as I understand it, an RS backup is repeated "read data from client into buffer til full, write data to disk from buffer til empty, do housekeeping, repeat") or any number of things I'm not qualified to even guess at!

    If you can, try splitting your multiple TBs into separate "clients" backed up to separate sets, and see if it makes a difference. Otherwise you may just have to accept that you've outgrown RS, at least in the way you're currently using it, and will have to think again.
  7. Nigel Smith

    Files backed up again

    I always thought those options applied to sources with the RS Client installed, rather than non-client shares -- or does it consider the share to be mounted on a client which also happens to be the server? I'd certainly never think of changing options in the "Macintosh" section when backing up a Synology, however it was attached! Every day's a learning day 🙂
  8. Nigel Smith

    Suddenly Error 1116 Decides to Happen

    "error -1116 (can't access network volume)". Since it happened part-way through, it looks like the network connection dropped rather than a permissions thing. Check both PC and NAS for energy-saver-type settings, logs for restarts/sleep events, switches/hubs for reboots, etc. What else was on the network, and busy, during the backup period? Looks like a Netgear NAS, so 1GE unless it's a pretty old model, but we're only seeing 100Mb/s across the network -- something just doesn't feel right. Perhaps try a direct ethernet connection between the PC and NAS, if only to get the initial backup completed cleanly.
  9. Nigel Smith

    Errors when laptop not connected

    The problem with having a "nightly" script and a "pro-active" script backing up to the same set is that only one can write to that set at a time, blocking the other while it is running. While David has some suggestions above, may I offer another? Move *all* your systems onto the "pro-active" script!

    Schedule it to run during your overnight window. Set it to back up sources every 22 hours or so (roughly 24 hrs minus the time taken to back up all systems) so it only backs up each once. When it kicks off it will start polling the network for client availability, starting with the one least-recently backed up. Each system in turn will be backed up if present, or skipped for a while (I think the default is 30 minutes) then checked for again -- meanwhile the script continues with the other clients.

    It's not good if you need to back things up in a certain order or hit a certain time (eg quiescing databases first), but it's great for making sure that "irregular" clients get backed up when available and that those "most in need" get priority. AFAIK, with two backup sets listed *and available*, the above would alternate between them nightly, but things may have changed in more recent RS versions.
  10. Nigel Smith

    Grooming Policy Too Simplistic

    That would be complex, fraught with error, and have huge potential for unexpected data loss. It sounds like you've either under-specced your target drive, have too long a retention period, or have a huge amount of data churn. The first two are easy enough to sort and, for the last, do you really need to back up all that data?

    We have a policy here that if data is transient or can easily be regenerated it should not be backed up. Maybe you could do the same, either by storing it all in a directory that doesn't get backed up or by using rules that match your workflows and requirements to automatically reduce the amount of data you're scraping. Whilst it would be nice to back up everything and keep it forever, resources often dictate otherwise, so you'll have to find a balance that you (and/or any Compliance Officer you may answer to!) can live with.
  11. Nigel Smith

    Files backed up again

    What's the Synology volume formatted as? (btrfs can do funky things with metadata, as mentioned in a previous thread.) Did you make any service changes on the Synology between the backups? Are the Synology and Retrospect clocks in sync and in the same time zone? (Only thinking of that because we in the UK have just switched from BST, which often caused problems in the past 😉)

    AFAIK, we don't have the equivalent of RS Windows's "Preview" when backing up. You might be able to tell after the fact by restoring the same file from both snapshots and comparing them and their metadata. Or by winding the log levels up to max and sorting through all the dross for some information gems -- you'll want to define a "Favo(u)rite" with just a few files for that, though!

    Terminal?

        find . -ctime -1s

    ...will find "future-modified" files (I think -- tricky to test!) in the current working directory and deeper. "-ctime" is the most inclusive "change", including file modification, permissions and ownership. What do you then want to do? If it's just "set modification time to now" then

        find . -ctime -1s -exec touch {} \;

    ...should do the job. That's working with the Synology mounted on your Mac. If you want to do it directly on the Synology, which is probably faster/better but assumes you can ssh in, then this should work with its "idiosyncratic" version of *nix:

        touch timeStamp.txt; find . -newer timeStamp.txt -exec touch {} \;

    ...where we're creating a file with a mod time of "now", finding all files newer than that (ie timestamped in the future), and setting their mod times to "now". All the above works on my setup here but, as always, you should test on something you can easily restore if it all goes pear-shaped...
  12. Nigel Smith

    Retrospect for Windows 16.5.1.109 cannot start

    It's *probably* a corrupt Retrospect configuration file (see eg here for the same). I believe the latest Windows version is still using "Config77.dat", found in "C:\ProgramData\Retrospect" (Windows users, feel free to chime in!). Either restore that from a backup, or delete it, launch RS, and reconfigure. Worth noting that you'll also find "assert_log" and "operations_log" in that same directory -- these can be opened with a text editor, so you can access the log files even when RS won't launch.
  13. What shares do you have on the 192.168.10.5 NAS? I notice that you're connecting to "admin" but "Add Favorites" is showing "admin-1". Have you also mounted the share on the RS-hosting Mac? Try "ls -al /Volumes" in the Terminal -- there may be some multiple-mounting confusion happening. If it is mounted on the Mac already, do you get a listing if you go to "Add Favorites" on the mounted volume rather than via "Shares"? Check that the username/password you are using actually allows access to the files and folders, and not just to the share itself!
  14. Seems likely you hit the nail on the head. Just for fun, what happens to times if you run a second backup of the Synology on the same day? I don't know if an RS scan would be considered "file access", but it's possible that you are triggering a metadata update across all the files (see here) which is slowing things down.
  15. Nigel Smith

    Slow closing of clients after backup

    Probably something on the server is slowing things down -- other processes using the disk, disk errors slowing down write times, potentially problems with the Retrospect catalogs... Check Windows's equivalent of the system log (Event Viewer) for errors, try running a process monitor to see if you can spot other active (especially disk I/O) processes, run a backup manually to the same set with a single client at a different time to see if it is something scheduled, and try a new backup set to eliminate catalog problems.