Everything posted by Nigel Smith

  1. Nigel Smith

    Administrator account has insufficient permissions

    Blame Windows for that -- allowing you to create an admin-level account with a blank password is beyond stupid in this day and age. I'm no Windows guru (mbennett?) but I've a feeling that's the "domain" field. Since you aren't running under Active Directory or similar then yes, you should use the local computer name. But that probably won't work as the login for your NAS, since "T1650\Administrator" and "NAS\Administrator" aren't the same user. So I'd do as you did and add the auto-login via the "Volumes" pane. What I'd suggest is that you create a new user on the NAS -- 'retrospect', 'backups', or similar -- and give that user full access to everything you want to back up. Then use *that* account rather than Administrator as the auto-login account in RS's "Volumes" pane. If nothing else it'll make troubleshooting easier later, being able to refer to different accounts for different operations! It'll certainly make it easier to check the NAS's logs to see which account your PC is using to try to access the shares, and why you are being denied. But as mbennett says -- if you've just bought RS then you're entitled to support. Worth it if only to find out about the user display in the title bar...
  2. Nigel Smith

    Grooming Policy Too Simplistic

    Simple example -- you've a system with enough space to account for your expected 5% churn daily, so you set up a grooming policy that keeps things for 14 days to give you some wiggle room. You expect to always be able to restore a file version from 2 weeks ago. You find out about this whizzy new grooming feature which clears enough space for your latest backups every session, and enable it. Couple of nights later a client's process (or a typical user!) runs amok and unexpectedly dumps a shedload of data to disk. RS does exactly as asked and, to make space for that data, grooms out 50% of your backups. And suddenly, unexpectedly, that file version from 2 weeks ago is no longer restorable... But I agree with you -- backups need to be reliable, dependable, and behave as expected. Which brings us to... To be honest, I don't blame you! If you can't get software to reliably work how you want it to -- particularly, perhaps, backup software -- you should cut your losses and look elsewhere. While I'd love you to continue using RS, your situation isn't mine, your requirements aren't mine, so your best solution may not be mine.
  3. Nigel Smith

    Does anyone else backup over 10 Gbps?

    Yes, but only with 1Gbps clients in the main. I use it more so I've the network capacity to parallelise operations rather than to speed up a single op like you. And I run on a Mac -- not a particularly well specced Mac, either... That said, my single-op speeds are comparable to yours, and Retrospect transfers at less than half the speed of a Finder copy. But... If I set up a second share on the same NAS as another source, script that to back up to a different set, and run both that and the test above at the same time, the transfers are almost as fast on each as they were on the single (ie I'm now hitting constraints on the NAS-read and/or RS server). My totally-pulled-out-of-a-hat theory is that each RS activity thread has a bottleneck which limits op speed to what you are seeing, probably server hardware dependent. Think something like "all of an activity thread's ops happen on a single processor core". So a server with a pair of 4-core processors would only be reporting maybe 20% usage, but that is 7 cores barely ticking away while the RS activity thread is running at 100%, and constrained, on the eighth. But it could equally involve the buffer (as I understand it, an RS backup is repeated "read data from client into buffer til full, write data to disk from buffer til empty, do housekeeping, repeat") or any number of things I'm not qualified to even guess at! If you can, try splitting your multiple TBs into separate "clients" backed up to separate sets, and see if it makes a difference. Otherwise you may just have to accept that you've outgrown RS, at least in the way you're currently using it, and will have to think again.
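    You can sanity-check the "parallel ops" idea outside Retrospect by mimicking two activity threads with two concurrent copies. A minimal shell sketch -- the file names and sizes here are invented for illustration, and nothing in it is Retrospect-specific:

```shell
# Create two 8 MB dummy "sources" (sizes are arbitrary)
src1=$(mktemp); src2=$(mktemp)
dd if=/dev/zero of="$src1" bs=1M count=8 2>/dev/null
dd if=/dev/zero of="$src2" bs=1M count=8 2>/dev/null
dst=$(mktemp -d)

# Run both copies concurrently, like two independent activity threads
cp "$src1" "$dst/a" &
cp "$src2" "$dst/b" &
wait   # wall-clock time is roughly that of the slower single copy

ls -l "$dst"
```

    Wrap the pair in `time` and compare against the same two copies run back-to-back; if the parallel wall time is close to a single copy's, you're limited per-op rather than by total bandwidth -- the pattern described above.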
  4. Nigel Smith

    Files backed up again

    I always thought those options applied to sources with the RS Client installed, rather than non-client shares -- or does it consider the share to be mounted on a client which also happens to be the server? I'd certainly never think of changing options in the "Macintosh" section when backing up a Synology, however it was attached! Every day's a learning day 🙂
  5. Nigel Smith

    Suddenly Error 1116 Decides to Happen

    "error -1116 (can't access network volume)". Since it happened part-way through, it looks like the network connection dropped rather than a permissions thing. Check both PC and NAS for energy-saver-type settings, logs for restarts/sleep events, switches/hubs for reboots, etc. What else was on the network, and busy, during the backup period? Looks like a Netgear NAS, so 1GE unless it's a pretty old model, but we're only seeing 100Mb/s across the network -- something just doesn't feel right. Perhaps try a direct ethernet connection between the PC and NAS, if only to get the initial backup completed cleanly.
  6. Nigel Smith

    Errors when laptop not connected

    The problem of having a "nightly" script and a "pro-active" script backing up to the same set is that only one can write to that set, blocking the other while it is running. While David has some suggestions above, may I offer another? Move *all* your systems onto the "pro-active" script! Schedule it to run during your overnight window. Set it to back up sources every 22 hours or so (roughly 24 hrs - time taken to back up all systems) so it only backs up each once. When it kicks off it will start polling the network for client availability, starting with the one least-recently backed up. Each system in turn will be backed up if present, or skipped for a while (I think the default is 30 minutes) then checked for again -- meanwhile the script continues with the other clients. It's not good if you need to back things up in a certain order, or if you need to hit a certain time (eg quiescing databases first), but it's great to make sure that "irregular" clients get backed up if available and that those "most in need" get priority. AFAIK, with two backup sets listed *and available* the above would alternate between them nightly, but things may have changed in more recent RS versions.
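    The "least recently backed up first" ordering is easy to picture with a toy example. This sketch only illustrates the polling order described above -- the client names and epoch timestamps are invented, and nothing here talks to Retrospect:

```shell
# Hypothetical table: client name, last-backup time as a Unix epoch
cat > /tmp/last_backup.txt <<'EOF'
laptop-anne 1700000000
desktop-bob 1690000000
laptop-carl 1695000000
EOF

# Proactive-style priority: oldest backup first
sort -k2,2n /tmp/last_backup.txt | awk '{ print $1 }'
```

    desktop-bob, with the oldest backup, gets polled first; a client that isn't on the network is simply skipped and re-queued for a later retry while the script moves on.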
  7. Nigel Smith

    Grooming Policy Too Simplistic

    That would be complex, fraught with error, and have huge potential for unexpected data loss. It sounds like you've either under-specced your target drive, have too long a retention period, or have a huge amount of data churn. First two are easy enough to sort and, for the last, do you really need to back up all that data? We have a policy here that if data is transient or can easily be regenerated it should not be backed up. Maybe you could do the same, either by storing it all in a directory that doesn't get backed up or by using rules that match your workflows and requirements to automatically reduce the amount of data you're scraping. Whilst it would be nice to back up everything and keep it forever, resources often dictate otherwise. So you'll have to find a balance that you (and/or any Compliance Officer you may answer to!) can live with.
  8. Nigel Smith

    Files backed up again

    What's the Synology volume formatted as? (btrfs can do funky things with metadata, as mentioned in a previous thread.) Did you make any service changes on the Synology between the backups? Are the Synology and Retrospect clocks in sync and in the same time zone? (Only thinking of that because we in the UK have just switched from BST, which often caused problems in the past 😉 ) AFAIK, we don't have the equivalent of RS Windows's "Preview" when backing up. You might be able to tell after the fact by restoring the same file from both snapshots and comparing them and their metadata. Or wind the log levels up to max and sort through all the dross for some information gems -- you'll want to define a "Favo(u)rite" with just a few files for that, though! Terminal?

        find . -ctime -1s

    ...will find "future modified" files (I think -- tricky to test!) in the current working directory and deeper. "-ctime" is the most inclusive "change", including file modification, permissions and ownership. What do you then want to do? If it's just "set modification time to now" then

        find . -ctime -1s -exec touch {} \;

    ...should do the job. That's working with the Synology mounted on your Mac. If you want to do it directly on the Synology, which is probably faster/better but assumes you can ssh in, then this should work with its "idiosyncratic" version of *nix:

        touch timeStamp.txt; find . -newer timeStamp.txt -exec touch {} \;

    ...where we're creating a file with a mod time of "now", finding all files newer than that (ie timestamped in the future), and setting their mod times to "now". All the above works on my setup here but, as always, you should test on something you can easily restore if it all goes pear-shaped...
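    Before running the touch fix above over real data, you can rehearse it in a scratch directory -- deliberately future-date a file and confirm the find expression catches it. A hedged sketch using GNU touch's `-d` date strings (BSD/macOS touch would need `-t` with an explicit timestamp instead):

```shell
# Work in a throwaway directory
dir=$(mktemp -d)
cd "$dir"

# Stamp a file one day into the future (GNU touch syntax)
touch -d 'tomorrow' future.txt

# The timeStamp.txt trick from above: anything "newer than now" is suspect
touch timeStamp.txt
find . -newer timeStamp.txt -type f ! -name timeStamp.txt
```

    Only future.txt should be listed; once you're happy with what the expression matches, add the `-exec touch {} \;` to actually reset the timestamps.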
  9. Nigel Smith

    Retrospect for Windows cannot start

    It's *probably* a corrupt Retrospect configuration file (see eg here for the same). I believe the latest Windows version is still using "Config77.dat", found in "C:\ProgramData\Retrospect" (Windows users feel free to chime in!). Either restore that from a backup or delete, launch RS, and reconfigure. Worth noting that you'll also find "assert_log" and "operations_log" in that same directory -- these can be opened with a text editor, so you can access the log files when RS won't launch.
  10. What shares do you have on the NAS? I notice that you're connecting to "admin" but "Add Favorites" is showing "admin-1". Have you also mounted the share on the RS-hosting Mac? Try "ls -al /Volumes" in the Terminal -- there may be some multiple-mounting confusion happening. If it is mounted on the Mac already, do you get a listing if you go to "Add Favorites" on the mounted volume rather than via "Shares"? Check that the username/password you are using actually allows access to the files and folders, and not just to the share itself!
  11. Seems likely you hit the nail on the head. Just for fun, what happens to times if you run a second backup of the Synology on the same day? I don't know if an RS scan would be considered "file access", but it's possible that you are triggering a metadata update across all the files (see here) which is slowing things down.
  12. Nigel Smith

    Slow closing of clients after backup

    Probably something on the server is slowing things down -- other processes using the disk, disk errors slowing down write times, potentially problems with the Retrospect catalogs... Check whatever Windows's equivalent of the System Log is for errors, try running a process monitor to see if you can spot other active (especially disk I/O) processes, run a backup manually to the same set with a single client at a different time to see if it is something scheduled, and try a new backup set to eliminate catalog problems.
  13. The "Sleep" slider has been eliminated because they've tied computer-sleep and display-sleep together -- the default is that, when your display sleeps your computer does too. But below that is a "Prevent computer from sleeping..." checkbox and, with that ticked, the display will turn off but the computer remain awake. That's how I leave my work iMac so I can access it from home of an evening. That's how it *should* happen, anyway. Maybe you've got some borked settings from the "upgrade"? Check the "Power" section of a System Information report, "System Power Settings"->"AC Power", and the "System Sleep Timer" should be set to "0" (ie never) or the same as "Display Sleep". Try leaving the MBP's Console running overnight, then searching for "sleep" in the morning -- if it is sleeping you should see some "prepareForSleep" messages from the apsd process. It's a shame if the client no longer keeps the Mac awake during backup but, in your case at least, it should be an easy workround. And if your Mac isn't honouring your Energy Saver settings, try an SMC reset.
  14. Nigel Smith

    Can retrospect recover after error.

    Saw that after -- I'll answer there. Agreed, but that can be very time consuming -- and very expensive in some situations (eg cloud sets). Since you're doing backups anyway it's worth taking the extra time to back up Retrospect's catalogs and settings too, ready for the inevitable failure of the server. 3-2-1 ain't just for your clients! 😉
  15. "Synology" and "QNAP" covers an *awful* lot of units of varying spec. Can you be more precise (model, disks, cache, filesystem, networking, etc)?
  16. That sounds as if you are using your Synology as your backup target, connecting your Retrospect server to it via Minio (running on the Synology). In which case you might be making things more complicated than you need to! Mount the Synology on the Retrospect server machine as an AFP or SMB share. Set that share as your backup destination. No need for Minio at all. Either way, you'll have a potential bottleneck in that the server's network connection handles roughly twice the load of the other parts of the system -- both incoming from the client and outgoing to the Synology -- just as you surmised. That may not be a problem -- client read/server write speeds may mean your network never saturates -- but, if it is and it means you exceed your backup window, it's easily solved with a second network interface on the Mini (eg a Thunderbolt Ethernet adapter), using one for the clients and the other for the connection to the Synology, or even an upgrade to 10Gb. If you explain what you have (client numbers, amount of data) and what you're trying to achieve, we might be able to suggest ways of doing it.
  17. Nigel Smith

    Can retrospect recover after error.

    What change is that, then? Sorry for the hijack, Trevor. Whilst the correct answer is probably "It depends...", interrupting the backup is something you want to avoid if possible[1]. A reboot may be OK, assuming Windows lets Retrospect close down gracefully first, but a power cut or similar could leave the catalog in a strange state. So make sure you are backing up your catalog as well, so you can restore it to a previous "good" version in case of trouble (a good practice, regardless of situation). But, hopefully, you'll rarely have a problem. Once you get that initial, time-consuming, full backup done you can carry on with incrementals, which should complete much faster. [1] I don't like to contradict David, but I think you have a different situation to his. In his case it is the client that is interrupting the process while the server continues "unharmed"; in yours you'd effectively be killing the server halfway through the process...
  18. Awesome -- hopefully a tale that'll get less frustrating and more amusing to you with repeated telling... Can you "keypress" your way past the dialog? I don't know how consistent Windows dialogs are compared to Macs, but you might be able to e.g. tab-return to "press" the next button. Does an external monitor work in recovery mode -- maybe try the one from your desktop? Wouldn't that all be done by the Retrospect restore? Honest question -- we only back up User folders here, partly to save on time/storage but mainly because we treat a "full restore" as an opportunity to clear out years of accumulated cruft in system/application folders by re-installing from scratch. I know this is all by-the-by now the actual problem has become apparent, but it's good to know the limits of a backup system *before* you run into them!
  19. So, since you can boot from a Windows image... Boot from a Windows installer. Reformat your drive to how you had it before. Reinstall Windows, update drivers, etc, etc. Create a Retrospect Recovery disk. Boot from the newly created RS Recovery disk, restore with Retrospect. The Windows image is a basic system -- I'd thought that the RS Recovery you tried to create is a "bootable clone" of *your* system, complete with your hardware's drivers etc (hence the need for a "dissimilar hardware" add-on), though I'll defer to mbennett's experience. Either way, you should create a Recovery Disk *before* you need it, and check that it works! That used to be a standard part of the "new computer experience", both PC and Mac, though in these days of internet booting and fast downloads of images/drivers/etc it seems to have fallen by the wayside (and I include myself amongst those who seldom bother anymore...).
  20. The "supported" way is to use MDM profiles -- but that involves enrolling the devices into MDM, etc. Der Flounder's page here is a good starting point for info, and visit Jamf for more MDM goodness. (Note: I've not used MDM myself, bar a bit of a play.) AFAIK, the TCC (Transparency, Consent and Control) database is read-only protected by SIP -- indeed the only command available in tccutil is "reset". Carl Ashley's TCC Roundup is a good primer, see also other pages on Der Flounder's site and these results from the Eclectic Light Co. So I think that, absent MDM, hitting every station is your only option. You might be able to push an Applescript that uses GUI interaction to automate things a bit while you're connected via ARD, but I can see that being highly error prone... But even something as simple as: tell application "System Preferences" activate reveal anchor "Privacy_AllFiles" of pane id "com.apple.preference.security" authorize pane id "com.apple.preference.security" end ...could save you a lot of mousing. You might even be able to wrap it in "osascript -e ..." and use ARD's Send Unix Command, though you'll have to be controlling the machine with ARD at the time.
  21. The recovery disk should have been created *before* your laptop went south, not after (in the same way that a new OEM PC will often throw up a "Now create a recovery disk" prompt after first configuration). It's probable that your recovery disk, built using your borked system, will only boot as far as your borked system would... While the manual does "strongly recommend" you create the disk "as soon as possible", it could be more explicit about doing so before it might be needed. I think you'll need to find another method -- perhaps pull the disk, mount it and reformat on another PC, full restore to it there and then put it back in the laptop? But I'm a Mac guy, so will defer to those more knowledgeable...
  22. Sample contents? I've got similar -- probably an older client giving a slightly different naming scheme -- looking like tight clusters around a single event per file, and every line refers to problems with "...ExcludeList...". Same backup set all through, no rebuilds. Nothing recent so I can't check logs properly -- but I have a "Private Backup Server" address in the client Preferences "Advanced" tab, and the logs feel like they correspond to server restarts or network downtimes. Maybe an error from a "phone home" function? I'd guess that going to the client Preferences "Advanced" tab and setting logging to "Off" would quiet things down, if they're annoying you.
  23. Exactly my point. We do all this, it's much easier than re-adding a client to lots of scripts -- but it would be *even easier* if you could say "This new client? It's actually that old client re-done", kind of like "Locate" does with client IP address, whereupon the client would "inherit" the old Favo(u)rite definitions (assuming absolute path remains the same), tags, etc without any work from us. And, especially, a seamless series of snapshots across old and new clients. But snapshots may be one of the practical/security reasons why this seemingly good idea really, *really*, isn't...
  24. Oh, but he does! Being able to tell RS that "this new client is that old client, only reinstalled" would be useful. Not having to re-define Favourite Folders, re-do Tags, etc, would be great. But I get the feeling that there are practical, and probably security, reasons why this can't/shouldn't be done else we would have had the feature ages ago. But, in the meantime, Tags are a time-saving feature that anyone who isn't using should take a good look at.
  25. Tags. If you set your scripts to use tags to determine what to back up, you only have to set a new/replaced client's tags once and it will be picked up by all appropriate scripts. And as David said, you shouldn't get a full backup unless that's part of the script's definition -- "Match only files in same location/path" may be the culprit here.