Everything posted by Nigel Smith

  1. Interesting question! I think not, because I'm guessing that WoL is part of the discovery phase only. Unless that's how they did the fix you mentioned way back when? And sleep is only for security if you've got "require password on wake" set. IMO it's more about energy saving, extending component life (not just disks) etc -- especially when moving laptops around (I get so annoyed when I see people walking up and down stairs at work with laptop open and running -- and also get a desperate urge to trip them up, just to see what happens!).
  2. Client? Often -- eg when a backup is in progress and the user sleeps their machine, pulls the network plug, etc. The client remains "in use" so the server can't back it up. Either restart the client machine or (if your RS client permissions allow) turn the client off and on again (Option-click on the client "Off" button to fully kill the client process).
  3. In my defence, your honour... My personal preference is to lock things down because I don't trust ordinary users. But there are some who cause so much aggravation that, for the sake of my own sanity, they get the "This is why you shouldn't do that..." speech during which it is made clear that if they do do that then it is completely on their own head when they do it wrong. And I've got the email trail to prove it... Plus, in this case it's jethro who wants to choose his backup times -- and I'm sure he can be trusted to do it right! And if he doesn't and it all goes pants, I've got the forum screenshots to prove that it wasn't my fault, your worship 😉
  4. Totally untested, but you might be able to spoof what jethro describes by having a 24hr/day Proactive script with a large "days between backups" setting -- RS wouldn't back up the client whenever that client contacted it, only when the user used "Backup Now". That setting would, of course, apply to all clients in that script, which would be all those tagged for Remote Backup. I'd question why you'd want to do that though! Far simpler to just set things up as normal for Remote Backups, and if you only wanted those to happen at certain user-decided times (perhaps they always want to wait until they've finished both their work and their evening's Netflix viewing before clogging their network connection with RS traffic) allow them to turn their client on and off as it suits them.
  5. If you can't merge multiple saved rules (I'll test later), try chaining them:
     Rule A -- Include "Saved rule includes All Files Except Cache Files", Exclude "Folder name is Temp"
     Rule B -- Include "Saved rule includes Rule A", Exclude "Folder name is Downloads"
     Rule C -- Include "Saved rule includes Rule B", Exclude "Folder name is .dropbox"
     etc. Obviously each Exclude could contain as many clauses as you like. So a run using Rule C, above, would exclude ".dropbox", "Downloads", "Temp", and all cache files.
  6. Check by using the RS Console to "browse" the mounted volume. If you can, and given that your backups are working, you can consider it a spurious alert message. (Full Disk Access -- SystemPolicyAllFiles -- includes SystemPolicyRemovableVolumes, so Engine and Instant should be OK.) My favourite quote about FDA/F&F is "Currently, this system is so complex that it appears unpredictable." I guess that extends to applications' error messages, too 😉
  7. Everything David says. And I'd add that:
     Most VPN servers don't allow "network discovery" -- either Bonjour (like you'd use to list available printers etc) or Retrospect's version -- between subnets.
     Remote Backup is a lot more flexible, in that the client wouldn't need to be on the VPN to be backed up. That also reduces the load on your VPN server, helping the people who need to use it. If the use of a VPN is a requirement, eg for compliance reasons, you can actually use Remote Backup through, and only through, your VPN. Otherwise you'll have to open the appropriate ports on the server to the Internet (probably including port-forwarding on the WatchGuard) -- see the quick check below.
     Most home connections are slow. Get initial backups done while the machines are directly connected to the "work" network, then use incrementals when the machine is out of the office. RS will make the switch to incrementals seamlessly, just as it usually would.
     In your situation you could try a transfer from your current backups to a new, Storage Groups based, set (I've not tried this myself, so don't know if you can). Note there's no deduplication across different volumes in Storage Groups, so you may have to allow for more space on your target media.
     Deffo upgrade to v17!
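     If you do end up opening ports, a quick reachability test from outside the network can save a lot of head-scratching. A minimal Terminal sketch -- the hostname is hypothetical, and Retrospect communicates on port 497:

        # Retrospect uses port 497 (TCP and UDP); nc checks the TCP side.
        nc -vz backup.example.com 497    # hypothetical -- use your server's public name/IP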
  8. That isn't exactly a surprise 🙂 But there are a few more things you can try first (I'd still recommend upgrading to v17 eventually, for better compatibility).
     So that is completely different -- an external drive rather than a system disk, HFS+ rather than APFS, Sierra rather than Mojave? Sounds like the only constants are the server and the client version, yet it isn't happening with every client... What are the commonalities/differences between the clients where this happens and the ones where it doesn't? Don't just look at software versions, but also at what's installed and what is or isn't enabled, eg FileVault, sleep options.
     Give the v15.5 client a go, even if you haven't a spare test machine. If it doesn't work you can simply uninstall it, re-install the v14 version, and re-register the machine with your server.
     And ultimately -- if it is always "first attempt fails, second attempt works" as you describe... Simply schedule a "primer" script to hit the troublesome systems before your real script 😉 You could even do it with a rule that excludes everything -- all files would be scanned, priming the target for the real script, but little or no space would be needed on your backup target.
  9. I had completely forgotten about that button! Probably because it's always dimmed for me... Ah... Grooming a set stored on a NAS is an absolute shedload of network traffic, especially in storage-optimised grooming. See the "Grooming Tips" page, section 11. Do everything you can to minimise problems, ideally (if possible) setting up a short-cabled direct connection from your Mac to the NAS -- you may want to use different interfaces on both machines and hard-code the IP addresses if you normally use DHCP. And make sure there are no scheduled processes on either machine that might interfere. Since the Synology checks OK, try copying the "bad" files to your Mac -- if that works and the "bytes" size shows the same in Finder's Get Info (see the sketch below), it's even more likely that there was a network glitch. Experience has shown that RS is a lot more sensitive to such things than eg a Finder copy, so a NAS connection that seems OK in normal use can be too flakey for a "big" RS operation.
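     If you want something stricter than eyeballing Get Info, Terminal can compare byte counts and do a bit-for-bit check. A minimal sketch, assuming the usual .rdb member files -- both paths are hypothetical:

        mkdir -p ~/rdb-test
        SRC="/Volumes/Synology/Retrospect/MySet/1-MySet/AA000123.rdb"    # hypothetical "bad" file
        cp "$SRC" ~/rdb-test/
        stat -f "%z bytes  %N" "$SRC" ~/rdb-test/AA000123.rdb    # same figure as Get Info's "bytes"
        cmp "$SRC" ~/rdb-test/AA000123.rdb && echo "copies are identical"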
  10. I leave it off myself, for that very reason. I asked because it may have resulted in the client mistakenly using an "old" file listing from the last backup (so nothing needs to be backed up because you already have it). If it's not on, that's probably not the issue. Also, Instant Scan isn't compatible with APFS. So "jorge" is your user folder and you've defined that as a Favourite in RS? What state is the client machine in when the backup fails -- logged out, logged in but asleep, logged in but on screen lock, logged in and in use? -- and when it succeeds? But... Certification for Mojave only arrived with RS v15.5. If you've a Mojave test system handy you could try downloading the 15.5 client, installing it, and seeing if your v14 server can still back it up and, if so, whether there's any improvement. Otherwise, it's worth noting that even if your server is limited to High Sierra, you can still upgrade to RS v17.
  11. The logs are saying something different -- that the remote client is contacted/scanned just fine but no files are found that need to be backed up, while the local volume does have files to be backed up and they are. So it's a problem with the remote source, and you need to include the OS version, RS client version, and disk format of that machine. Also whether "jorge" is the system volume, a Favourite Folder you've defined in RS, or another volume (icons may not have pasted into your post). It doesn't look like you are using any Rules, but check just in case. I would have guessed at an issue with Instant Scan but, on my system at least, use of that is included in the logs...
  12. As of RS v13(? -- David will know) Fast Catalog Rebuild is always "on" for Disk Media Sets unless you enable grooming, and then it's "off". In v17, and maybe before, it isn't even shown as an option, but I suspect the UI took time to catch up with the behavioural change and they disabled the option rather than making a new layout. Which is why I was asking if you'd enabled grooming after the rebuild. It may just be that the logs are mis-reporting on that line.
     What I don't understand is your first sentence: as I understand it, grooming via Media Set options is either off, to a set number of backups, or to a defined policy -- not much scope for you triggering grooming yourself. So how did you do this? That may have a bearing on the matter. I'd also try doing the rebuild then backing something up to that set, even just one small file, before the grooming operation.
     Other questions: How much free space do you have on the system disk? Where is the 10TB+ of data stored, and how much free space is there on that (see below)? When was the last time you disk-checked it? The rebuild log says RS is skipping a file -- check that file on your disk media; can you replace it from eg tape? Same for the files mentioned in the groom log.
     It might also be worth downloading the v17 trial, maybe on another machine, and trying the rebuild on that. If successful you might even (I haven't tried it!) be able to copy the new catalog back to the v15 machine and use it there -- you can move catalogs up versions, but I've never tried down! If you can't, but the rebuild worked, at least you'll know upgrading to v17 is one way out of your problem.
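     For the free-space questions, Terminal is quickest (the second path is hypothetical -- use whichever volume holds your data):

        df -h /                       # free space on the system disk
        df -h /Volumes/BackupDisk     # hypothetical -- the volume with the 10TB+ of data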
  13. And do you still have "two mounted disks named prepress", as per the previous warning? Might explain it... If so, unmount them all (it may be easiest to just restart the RS Server machine), mount the volume again (I'm assuming you do that manually), check with "ls -l /Volumes" in Terminal that "prepress" is listed only once, and run the script again.
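     When a share gets mounted twice, macOS usually keeps the first name and appends a suffix on disk, so it's worth both of these checks:

        ls -l /Volumes              # look for "prepress" plus a stray "prepress-1"
        mount | grep -i prepress    # lists every currently-mounted filesystem with that name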
  14. Glad it's working. I'm still going to blame the Win Server rather than RS -- for no better reason than bitter experiences with Windows servers 😉. A good test, next time it happens, would be to request the server be restarted without you doing anything to the RS installation.
  15. I didn't want to mention kswisher's work without some checks of my own -- there's even more chance of undocumented behaviours being broken by "updates" than the documented ones! Some quick tests suggest this is still valid, so "Folder Mac Path is like */Users/*/Documents/" will select the Documents folder of every user account of a machine. Note that "*" is "greedy", so "*/Users/*/Xcode/" will match /Users/nigel/Documents/Programming/SuperSecretStuff/Personal/ButNotHidden/Xcode/ (see the sketch below).
     Given the lack of official documentation you should test, test, and test again. While there's no preview, I do this by running the backup then using the "Past Backups" pane to "Browse" with "Only show files copied during this backup" checked. But you should still be able to do it the way I used to -- create a new media set, back up an entire client to that set, then do a "Restore" using "Search for files...", select your rule(s) and the backup set, then the destination. The "Select Backups" step will allow you to preview the results. When you are doing a lot of fiddling to get a rule right, this can be a lot quicker than repeated backup attempts (and there's a lot less impact on the client!).
     Also note that Rules don't reduce scan time -- every file on a (RS-defined) volume is scanned/tested, there are no "don't even look in this folder" shortcuts. The only way to do that is via the RS Client's "Privacy" settings.
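     You can see the same "greedy" behaviour with shell glob patterns in Terminal -- an illustration of the matching semantics only, not of how RS implements "is like":

        path="/Users/nigel/Documents/Programming/SuperSecretStuff/Personal/ButNotHidden/Xcode/"
        case "$path" in
          */Users/*/Xcode/) echo "matches -- each * can swallow several path components" ;;
          *)                echo "no match" ;;
        esac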
  16. Can you screenshot your "failing" rule setup? Also, be careful how you "embed" rules -- a saved exclusion rule goes in the "Includes" section when you embed it (as you've done above with the built-in) and IIRC multiple exclusion rules should be "Any"ed. As David says, use trailing slashes on your Mac paths -- they're implied by the documentation and, even if not strictly necessary, prevent a false match with eg "/Users_Important_Files" -- and no, there's no documented wildcarding. There is an "Is like" match option, but I don't think anyone knows how -- or even if -- it works! pp177 of the User Guide -- as much as I like to complain about the documentation, this is something they did include (albeit in a "blink and you'll miss it" way). It should certainly be more obvious, eg as a tooltip in the UI.
  17. I'll let you into a secret -- if I was your IT team I would have probably said "No, there are no characters blocked by Acronis <tippy-tappy-type-fix-config> so try again and see what happens". 😉 More seriously, was there a restart of the backup machine between not working and working? I'm wondering if there might have been a freaky AFP cache problem or multiple mounts of the same share, either of which could be caused by disconnect/recovery and wouldn't be obvious unless you listed /Volumes.
  18. Repeat the test I did with your setup, with test files on both the Windows file server and any old Mac -- limit it to a single folder for speed 😉 Then download the RS17 trial onto a newer-OS Mac and repeat the tests from both servers, once using AFP mounting then again using SMB. You're hoping for both old and new RSs to succeed with the Mac file server, for both old and new RSs to fail with the Win server over AFP, and the new RS to succeed with the Win server over SMB -- that'll pretty much point the finger at the Win Server and/or Acronis, putting the ball firmly in IT's court for further troubleshooting.
  19. Logs show a Fast Catalog Rebuild and, IIRC, that can only be used on Disk Media Sets when grooming isn't turned on. Perhaps you are falling foul of turning on grooming too late, after the (default) Fast Catalog Rebuild format has been used? Have you groomed that set previously? Are you using the set's built-in options or are you running a separate script?
  20. I hadn't tried -- but I have now, and no problems at all. Folder called "test" containing "test1.txt", "test-2.txt", and "test_3.txt", shared over AFP from 10.14.6, mounted over AFP on an RS v6.1.230 machine running 10.3.9. Different RS host OS to yours, but you can easily reproduce the test with any Mac and an HFS+ formatted volume to share, whole or part of.
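     Setting up the test takes seconds in Terminal (put the folder wherever your share lives):

        mkdir -p /Users/Shared/test && cd /Users/Shared/test
        touch test1.txt test-2.txt test_3.txt
        ls -l    # confirm all three names -- hyphen and underscore included -- survive the share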
  21. Not until you can define that exactly 😉 Rules are smart, but also very literal. I've already explained about "last access" vs "last modified", but also consider that you could end up in a situation where only half a project can be restored, because some of the files in it were last used 729 days ago and others at 731 days.
     If you work in projects, IMO it makes more sense to manage your backups by removing them from the SAN (to archive, obv) 2 years after they've finished -- they won't get backed up any more because they aren't there(!), and your Grooming policy will remove them from the Disk Media Set after the appropriate number of backups. No need for a rule in your backup script (and remember that, if time is important, every file on the volume has to be scanned before it can be excluded by the rule), no need to run a separate Grooming script.
     If you still want to use a date-based rule, your first job is to test all the software that you use to access your SAN-stored files and find out whether "last access" will be honoured -- see the sketch below. Without that you won't be able to back up files that are read but not modified within your date window.
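     A minimal way to run that test from Terminal, assuming a scratch file on the SAN (hypothetical path; note that some volumes are mounted "noatime", which suppresses access-time updates entirely):

        touch /Volumes/SAN/atime-test.txt
        stat -f "accessed: %Sa  modified: %Sm" /Volumes/SAN/atime-test.txt
        # ...open the file read-only in the app you're testing, close without saving, then:
        stat -f "accessed: %Sa  modified: %Sm" /Volumes/SAN/atime-test.txt
        # If "accessed" hasn't moved, that app (or that mount) won't honour last-access dates.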
  22. ...and... ...makes complete restores difficult unless you never have anything older than 24 months on your SAN. That's where no backup rules and an "in set" Grooming policy win out -- you can always restore your SAN to how it was up to n snapshots ago with a single operation. Using date-based Rules will mean you can only restore the files that match those criteria, and you may then have to go through other sets/tapes to find older items (which can be a real pain, takes time, and is prone to error). So you need to ask yourself "If my SAN fails and I have to replace it, do I want to restore it as it was at a certain time, or will restoring only files modified in the last... be good enough?".
  23. I wouldn't, for a few reasons:
     • I like to disaster-recover to the last, best, complete state. If you select as above there could be files that were on the SAN but won't be restored -- if they aren't worth backing up, why are they still on the SAN? And if they should be on the SAN, as part of an on-going project, shouldn't they be backed up even if they haven't been modified?
     • You could get round the above by restoring to the last time-point then overlaying any missing older files from previous backups -- but that's a lot of work and error-prone compared to simply restoring everything from a single snapshot.
     • You should also include files that haven't been modified in the last 24 months but have been accessed -- obvious examples are templates that you open then "Save As...", or images that you link (rather than embed) in documents. Perhaps in your case a drum loop that you import into a project? You might need that original loop, even though it's never itself modified.
     • Not all systems set an access flag, and some are way too keen, setting it for things you wouldn't consider an access for your backup purposes -- so you should test it very carefully.
     Given the above, I'd back up everything on the SAN and rely on my archiving process to manage the amount of data on there. I also wouldn't groom the disk set, but that's because I find it much easier to keep things straight in my head if a set has everything backed up between its start and end -- I just archive off to tape and start a new set when it gets unmanageable. YMMV, so remember that there are two ways to groom in RS:
     • As a property of the Disk Media Set, where you only set the set to retain the last n backups of a source
     • By running a separate Groom script, where you use RS's Selectors to decide what to keep/remove
     If you still want to go by file properties, a Groom script will be necessary. But given that you are considering backing up to both disk and tape, I strongly recommend you look at the "Staged Backup Strategies" section (pp183 of the RS17 Mac UG). Not, in your case, for speed of getting data to your tape drive, but because it reduces load on your SAN and gets it back to "production speed" more quickly -- if your users work odd hours and across your backup window, they'll thank you (Hah! When does a user ever thank a sysadmin?). So I think I'd do:
     • Nightly backups of all the SAN's files to the Disk Media Set
     • Daily transfers from that to tape (meaning only 1 of your 3 tape sets is on-site/travelling at a time, reducing risk of loss)
     • A grooming policy that balances how far back you usually go to restore a previous version of a file against the capacity of your Disk Set media
     That last is particular to your setup, remembering that you can still go back further than n backups by going to your tapes -- it'll just take longer than a Disk Media restore. So many ways to do things! Pick your poison, try it out, refine as you understand your needs more.
  24. Files Connect can enforce a filename policy. Is it possible that it's set to disallow hyphens, which Illustrator (connecting over SMB) isn't bound by? I'm pretty sure if this was a Mac/RS/AFP thing we'd have bumped into it before, but I'll try and test anyway although my v6.1 is on an older OS.
  25. I was trying -- and failing! -- to avoid RS terminology, to get away from the software specifics and get oslomike to look at it more theoretically. That's difficult to do when RS uses the same terms as you'd naturally use: backups and archives 🙂 You can archive an (RS) backup. You can restore from an (RS) archive. So in practice they're shades of grey rather than black-and-white different, but it can help in creating a strategy to think of them as different:
     Backups -- what you need for disaster recovery: disk failure, accidental deletion, building fire, etc.
     Archives -- safe storage of things not needed now but which may be needed later, or must be kept for compliance etc.
     In an ideal world you'd have all the data you ever used/created on super fast, instantly accessible, storage which you'd then back up (multiple copies, obviously!) in case of disaster. In the real world we usually can't afford to do that, so we keep the data we need, or will likely need, on the super fast storage, and anything that is unlikely to be needed (or isn't needed instantly-on-demand) can be archived to cheaper, slower, or even off-line storage.
     So oslomike's plan, above, is a good backup plan (assuming the tape sets are rotated so one is always off-site!) with "accidental archiving" -- old files are somewhere on the backup tapes, and still retrievable, though there's no provision for removal of those old files from the SAN. I'd add in an actual, deliberate archiving step, which doesn't need to be anything fancy -- eg once a year, copy all projects that finished more than 18 months ago to 2 (or more!) dedicated "archive" sets, verify those by retrieving data from them, then remove that data from the SAN (a starting-point sketch below).
     The more business-critical that archival data, the more care you should take and the more copies you should make -- you might want to archive to local (slow) disk storage, to a cloud service with multi-region replication, and to tapes kept in off-site secure storage, with bonus points if you use different methods for each so you aren't tied to Retrospect for it all (for example, you could use RS or some other method to push actual files to the cloud service rather than using RS's storage format). Whether that's worth it to you is another matter, but it might give you some ideas.
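     For the yearly sweep, Terminal can shortlist candidate project folders. A rough sketch with a hypothetical path -- -mtime checks each folder's own modification date, not activity deep inside it, so treat the output as a list to review rather than act on blindly:

        # ~18 months is roughly 548 days; adjust to taste.
        find /Volumes/SAN/Projects -mindepth 1 -maxdepth 1 -type d -mtime +548 -print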