Nigel Smith


Posts posted by Nigel Smith

  1. 9 hours ago, DavidHertzberg said:

    If a user puts a "client" to sleep while a Scheduled script is backing it up, do you know if Wake-on-Lan will reawaken it if enabled for that "client" Source?

    Interesting question! I think not, because I'm guessing that WoL is part of the discovery phase only. Unless that's how they did the fix you mentioned way back when?

    And sleep is only for security if you've got "require password on wake" set. IMO it's more about energy saving, extending component life (not just disks) etc -- especially when moving laptops around (I get so annoyed when I see people walking up and down stairs at work with laptop open and running -- and also get a desperate urge to trip them up, just to see what happens!).

  2. 18 hours ago, Gintzler said:

    Have you guys ever gotten that a source is busy and nothing will kick it out of Busy mode?

    Client? Often -- eg when a backup is in progress and the user sleeps their machine, pulls the network plug, etc. The client remains "in use" so the server can't back it up. Either restart the client machine or (if your RS client permissions allow) turn the client off and on again (Option-click on the client's "Off" button to fully kill the client process).

  3. On 2/10/2021 at 8:50 PM, DavidHertzberg said:

    I'm surprised at you! 🙄  Don't you realize that the whole idea of modern  (since low-cost large-capacity HDDs were introduced in the 2000s; previously it was that tape drives were expensive) client-server backup is based on the principle of not trusting ordinary users to be responsible for backing up their own machines? 



    Prepare to be picked up by the fearsome Retrospect Support Police and be taken to a backup administrator re-education camp.🤣

    In my defence, your honour...

    My personal preference is to lock things down because I don't trust ordinary users. But there are some who cause so much aggravation that, for the sake of my own sanity, they get the "This is why you shouldn't do that..." speech during which it is made clear that if they do do that then it is completely on their own head when they do it wrong. And I've got the email trail to prove it...

    Plus, in this case it's jethro who wants to choose his backup times -- and I'm sure he can be trusted to do it right! And if he doesn't and it all goes pants, I've got the forum screenshots to prove that it wasn't my fault, your worship 😉

  4. 5 hours ago, DavidHertzberg said:

    That enhancement isn't in Retrospect Mac 17, which means the only way you can currently do Remote Backups using a Mac "backup server" is with Proactive scripts

    Totally untested, but you might be able to spoof what jethro describes by having a 24hr/day Proactive script with a large "days between backups" setting -- RS wouldn't back up the client whenever that client contacted it, only when the user used "Backup Now". That setting would, of course, apply to all clients in that script, which would be all those tagged for Remote Backup.

    I'd question why you'd want to do that though! Far simpler to just set things up as normal for Remote Backups, and if you only wanted those to happen at certain user-decided times (perhaps they always want to wait until they've finished both their work and their evening's Netflix viewing before clogging their network connection with RS traffic) allow them to turn their client on and off as it suits them.

  5. 18 hours ago, AlexB said:

    I still am not getting multiples to work - when I do this (see second image), the Downloads folder now gets backed up, telling me the 2nd rule is not being applied (see third image for 2nd rule).

    If you can't merge multiple saved rules (I'll test later), try chaining them:

    Rule A --  Include "Saved rule includes All Files Except Cache Files", Exclude "Folder name is Temp"
    Rule B -- Include "Saved rule includes Rule A", Exclude "Folder name is Downloads"
    Rule C -- Include "Saved rule includes Rule B", Exclude "Folder name is .dropbox"

    Obviously each Exclude could contain as many clauses as you like.

    So a run using Rule C, above, would exclude ".dropbox", "Downloads", "Temp", and all cache files.
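    Totally outside Retrospect, but the chaining above can be sketched as nested predicates -- a hypothetical Python illustration of the logic only, not how RS implements its rule engine:

```python
# Hypothetical sketch of chained include/exclude rules.
# Each rule includes whatever its parent rule includes, minus its own exclusions.

def rule_a(path):
    """All files except cache files, excluding folders named 'Temp'."""
    parts = path.split("/")
    return "Temp" not in parts and not path.endswith(".cache")

def rule_b(path):
    """Whatever Rule A includes, excluding 'Downloads' folders."""
    return rule_a(path) and "Downloads" not in path.split("/")

def rule_c(path):
    """Whatever Rule B includes, excluding '.dropbox' folders."""
    return rule_b(path) and ".dropbox" not in path.split("/")

# A run using Rule C excludes Temp, Downloads, .dropbox, and cache files:
print(rule_c("/Users/nigel/Documents/report.txt"))  # True
print(rule_c("/Users/nigel/Downloads/movie.mp4"))   # False
print(rule_c("/Users/nigel/Temp/scratch.txt"))      # False
```

    Same idea: each "outer" rule can only ever narrow what the inner one allows.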

  6. Check by using the RS Console to "browse" the mounted volume. If you can, and given that your backups are working, you can consider it a spurious alert message.

    (Full Disk Access -- SystemPolicyAllFiles -- includes SystemPolicyRemovableVolumes so Engine and Instant should be OK.)

    My favourite quote about FDA/F&F is "Currently, this system is so complex that it appears unpredictable." I guess that extends to applications' error messages, too 😉 

  7. Everything David says. And I'd add that:

    1. Most VPN servers don't allow "network discovery", either Bonjour (like you'd use to list available printers etc) or Retrospect's version, between subnets.
    2. Remote Backup is a lot more flexible in that the client wouldn't need to be on the VPN to be backed up. That also reduces the load on your VPN server, helping the people that need to use it.
    3. If the use of VPN is a requirement, eg compliance issues, you can actually use Remote Backup through, and only through, your VPN. Otherwise you'll have to open the appropriate ports on the server to the Internet (probably including port-forwarding on the WatchGuard).
    4. Most home connections are slow. Get initial backups done while the machines are directly connected to the "work" network, then use incrementals when the machine is out of the office. In your situation you could try a transfer from your current backups to a new, Storage Groups based, set (I've not tried this myself, so don't know if you can). RS will do this switch to incrementals seamlessly, just as it would usually.
    5. There's no deduplication across different volumes in Storage Groups, so you may have to allow for more space on your target media.
    6. Deffo upgrade to v17!

  8. 1 hour ago, boomcha said:

    I'm given RS a call to see what they say, and they are recommending upgrading to 17.

    That isn't exactly a surprise 🙂 But there's a few more things you can try first (I'd still recommend upgrading to v17 eventually, for better compatibility).

    1 hour ago, boomcha said:

    This also happens on external HFS+ drives on another client as well consistently as well. That one is running Sierra but same version

    So that is completely different -- an external drive rather than a system disk, HFS+ rather than APFS, Sierra rather than Mojave? Sounds like the only constants are the server and the client version, yet it isn't happening with every client...

    What are the commonalities/differences between the clients for which this happens and the ones that don't? Don't just look at software versions, but also what's installed and what is or isn't enabled eg FileVault, sleep options.

    Give the v15.5 client a go, even if you haven't a spare test machine. If it doesn't work you can simply uninstall it, re-install the v14 version, and re-register the machine with your server.

    And ultimately -- if it is always "first attempt fails, second attempt works" as you describe... Simply schedule a "primer" script to hit the troublesome systems before your real script 😉 You could even do it with a rule that excluded everything -- all files would be scanned, priming the target for the real script, but little or no space would be needed on your backup target.

  9. 2 hours ago, ByTheC said:

    and then used the UI button about 3/4ths of the way to the right above the main window that is labeled Groom

    I had completely forgotten about that button! Probably because it's always dimmed for me...

    2 hours ago, ByTheC said:

    The system disk is not my backup target, instead this backup goes to a Synology NAS

    Ah... Grooming a set stored on a NAS is an absolute shedload of network traffic, especially with storage-optimised grooming. See the "Grooming Tips" page, section 11. Do everything you can to minimise problems, ideally (if possible) setting up a short-cabled direct connection from your Mac to the NAS -- you may want to use different interfaces on both machines and hard-code the IP addresses if you normally use DHCP. And make sure there are no scheduled processes on either machine that might interfere.

    Since the Synology checks OK, try copying the "bad" files to your Mac -- if that works and the "bytes" size shows the same in Finder's Get Info, it's even more likely that there was a network glitch. Experience has shown that RS is a lot more sensitive to such things than eg a Finder copy, so a NAS connection that seems OK in normal use can be too flakey for a "big" RS operation.
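    If you want a stronger check than Finder's byte count, checksums will confirm the two copies are byte-for-byte identical -- a quick sketch in standard Python, nothing Retrospect-specific (the paths in the comment are made up):

```python
import hashlib

def sha256(path, chunk=1024 * 1024):
    """Hash a file in chunks so large media members don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Identical digests mean the NAS copy and the local copy match exactly;
# a mismatch points at a transfer problem rather than the source file, eg:
# print(sha256("/Volumes/NAS/MediaSet/1-Member") == sha256("/tmp/1-Member"))
```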

  10. 1 hour ago, boomcha said:

    Hey Nigel, I did not have Instant Scan turned on because of how much of resource hog it is usually but I will test it with it turned on

    I leave it off myself, for that very reason. I asked because it may have resulted in the client mistakenly using an "old" file listing from the last backup (so nothing needs to be backed up because you already have). If not on, that's probably not the issue. Also, Instant Scan isn't compatible with APFS.

    1 hour ago, boomcha said:

    "jorge" is my user on the client machine FYI.

    So "jorge" is your user folder and you've defined that as a Favourite in RS? What state is the client machine at when the backup fails -- logged out, logged in but asleep, logged in but on screen lock, logged in and in use? -- and when it succeeds?


    1 hour ago, boomcha said:

    OS Client machine: Mojave  using RS 14.6 Client (APFS format)

    Certification for Mojave only arrived with RS v15.5. If you've a Mojave test system handy you could try downloading the 15.5 client, installing it, and seeing if your v14 server can still back it up and, if so, if there's any improvement. Otherwise, it's worth noting that even if your server is limited to High Sierra, you can still upgrade to RS v17.

  11. The logs are saying something different -- that the remote client is contacted/scanned just fine but no files are found that need to be backed up, while the local volume does have files to be backed up and they are.

    So it's a problem with the remote source, and you need to include OS version, RS client version, and disk format of that. Also whether "jorge" is the system volume, a Favourite Folder you've defined in RS, or another volume (icons may not have pasted into your post). It doesn't look like you are using any Rules, but check just in case.

    I would have guessed at an issue with Instant Scan but, on my system at least, use of that is included in the logs...

  12. On 2/3/2021 at 5:37 PM, ByTheC said:

    The User's Guide makes it sound like Fast Catalog Rebuild is a software toggle of some kind, however my UI doesn't reflect this term at all under Media Set/Options tab, even as a part of the UI that is greyed out/not changeable (but at least shown).

    As of RS v13(? -- David will know) Fast Catalog rebuild is always "on" for Disk Media Sets unless you enable grooming, and then it's "off". In v17, and maybe before, it isn't even shown as an option, but I suspect the UI took time to catch up with the behavioural change and they disabled the option rather than making a new layout.

    Which is why I was asking if you'd enabled grooming after the rebuild. It may just be that the logs are mis-reporting on that line.

    What I don't understand is your first sentence:

    On 1/31/2021 at 7:28 PM, ByTheC said:

    My backup media is getting quite full so I attempted to groom previous backups in order to free up space.

    As I understand it, grooming via Media Set options is either off, to a set number of backups, or to a defined policy -- not much scope for you triggering grooming yourself. So how did you do this? That may have a bearing on the matter.

    I'd also try doing the rebuild then backing something up to that set, even just one small file, before the grooming operation.

    Other questions: How much free space do you have on the system disk? Where is the 10TB+ of data stored, and how much free space is there on that? When was the last time you disk-checked it? The rebuild log says RS is skipping a file -- check that file on your disk media; can you replace it from eg tape? Same for the files mentioned in the groom log.

    It might also be worth downloading the v17 trial, maybe on another machine, and trying the rebuild on that. If successful you might even (I haven't tried it!) be able to copy the new catalog back to the v15 machine and use it there -- you can move catalogs up versions, but I've never tried down! If you can't, but the rebuild worked, at least you'll know upgrading to v17 is one way out of your problem.

  13. 23 hours ago, Gintzler said:

    Day 2 and not a single error. Not even files with hyphens in it. Excelsior! 

    Glad it's working. I'm still going to blame the Win Server rather than RS -- for no better reason than bitter experiences with Windows servers 😉. A good test, next time it happens, would be to request the server be restarted without you doing anything to the RS installation.

  14. On 2/2/2021 at 9:54 PM, DavidHertzberg said:

    Guided by Nigel Smith's mention of "is like" in his post immediately above, I did a Forums search and struck gold in this 2010 post by kswisher

    I didn't want to mention kswisher's work without some checks of my own -- there's even more chance of undocumented behaviours being broken by "updates" than the documented ones!

    Some quick tests suggest this is still valid, so "Folder Mac Path is like */Users/*/Documents/" will select the Documents folder of every user account of a machine.

    Note that "*" is "greedy", so "*/Users/*/Xcode/" will match /Users/nigel/Documents/Programming/SuperSecretStuff/Personal/ButNotHidden/Xcode/.

    Given the lack of official documentation you should test, test, and test again.
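    For a feel of that greediness, shell-style globbing behaves the same way (Python's fnmatch shown here) -- assuming, and this is untested, that RS's "is like" works along similar lines:

```python
from fnmatch import fnmatch

# "*" is greedy and crosses folder boundaries, so one pattern matches
# an Xcode folder at any depth under any user account.
pattern = "*/Users/*/Xcode/"

print(fnmatch("/Users/nigel/Xcode/", pattern))  # True
print(fnmatch(
    "/Users/nigel/Documents/Programming/SuperSecretStuff/Personal/ButNotHidden/Xcode/",
    pattern))  # True
print(fnmatch("/Users/nigel/Xcode/Project/", pattern))  # False -- path continues past Xcode/
```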

    On 2/1/2021 at 8:49 PM, AlexB said:

    I know other people have complained about this but I remember Retrospect had a way you could preview rules - missing that.

    While there's no preview, I do this by running the backup then using the "Past Backups" pane to "Browse" with "Only show files copied during this backup" checked. But you should still be able to do it the way I used to -- create a new media set, back up an entire client to that set, then do a "Restore" using "Search for files...", select your rule(s) and the backup set, then the destination. The "Select Backups" step will allow you to preview the results. When you are doing a lot of fiddling to get a rule right, this can be a lot quicker than repeated backup attempts (and there's a lot less impact on the client!).

    Also note that Rules don't reduce scan time -- every file on a (RS-defined) volume is scanned/tested, there are no "don't even look in this folder" shortcuts. The only way to do that is via the RS Client's "Privacy" settings.

  15. 18 hours ago, AlexB said:

    But if I attempt to define them in a separate rule and then include them in a subsequent rule ("Saved Rule includes"), I can't get that to work.

    Can you screenshot your "failing" rule setup?

    Also, be careful how you "embed" rules -- a saved exclusion rule goes in the "Includes" section when you embed it (as you've done above with the built-in) and IIRC multiple exclusion rules should be "Any"ed.

    As David says, trailing slashes on your Mac paths -- implied by the documentation and even if not strictly necessary prevents a false match with eg "/Users_Important_Files" -- and no, there's no documented wildcarding. There is an "Is like" match option, but I don't think anyone knows how -- or even if -- it works!
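    The false-match risk is easy to demonstrate with a plain prefix test -- just Python string matching, to show why the slash matters:

```python
# Without the trailing slash, a sibling folder false-matches:
path = "/Users_Important_Files/secret.doc"
print(path.startswith("/Users"))   # True  -- oops, matched the wrong folder
print(path.startswith("/Users/"))  # False -- the trailing slash prevents it
```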

    21 hours ago, AlexB said:

    Then I tried creating a rule that had the All conditions with a sub (took me a while to figure out the option key magically let me do this - a note for improved documentation)

    p. 177 of the User Guide -- as much as I like to complain about the documentation, this is something they did include (albeit in a "blink and you'll miss it" way). It should certainly be more obvious, eg a tooltip in the UI.

  16. 1 hour ago, Gintzler said:

    I all of my 25 years in Prepress I've never seen anything like this!

    I'll let you into a secret -- if I was your IT team I would have probably said "No, there are no characters blocked by Acronis <tippy-tappy-type-fix-config> so try again and see what happens". 😉

    More seriously, was there a restart of the backup machine between not working and working? I'm wondering if there might have been a freaky AFP cache problem or multiple mounts of the same share, either of which could be caused by disconnect/recovery and wouldn't be obvious unless you listed /Volumes.

  17. Repeat the test I did with your setup, with test files on both the Windows file server and any old Mac -- limit it to a single folder for speed 😉

    Then download the RS17 trial onto a newer-OS Mac and repeat the tests from both servers, once using AFP mounting then again using SMB.

    You're hoping for both old and new RSs to succeed with the Mac file server, for both old and new RSs to fail with the Win server over AFP, and the new RS to succeed with the Win server over SMB -- that'll pretty much point the finger at the Win Server and/or Acronis, putting the ball firmly in IT's court for further troubleshooting.

  18. 2 hours ago, oslomike said:

    If I wanted to do that, is there a script that I can set up that will do exactly that?

    Not until you can define that exactly 😉

    Rules are smart, but also very literal. I've already explained about "last access" vs "last modified", but also consider that you will be in a situation where only half a project can be restored, because some of the files in it were last used 729 days ago and others at 731 days.

    If you work in projects, IMO it makes more sense to manage your backups by removing them from the SAN (to archive, obv) 2 years after they've finished -- they won't get backed up any more because they aren't there(!), and your Grooming policy will remove them from the Disk Media Set after the appropriate number of backups. No need for a rule in your backup script (and remember that, if time is important, every file on the volume has to be scanned before it can be excluded by the rule), no need to run a separate Grooming script.

    If you still want to use a date-based rule, your first job is to test all the software that you use to access your SAN-stored files and find out whether "last access" will be honoured. Without that you won't be able to backup files that are read but not modified within your date window.
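    That "last access vs last modified" distinction lives right in the filesystem metadata. A hypothetical sketch of the date-window test in standard Python (with the same caveat: atime is only meaningful if your filesystem and applications actually update it):

```python
import os
import time

WINDOW = 730 * 24 * 60 * 60  # roughly 24 months, in seconds

def within_window(path, now=None):
    """True if the file was modified OR accessed within the window."""
    now = time.time() if now is None else now
    st = os.stat(path)
    # st_atime is only trustworthy if the filesystem and your software
    # update it -- test this on your SAN before relying on it!
    return (now - max(st.st_mtime, st.st_atime)) < WINDOW
```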

  19. 11 hours ago, oslomike said:

    You mentioned that I should just backup everything on the SAN.


    11 hours ago, oslomike said:

    As for grooming scripts, option 2, using selectors was what I was thinking,

    ...makes complete restores difficult unless you never have anything older than 24 months on your SAN. That's where no backup rules and an "in set" Grooming policy win out -- you can always restore your SAN to how it was up to n snapshots ago with a single operation. Using date-based Rules will mean you can only restore all files that match that criteria, and may then have to go through other sets/tapes to find older items (which can be a real pain, takes time, and is prone to error).

    So you need to ask yourself "If my SAN fails and I have to replace it, do I want to restore it as it was at a certain time or will restore only files modified in the last... be good enough?".

  20. 6 hours ago, oslomike said:

    I would like to make a Disk Media backup script that will backup any file or folder on a RAID disk that has been created or modified within then last 24 months.

    I wouldn't, for a few reasons:

    1. I like to disaster recover to the last, best, complete state. If you select as above there could be files that were on the SAN but won't be restored -- if they aren't worth backing up, why are they still on the SAN? If they should be on the SAN, as part of an on-going project, shouldn't they be backed up even if they haven't been modified?
    2. You could get round the above by restoring to the last time-point then overlaying any missing older files from previous backups -- but that's a lot of work and error-prone compared to simply restoring everything from a single snapshot
    3. You should also include files that haven't been modified in the last 24 months but have been accessed -- obvious examples are templates that you open then "Save As...", or images that you link (rather than embed) in documents. Perhaps in your case a drum loop that you import into a project? You might need that original loop, even though it's never itself modified. Not all systems set an access flag, some are way too keen and set it for things you wouldn't consider an access for your backup purposes, so you should test it very carefully

    Given the above, I'd back up everything on the SAN and rely on my archiving process to manage the amount of data on there.

    I also wouldn't groom the disk set, but that's because I find it much easier to keep things straight in my head if a set has everything backed up between its start and end -- I just archive off to tape and start a new set when it gets unmanageable. YMMV, so remember that there are two ways to groom in RS:

    1. As a property of the Disk Media Set, where you only set the set to retain the last n backups of a source
    2. By running a separate Groom script, where you use RS's Selectors to decide what to keep/remove

    If you still want to go by file properties, a Groom script will be necessary. But given that you are considering backing up to both disk and tape I strongly recommend you look at the "Staged Backup Strategies" section (p. 183 of the RS17 Mac UG). Not, in your case, for speed of getting data to your tape drive but because it reduces load on your SAN and gets it back to "production speed" more quickly -- if your users work odd hours and across your backup window, they'll thank you (Hah! When does a user ever thank a sysadmin?).

    So I think I'd do:

    1. Nightly backups of all the SAN's files to the Disk Media Set
    2. Daily transfers from that to tape (means only 1 of your 3 tape sets is on site/travelling at a time, reducing risk of loss)
    3. Have a grooming policy that balanced how far back you usually go to restore a previous version of a file with the capacity of your Disk Set media

    That last is particular to your setup, remembering that you can still go back further than n backups by going to your tapes -- it'll just take longer than a Disk Media restore.

    So many ways to do things! Pick your poison, try it out, refine as you understand your needs more.

  21. 20 hours ago, Gintzler said:

    The Widows file server has Acronis File Connect

    Files Connect can enforce a filename policy. Is it possible that it's set to disallow hyphens, which Illustrator (connecting over SMB) isn't bound by?

    I'm pretty sure if this was a Mac/RS/AFP thing we'd have bumped into it before, but I'll try and test anyway although my v6.1 is on an older OS.

  22. On 1/22/2021 at 5:41 AM, DavidHertzberg said:

    Let's get our terminology in sync with Retrospect's terminology...

    I was trying -- and failing! -- to avoid RS terminology to get away from the software specifics and get oslomike to look at it more theoretically. That's difficult to do when RS uses the same terms as you'd naturally use: backups and archives 🙂

    You can archive an (RS) backup. You can restore from an (RS) archive. So in practice they're shades of grey rather than black-and-white different, but it can help in creating a strategy to think of them as different.

    Backups -- what you need for disaster recovery -- disk failure, accidental deletion, building fire, etc.
    Archives -- safe storage of things not needed now but may be needed later, or must be kept for compliance etc.

    In an ideal world you'd have all the data you ever used/created on super fast, instantly accessible storage, which you'd then back up (multiple copies, obviously!) in case of disaster. In the real world we usually can't afford to do that, so we keep the data we need, or will likely need, on the super fast storage, and anything that is unlikely to be needed (or isn't needed instantly-on-demand) can be archived to cheaper, slower, or even off-line storage.

    So oslomike's plan, above, is a good backup plan (assuming the tape sets are rotated so one is always off-site!) with "accidental archiving" -- old files are somewhere on the backup tapes, and still retrievable, though there's no provision for removal of those old files from the SAN. I'd add in an actual "deliberate archiving step", which doesn't need to be anything fancy -- eg once a year, copy all projects that finished more than 18 months ago to 2 (or more!) dedicated "archive" sets, verify those by retrieving data from them, then remove that data from the SAN. The more business critical that archival data, the more care and copies you should take -- you might want to archive to local (slow) disk storage, to a cloud service with multi-region replication, and to tapes kept in off-site secure storage, with bonus points if you use different methods for each so you aren't tied to Retrospect for it all (for example, you could use RS or some other method to push actual files to the cloud service rather than using RS's storage format).

    Whether that's worth it to you is another matter, but it might give you some ideas.