Everything posted by Nigel Smith

  1. Glad it's working. I'm still going to blame the Win Server rather than RS -- for no better reason than bitter experiences with Windows servers 😉. A good test, next time it happens, would be to request the server be restarted without you doing anything to the RS installation.
  2. I didn't want to mention kswisher's work without some checks of my own -- there's even more chance of undocumented behaviours being broken by "updates" than the documented ones! Some quick tests suggest this is still valid, so "Folder Mac Path is like */Users/*/Documents/" will select the Documents folder of every user account on a machine.
     Note that "*" is "greedy", so "*/Users/*/Xcode/" will match /Users/nigel/Documents/Programming/SuperSecretStuff/Personal/ButNotHidden/Xcode/.
     Given the lack of official documentation you should test, test, and test again. While there's no preview, I do this by running the backup then using the "Past Backups" pane to "Browse" with "Only show files copied during this backup" checked. But you should still be able to do it the way I used to -- create a new media set, back up an entire client to that set, then do a "Restore" using "Search for files...", select your rule(s) and the backup set, then the destination. The "Select Backups" step will let you preview the results. When you're doing a lot of fiddling to get a rule right, this can be much quicker than repeated backup attempts (and there's a lot less impact on the client!).
     Also note that Rules don't reduce scan time -- every file on a (RS-defined) volume is scanned/tested, there are no "don't even look in this folder" shortcuts. The only way to do that is via the RS Client's "Privacy" settings.
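     If you want an offline feel for that greediness before running any backups, bash's "*" behaves in much the same way -- RS's "is like" matcher isn't bash, so treat this purely as an illustration, and the paths are made up:
         # Illustration only: bash pattern matching is similarly greedy to RS's "*",
         # so you can sanity-check how broad a folder pattern really is.
         pattern="*/Users/*/Xcode/"
         for p in "/Users/nigel/Xcode/" \
                  "/Users/nigel/Documents/Programming/SuperSecretStuff/Personal/ButNotHidden/Xcode/"; do
             [[ "$p" == $pattern ]] && echo "matches:  $p" || echo "no match: $p"
         done
         # Both lines print "matches" -- the second "*" happily swallows every intermediate folder.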
  3. Can you screenshot your "failing" rule setup? Also, be careful how you "embed" rules -- a saved exclusion rule goes in the "Includes" section when you embed it (as you've done above with the built-in), and IIRC multiple exclusion rules should be "Any"ed. As David says, use trailing slashes on your Mac paths -- they're implied by the documentation and, even if not strictly necessary, they prevent a false match with eg "/Users_Important_Files". And no, there's no documented wildcarding. There is an "Is like" match option, but I don't think anyone knows how -- or even if -- it works! See p183 of the User Guide... sorry, p177 -- as much as I like to complain about the documentation, this is something they did include (albeit in a "blink and you'll miss it" way). It should certainly be more obvious, eg a tooltip in the UI.
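     And a similarly hedged illustration of why the trailing slash matters -- bash's "*" standing in for RS's again, and the path is invented:
         p="/Users_Important_Files/report.doc"
         [[ "$p" == */Users*  ]] && echo '"*/Users" matches -- false positive'
         [[ "$p" == */Users/* ]] || echo '"*/Users/" does not match -- the slash saves you'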
  4. I'll let you into a secret -- if I was your IT team I would have probably said "No, there are no characters blocked by Acronis <tippy-tappy-type-fix-config> so try again and see what happens". 😉 More seriously, was there a restart of the backup machine between not working and working? I'm wondering if there might have been a freaky AFP cache problem or multiple mounts of the same share, either of which could be caused by disconnect/recovery and wouldn't be obvious unless you listed /Volumes.
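     If it does recur, a quick look from Terminal on the backup machine will show whether you've ended up with duplicate mounts -- share names below are just placeholders:
         ls -l /Volumes            # a second mount of the same share shows up as "ShareName-1"
         mount | grep -i afp       # lists every AFP mount and which server it came from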
  5. Repeat the test I did with your setup, with test files on both the Windows file server and any old Mac -- limit it to a single folder for speed 😉 Then download the RS17 trial onto a newer-OS Mac and repeat the tests from both servers, once using AFP mounting then again using SMB. You're hoping for both old and new RSs to succeed with the Mac file server, for both old and new RSs to fail with the Win server over AFP, and the new RS to succeed with the Win server over SMB -- that'll pretty much point the finger at the Win Server and/or Acronis, putting the ball firmly in IT's court for further troubleshooting.
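     For the AFP-vs-SMB half of that you don't even need to touch Finder -- from Terminal on the test Mac (server and share names are placeholders, obviously):
         open 'afp://fileserver.example.com/TestShare'    # mounts over AFP, appears under /Volumes
         # ...run the RS test, eject the share, then repeat over SMB:
         open 'smb://fileserver.example.com/TestShare'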
  6. Logs show a Fast Catalog Rebuild and, IIRC, that can only be used on Disk Media Sets when grooming isn't turned on. Perhaps you are falling foul of turning on grooming too late, after the (default) Fast Catalog Rebuild format has been used? Have you groomed that set previously? Are you using the set's built-in options or are you running a separate script?
  7. I hadn't tried -- but I have now, and no problems at all. Folder called "test" containing "test1.txt", "test-2.txt", and "test_3.txt", shared over AFP from 10.14.6, mounted over AFP on an RS v6.1.230 machine running 10.3.9. Different RS host OS to you, but you can easily reproduce the test with any Mac and an HFS+ formatted volume to share whole or part of.
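     If you want to reproduce it, the test folder takes seconds to knock up in Terminal on whichever HFS+ volume you share:
         mkdir test
         touch test/test1.txt test/test-2.txt test/test_3.txt
         ls test    # all three names, hyphen included, should then back up cleanly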
  8. Not until you can define that exactly 😉 Rules are smart, but also very literal. I've already explained about "last access" vs "last modified", but also consider that you could end up in a situation where only half a project can be restored, because some of the files in it were last used 729 days ago and others at 731 days.
     If you work in projects, IMO it makes more sense to manage your backups by removing projects from the SAN (to archive, obv) 2 years after they've finished -- they won't get backed up any more because they aren't there(!), and your Grooming policy will remove them from the Disk Media Set after the appropriate number of backups. No need for a rule in your backup script (and remember that, if time is important, every file on the volume has to be scanned before it can be excluded by the rule), and no need to run a separate Grooming script.
     If you still want to use a date-based rule, your first job is to test all the software that you use to access your SAN-stored files and find out whether "last access" will be honoured. Without that you won't be able to back up files that are read but not modified within your date window.
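     A quick way to see whether your applications honour "last access" is to watch the timestamps directly -- the paths below are just examples:
         stat -f 'accessed: %Sa   modified: %Sm   %N' /path/to/somefile
         # open the file read-only in the application, run stat again, and see whether
         # "accessed" moved while "modified" stayed put
         find /Volumes/SAN -atime +730 | head    # what a "not accessed in ~2 years" rule would catch
         find /Volumes/SAN -mtime +730 | head    # vs "not modified in ~2 years"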
  9. ...and... ...makes complete restores difficult unless you never have anything older than 24 months on your SAN. That's where no backup rules and an "in set" Grooming policy win out -- you can always restore your SAN to how it was up to n snapshots ago with a single operation. Using date-based Rules will mean you can only restore the files that match those criteria, and you may then have to go through other sets/tapes to find older items (which can be a real pain, takes time, and is prone to error). So you need to ask yourself "If my SAN fails and I have to replace it, do I want to restore it as it was at a certain time, or will restoring only files modified in the last... be good enough?".
  10. I wouldn't, for a few reasons:
      • I like to disaster recover to the last, best, complete state. If you select as above there could be files that were on the SAN but won't be restored -- if they aren't worth backing up, why are they still on the SAN? If they should be on the SAN, as part of an on-going project, shouldn't they be backed up even if they haven't been modified?
      • You could get round the above by restoring to the last time-point then overlaying any missing older files from previous backups -- but that's a lot of work and error-prone compared to simply restoring everything from a single snapshot.
      • You should also include files that haven't been modified in the last 24 months but have been accessed -- obvious examples are templates that you open then "Save As...", or images that you link (rather than embed) in documents. Perhaps in your case a drum loop that you import into a project? You might need that original loop, even though it's never itself modified.
      • Not all systems set an access flag, and some are way too keen and set it for things you wouldn't consider an access for your backup purposes, so you should test it very carefully.
      Given the above, I'd back up everything on the SAN and rely on my archiving process to manage the amount of data on there. I also wouldn't groom the disk set, but that's because I find it much easier to keep things straight in my head if a set has everything backed up between its start and end -- I just archive off to tape and start a new set when it gets unmanageable. YMMV, so remember that there are two ways to groom in RS:
      • As a property of the Disk Media Set, where you only set the set to retain the last n backups of a source
      • By running a separate Groom script, where you use RS's Selectors to decide what to keep/remove
      If you still want to go by file properties, a Groom script will be necessary. But given that you are considering backing up to both disk and tape I strongly recommend you look at the "Staged Backup Strategies" section (p183 of the RS17 Mac UG). Not, in your case, for speed of getting data to your tape drive but because it reduces load on your SAN and gets it back to "production speed" more quickly -- if your users work odd hours and across your backup window, they'll thank you (Hah! When does a user ever thank a sysadmin?). So I think I'd do:
      • Nightly backups of all the SAN's files to the Disk Media Set
      • Daily transfers from that to tape (means only 1 of your 3 tape sets is on site/travelling at a time, reducing risk of loss)
      • A grooming policy that balances how far back you usually go to restore a previous version of a file with the capacity of your Disk Set media
      That last is particular to your setup, remembering that you can still go back further than n backups by going to your tapes -- it'll just take longer than a Disk Media restore. So many ways to do things! Pick your poison, try it out, refine as you understand your needs more.
  11. Files Connect can enforce a filename policy. Is it possible that it's set to disallow hyphens, which Illustrator (connecting over SMB) isn't bound by? I'm pretty sure if this was a Mac/RS/AFP thing we'd have bumped into it before, but I'll try and test anyway although my v6.1 is on an older OS.
  12. I was trying -- and failing! -- to avoid RS terminology to get away from the software specifics and get oslomike to look at it more theoretically. That's difficult to do when RS uses the same terms as you'd naturally use: backups and archives 🙂 You can archive an (RS) backup. You can restore from an (RS) archive. So in practice they're shades of grey rather than black-and-white different, but it can help in creating a strategy to think of them as different:
      • Backups -- what you need for disaster recovery: disk failure, accidental deletion, building fire, etc.
      • Archives -- safe storage of things not needed now but may be needed later, or must be kept for compliance etc.
      In an ideal world you'd have all the data you ever used/created on super fast, instantly accessible, storage which you'd then back up (multiple copies, obviously!) in case of disaster. In the real world we usually can't afford to do that, so we keep the data we need, or will likely need, on the super fast storage and anything that is unlikely to be needed (or isn't needed instantly-on-demand) can be archived to cheaper, slower, or even off-line storage.
      So oslomike's plan, above, is a good backup plan (assuming the tape sets are rotated so one is always off-site!) with "accidental archiving" -- old files are somewhere on the backup tapes, and still retrievable, though there's no provision for removal of those old files from the SAN. I'd add in an actual "deliberate archiving step", which doesn't need to be anything fancy -- eg once a year, copy all projects that finished more than 18 months ago to 2 (or more!) dedicated "archive" sets, verify those by retrieving data from them, then remove that data from the SAN.
      The more business critical that archival data, the more care and copies you should take -- you might want to archive to local (slow) disk storage, to a cloud service with multi-region replication, and to tapes kept in off-site secure storage, with bonus points if you use different methods for each so you aren't tied to Retrospect for it all (for example, you could use RS or some other method to push actual files to the cloud service rather than using RS's storage format). Whether that's worth it to you is another matter, but it might give you some ideas.
  13. OS X's Samba daemon does clever things to maintain filenames on Windows shares, and you may be falling foul of an unintended consequence. Also worth checking that it actually is an ASCII hyphen and not an en-/em-dash or Unicode hyphen. What version of SMB/CIFS is the fileserver running? At what stage are you getting the -43 error? Have you tried creating a hyphen-named file using the Win Server's OS/GUI (rather than with a network client) and seeing if that can be backed up?
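      Checking the hyphen is easy from any Mac that can see the share -- dump the name as bytes and look for 0x2d (the folder path is just an example):
          ls /Volumes/WinShare/ProblemFolder | hexdump -C
          # an ASCII hyphen is the single byte "2d"; an en-dash is "e2 80 93" and a
          # Unicode non-breaking hyphen is "e2 80 91" -- either could explain the mismatch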
  14. RS can do that, just by using filters on what is backed up. Grooming is the step after that where you then remove things from your backup set so it doesn't keep growing and growing. Personally I've never used grooming and, instead, start new backup sets every year -- I do that from scratch with a new full backup of every client, but you could do it by transferring the last backups from "Old Set" to "New Set" and continuing with incrementals to "New Set". The previous year's backups are "archived" by putting the tapes into secure storage.
      No, I mean automated scheduled checks of your NAS/SAN's integrity -- eg parity checking. Your SAN can probably do that, but if you just had a bunch of external drives you'd have to do it yourself (or rely on SMART warnings, by which time it might be too late).
      If it was me, I'd keep the SAN only for "live" data -- a good, performant, SAN is an expensive way of storing old data that you only need occasional (if any) access to. I'd get a slower, cheaper, NAS and move that old data to there from the SAN -- that NAS would now be my archive. How/when that happened would be a "business rules" decision -- for example, if you worked by project you might archive the whole project when the final invoice was issued (on the theory that work stops at that point so the data is "fixed"), or if work was less structured you might archive anything that hasn't been accessed in the last 12 months. Or you may not bother at all -- it may be better to pay for extra SAN storage than to waste your (chargeable!) time on such things 😉
      There are many ways to skin this particular cat, so start with what you want to achieve, figure out the resources available to you, and go from there.
  15. You'll find all older versions here. Licensing may be an issue -- if you have a license for a newer version, and that key doesn't work with the old, you could try asking Support for a downgrade license. But if you've got a newer version, why not use that for the rebuild instead and see if that transfers the snapshots?
  16. Exactly the same (I deliberately created them with separate catalog files, to match your situation). The v6 catalog file will be ignored -- this is a "rebuild", which starts from scratch and reconstructs the backup using only the data file, rather than a "repair". Try the alternate route -- attempt a "Restore" operation on the v10 set and use the "More Backups..." button to see if they are shown. Or, simply give up 😉 Do you really need to do a "point in time" restore of a whole volume from x years ago? If not, you can likely do what you'll need by simply using filters on the whole set -- eg "all files and sub-folders in the Important Project folder last modified in 2010" then manually selecting what to restore from the results.
  17. Sorry, I didn't make my point very well (or, indeed, at all!). We often use "backup" and "archive" interchangeably, but you may find it helpful to consider them as two different things -- "backup" for recovery (disk failure, wrongly deleted files, reverting to a previous version), "archive" for long-term storage of data you want to keep. In many ways this is a false dichotomy -- you may need to keep your backups long-term (eg compliance) and you can restore files from your archive -- but it can help from a management POV to keep the two separate. Of course, being a belt-and-braces type guy, I'd then archive my backups and make sure my archive was backed up 🙂 More copies are good!
      It's really a data management/business rules thing. It helps us because we tend to work on projects (or by person) so when the project is finished (or the person leaves) all associated data can be archived, keeping it in a single place while also freeing up resources on the "live" systems. YMMV.
      It doesn't really matter whether you use DAS, NAS, a bunch of drives in a cupboard that you plug in when needed -- whatever works for you! NAS is more expensive per TB than comparable DAS because you are paying for the computer that "runs" it as well as the storage, but because it is a complete "system" you can take advantage of in-built scheduled disk-scrubbing etc rather than having to roll your own health checks on DAS. But if your RAID is a SAN it probably has those already -- in many ways, a NAS is just a poor man's SAN 😉
      But the best system is the one that works for you. Managing data is a necessity but, after a certain point, can cost more than it's worth to your business. Only you know where that point is and how best to get there.
  18. No magic -- just LaunchDaemon doing its thing. From the Engine's plist:
      <key>KeepAlive</key>
      <true/>
      So if you want to force-quit the RS Engine you'll have to unload the LaunchDaemon plist first. Untried, but
      sudo launchctl unload /Library/LaunchDaemons/com.retrospect.retroengine.plist
      ...in Terminal will probably (I'm looking at v13) do the trick.
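      And, assuming the plist path is the same on your version, reloading it afterwards should get the Engine auto-relaunching again:
          sudo launchctl load /Library/LaunchDaemons/com.retrospect.retroengine.plist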
  19. Then, tbh, you need a better system for archiving that data. As with backups (and I draw a distinction between archive and backup), you should be thinking "3-2-1" -- 3 copies on 2 different media with 1 offsite -- and, importantly, you should be rotating your archives on to new media more often than you have been. High-resilience disk-based storage is relatively cheap and you should use that for your "primary" archive copy. Don't forget to check application versions etc -- you may be near the point where you'll need to re-write old data to new formats... I wouldn't know -- and I'm not sure I'd trust anything over Retrospect for recovering RS's proprietary format (tar tapes would be a different matter). Best to avoid any problems with "3-2-1", media rotation, regular checks that you can retrieve files, and so on.
  20. Don't know about best, but what I'd try is:
      • Rebuild to "tapeset1", starting with the first tape, expecting it to fail at the end of the tape as you've described
      • Rebuild to "tapeset2", starting with the first tape, expecting it to fail...
      • Take tape 1 out of the drive!
      • Repair "tapeset2" and, when it says insert tape 1, mark tape 1 as missing and continue with tapes 2 and 3
      You've now got the most data back that you can, albeit in two sets. You could then try and combine them by copying "tapeset1" to a new "diskset1", then "tapeset2" to that "diskset1" -- "diskset1" can then be moved to your newer machine for the conversion. Your choice as to whether "diskset1" is a File Set or Removable Media Set. You could, perhaps should, copy the two tape sets to two disk sets, convert them to the new format, then combine them -- it'll take longer but might be "safer".
      I'd also start to wonder about the chances of me ever needing to go back to data from 10+ years ago and whether it's worth all this extra work! That'd be a struggle between my innate laziness and my OCD, but you may have compliance issues to satisfy.
  21. I won't claim it's best -- but it's what works for me! Meanwhile, I've run some tests -- and I am getting snapshots transferred in a v6 File Set conversion. Couple of wrinkles:
      • The File Sets were created in RS v6.1.230 rather than transferred from tape
      • The conversion was done with RS v13.5 -- the closest I have to v10 on hand
      • Only one snapshot is shown in the rebuilt set until you go to "Media Sets", select the rebuilt set, select the "Backups" tab, and use the "Retrieve..." button to show the others. Alternatively you can use the "Restore" wizard, select the single-snapshot set, and use the "More Backups..." button to show the other snapshots.
      So the snapshots are transferred. Whether that's because I'm using a different converting version of RS, a different workflow, or just delving deeper into the end result to find them -- who knows?
  22. I tend to play it safe, and create the new media set myself rather than relying on the software -- mainly because it gives me extra chances to check the name, settings, etc! Finally got to go in to work this week, so I've fired up the old v6 server and I'll do some tests wrt snapshots and the v6 set conversion.
  23. That log actually suggests that the scan is failing during the "SAR MacBook Air" operation because of a network disconnect and it then can't access the other APFS volume groups because the laptop isn't on the network -- they're a symptom, not a cause, so omitting them won't help. Solve the network problem. I'd start by defining a small "Favourite" folder, seeing if that works, then expanding. It might be a time-based issue, eg aggressive sleep settings, maybe a dodgy drive, or something else -- so I'm afraid you've some troubleshooting ahead...
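      If you suspect the sleep settings, Terminal on the laptop will tell you quickly -- and "caffeinate" lets you rule sleep out for a single test run:
          pmset -g                  # shows the current sleep/standby settings
          caffeinate -i -t 7200     # stop idle sleep for 2 hours while you test a backup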
  24. They were called "Removable Disk Backup Sets" back then. Their main advantages over "File Backup Sets" are that they can span media and can be larger than the maximum file size permitted by your OS. "Removable" sets store their catalog separately; "File" sets store the catalog in the file's resource fork until that exceeds the OS's resource fork size limit (16MB), at which time it is split out into a separate file, which is what you are seeing. Check your v6 "File" set. Does that have snapshots? It may be that you are losing them in the v6 tape->file part of the process, and not during the v10 rebuild.
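      If you're curious whether the catalog is still riding along inside the File set, you can check from Terminal -- the path below is just an example, and a multi-MB com.apple.ResourceFork entry is the embedded catalog:
          ls -l@ "/Backups/MyFileSet"                    # "-@" lists extended attributes and their sizes
          ls -l "/Backups/MyFileSet/..namedfork/rsrc"    # or check the resource fork's size directly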
  25. Your guess was perfect! IIRC from my testing, each client/volume (volume can also be a Favorite Folder) combo gets its own "container" in the Storage Group. If you'd called the new Server "FileServer" and the volume/Favorite in that "VMFS03_Helios" then RS might have carried on with the same container and not doubled up your data but, as you've seen, there's no dedup across containers. Storage Groups aren't appropriate for what you are trying to do -- they're really a way to parallelise multiple slow backup streams from different clients to a (pseudo) single set. Use a cloud-stored Disk Media Set instead. You might be able to retain continuity by migrating your Storage Set backups to the new Disk Media Set using "Copy Media Set" scripts, but you'll probably be paying ingress/egress fees on all that data all over again...