Everything posted by Nigel Smith

  1. I had completely forgotten about that button! Probably because it's always dimmed for me... Ah... Grooming a set stored on a NAS is an absolute shedload of network traffic, especially in storage-optimised grooming. See the "Grooming Tips" page, section 11. Do everything you can to minimise problems, ideally (if possible) setting up a short-cabled direct connection from your Mac to the NAS -- you may want to use different interfaces on both machines and hard-code the IP addresses if you normally use DHCP. And make sure there are no scheduled processes on either machine that might interfere. Since the Synology checks OK, try copying the "bad" files to your Mac -- if that works and the "bytes" size shows the same in Finder's Get Info, it's even more likely that there was a network glitch. Experience has shown that RS is a lot more sensitive to such things than eg a Finder copy, so a NAS connection that seems OK in normal use can be too flakey for a "big" RS operation.
  2. I leave it off myself, for that very reason. I asked because it may have resulted in the client mistakenly using an "old" file listing from the last backup (so nothing needs to be backed up because you already have). If not on, that's probably not the issue. Also, Instant Scan isn't compatible with APFS. So "jorge" is your user folder and you've defined that as a Favourite in RS? What state is the client machine at when the backup fails -- logged out, logged in but asleep, logged in but on screen lock, logged in and in use? -- and when it succeeds? But... Certification for Mojave only arrived with RS v15.5. If you've a Mojave test system handy you could try downloading the 15.5 client, installing it, and seeing if your v14 server can still back it up and, if so, if there's any improvement. Otherwise, it's worth noting that even if your server is limited to High Sierra, you can still upgrade to RS v17.
  3. The logs are saying something different -- that the remote client is contacted/scanned just fine but no files are found that need to be backed up, while the local volume does have files to be backed up and they are. So it's a problem with the remote source, and you need to include OS version, RS client version, and disk format of that. Also whether "jorge" is the system volume, a Favourite Folder you've defined in RS, or another volume (icons may not have pasted into your post). It doesn't look like you are using any Rules, but check just in case. I would have guessed at an issue with Instant Scan but, on my system at least, use of that is included in the logs...
  4. As of RS v13(? -- David will know) Fast Catalog rebuild is always "on" for Disk Media Sets unless you enable grooming, and then it's "off". In v17, and maybe before, it isn't even shown as an option, but I suspect the UI took time to catch up with the behavioural change and they disabled the option rather than making a new layout. Which is why I was asking if you'd enabled grooming after the rebuild. It may just be that the logs are mis-reporting on that line. What I don't understand is your first sentence: As I understand it, grooming via Media Set options is either off, to a set number of backups, or to a defined policy -- not much scope for you triggering grooming yourself. So how did you do this? That may have a bearing on the matter. I'd also try doing the rebuild then backing something up to that set, even just one small file, before the grooming operation. Other questions: How much free space do you have on the system disk? Where is the 10TB+ of data stored, and how much free space is there on that? When was the last time you disk-checked it? The rebuild log says RS is skipping a file -- check that file on your disk media; can you replace it from eg tape? Same for the files mentioned in the groom log. It might also be worth downloading the v17 trial, maybe on another machine, and trying the rebuild on that. If successful you might even (I haven't tried it!) be able to copy the new catalog back to the v15 machine and use it there -- you can move catalogs up versions, but I've never tried down! If you can't but the rebuild worked, at least you'll know upgrading to v17 is one way out of your problem.
  5. And do you still have "two mounted disks named prepress", as per the previous warning? Might explain it... If so, unmount them all (may be easiest to just restart the RS Server machine), mount the volume (I'm assuming you do that manually), check with "ls -l /Volumes" in Terminal that "prepress" is listed only once, and run the script again.
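To make the duplicate-mount check concrete, here's a minimal sketch -- the "count_mounts" helper is purely illustrative (not an RS or macOS tool), and on the RS Server Mac you'd point it at /Volumes:

```shell
# Hypothetical helper: count directory entries whose names start with the
# share name -- macOS mounts duplicates with suffixes like "prepress-1".
count_mounts() {
  ls "$1" | grep -c "^$2"
}

# On the RS Server machine you'd run:
#   count_mounts /Volumes prepress
# Anything other than "1" means duplicate mounts to clean up first.
```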
  6. Glad it's working. I'm still going to blame the Win Server rather than RS -- for no better reason than bitter experiences with Windows servers 😉. A good test, next time it happens, would be to request the server be restarted without you doing anything to the RS installation.
  7. I didn't want to mention kswisher's work without some checks of my own -- there's even more chance of undocumented behaviours being broken by "updates" than the documented ones! Some quick tests suggest this is still valid, so "Folder Mac Path is like */Users/*/Documents/" will select the Documents folder of every user account of a machine. Note that "*" is "greedy", so "*/Users/*/Xcode/" will match /Users/nigel/Documents/Programming/SuperSecretStuff/Personal/ButNotHidden/Xcode/. Given the lack of official documentation you should test, test, and test again. While there's no preview, I do this by running the backup then using the "Past Backups" pane to "Browse" with "Only show files copied during this backup" checked. But you should still be able to do it the way I used to -- create a new media set, back up an entire client to that set, then do a "Restore" using "Search for files...", select your rule(s) and the backup set, then the destination. The "Select Backups" step will allow you to preview the results. When you are doing a lot of fiddling to get a rule right, this can be a lot quicker than repeated backup attempts (and there's a lot less impact on the client!). Also note that Rules don't reduce scan time -- every file on a (RS-defined) volume is scanned/tested, there are no "don't even look in this folder" shortcuts. The only way to do that is via the RS Client's "Privacy" settings.
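Since there's no preview in RS itself, one quick way to get a feel for greedy "*" behaviour is shell pattern matching, which (an assumption on my part, given the lack of documentation) follows the same greedy-wildcard logic as the "is like" operator -- the "matches" helper below is just for illustration:

```shell
# Illustrative only: case-patterns use greedy "*" semantics, so you can
# dry-run a path pattern against sample paths before trusting a Rule.
matches() {
  case "$2" in
    $1) echo "match" ;;
    *)  echo "no match" ;;
  esac
}

matches "*/Users/*/Xcode/" "/Users/nigel/Documents/Personal/Xcode/"   # match
matches "*/Users/*/Xcode/" "/Users/nigel/Xcode"                       # no match
```

Remember this is only a sanity check of the pattern logic -- always confirm against a real backup as described above.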
  8. Can you screenshot your "failing" rule setup? Also, be careful how you "embed" rules -- a saved exclusion rule goes in the "Includes" section when you embed it (as you've done above with the built-in) and IIRC multiple exclusion rules should be "Any"ed. As David says, trailing slashes on your Mac paths -- implied by the documentation and, even if not strictly necessary, they prevent a false match with eg "/Users_Important_Files" -- and no, there's no documented wildcarding. There is an "Is like" match option, but I don't think anyone knows how -- or even if -- it works! pp177 of the User Guide -- as much as I like to complain about the documentation, this is something they did include (albeit in a "blink and you'll miss it" way). It should certainly be more obvious, eg a tooltip in the UI.
  9. I'll let you into a secret -- if I was your IT team I would have probably said "No, there are no characters blocked by Acronis <tippy-tappy-type-fix-config> so try again and see what happens". 😉 More seriously, was there a restart of the backup machine between not working and working? I'm wondering if there might have been a freaky AFP cache problem or multiple mounts of the same share, either of which could be caused by disconnect/recovery and wouldn't be obvious unless you listed /Volumes.
  10. Repeat the test I did with your setup, with test files on both the Windows file server and any old Mac -- limit it to a single folder for speed 😉 Then download the RS17 trial onto a newer-OS Mac and repeat the tests from both servers, once using AFP mounting then again using SMB. You're hoping for both old and new RSs to succeed with the Mac file server, for both old and new RSs to fail with the Win server over AFP, and the new RS to succeed with the Win server over SMB -- that'll pretty much point the finger at the Win Server and/or Acronis, putting the ball firmly in IT's court for further troubleshooting.
  11. Logs show a Fast Catalog Rebuild and, IIRC, that can only be used on Disk Media Sets when grooming isn't turned on. Perhaps you are falling foul of turning on grooming too late, after the (default) Fast Catalog Rebuild format has been used? Have you groomed that set previously? Are you using the set's builtin options or are you running a separate script?
  12. I hadn't tried -- but I have now, and no problems at all. Folder called "test" containing "test1.txt", "test-2.txt", and "test_3.txt". Shared over AFP from 10.14.6, mounted over AFP on RS v6.1.230 machine running 10.3.9: Different RS host OS to you, but you can easily reproduce the test with any Mac and an HFS+ formatted volume to share whole or part of.
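If you want to reproduce the test, the file set is trivial to create in Terminal (same names as my test; where you put the folder is up to you):

```shell
# Create the same test folder/files used above, on any HFS+ volume to share
mkdir -p test
touch "test/test1.txt" "test/test-2.txt" "test/test_3.txt"
ls test
```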
  13. Not until you can define that exactly 😉 Rules are smart, but also very literal. I've already explained about "last access" vs "last modified", but also consider that you will be in a situation where only half a project can be restored, because some of the files in it were last used 729 days ago and others at 731 days. If you work in projects, IMO it makes more sense to manage your backups by removing them from the SAN (to archive, obv) 2 years after they've finished -- they won't get backed up any more because they aren't there(!), and your Grooming policy will remove them from the Disk Media Set after the appropriate number of backups. No need for a rule in your backup script (and remember that, if time is important, every file on the volume has to be scanned before it can be excluded by the rule), no need to run a separate Grooming script. If you still want to use a date-based rule, your first job is to test all the software that you use to access your SAN-stored files and find out whether "last access" will be honoured. Without that you won't be able to back up files that are read but not modified within your date window.
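A quick way to test whether an app actually updates "last access" is to compare a file's access and modification timestamps before and after opening it in that app. A rough sketch -- "somefile" is just a placeholder name, stat's flags differ between platforms (GNU form shown, macOS equivalent in the comment), and note that volumes mounted noatime never update access times at all:

```shell
# Create a file, then show its access and modification epoch times.
touch somefile

# GNU coreutils (Linux): %X = last access, %Y = last modification
stat -c '%X %Y' somefile
# macOS/BSD equivalent:  stat -f '%a %m' somefile

# Now open the file in the application you're testing, run stat again,
# and see whether the access time changed while the modify time didn't.
```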
  14. ...and... ...makes complete restores difficult unless you never have anything older than 24 months on your SAN. That's where no backup rules and an "in set" Grooming policy win out -- you can always restore your SAN to how it was up to n snapshots ago with a single operation. Using date-based Rules will mean you can only restore the files that match those criteria, and may then have to go through other sets/tapes to find older items (which can be a real pain, takes time, and is prone to error). So you need to ask yourself "If my SAN fails and I have to replace it, do I want to restore it as it was at a certain time or will restore only files modified in the last... be good enough?".
  15. I wouldn't, for a few reasons:
  • I like to disaster recover to the last, best, complete state. If you select as above there could be files that were on the SAN but won't be restored -- if they aren't worth backing up, why are they still on the SAN? If they should be on the SAN, as part of an on-going project, shouldn't they be backed up even if they haven't been modified?
  • You could get round the above by restoring to the last time-point then overlaying any missing older files from previous backups -- but that's a lot of work and error-prone compared to simply restoring everything from a single snapshot.
  • You should also include files that haven't been modified in the last 24 months but have been accessed -- obvious examples are templates that you open then "Save As...", or images that you link (rather than embed) in documents. Perhaps in your case a drum loop that you import into a project? You might need that original loop, even though it's never itself modified.
  • Not all systems set an access flag, and some are way too keen and set it for things you wouldn't consider an access for your backup purposes, so you should test it very carefully.
Given the above, I'd back up everything on the SAN and rely on my archiving process to manage the amount of data on there. I also wouldn't groom the disk set, but that's because I find it much easier to keep things straight in my head if a set has everything backed up between its start and end -- I just archive off to tape and start a new set when it gets unmanageable. YMMV, so remember that there are two ways to groom in RS:
  • As a property of the Disk Media Set, where you can only set the set to retain the last n backups of a source
  • By running a separate Groom script, where you use RS's Selectors to decide what to keep/remove
If you still want to go by file properties, a Groom script will be necessary.
But given that you are considering backing up to both disk and tape I strongly recommend you look at the "Staged Backup Strategies" section (pp183 of the RS17 Mac UG). Not, in your case, for speed of getting data to your tape drive but because it reduces load on your SAN and gets it back to "production speed" more quickly -- if your users work odd hours and across your backup window, they'll thank you (Hah! When does a user ever thank a sysadmin?). So I think I'd do:
  • Nightly backups of all the SAN's files to the Disk Media Set
  • Daily transfers from that to tape (means only 1 of your 3 tape sets is on site/travelling at a time, reducing risk of loss)
  • A grooming policy that balances how far back you usually go to restore a previous version of a file with the capacity of your Disk Set media
That last is particular to your setup, remembering that you can still go back further than n backups by going to your tapes -- it'll just take longer than a Disk Media restore. So many ways to do things! Pick your poison, try it out, refine as you understand your needs more.
  16. Files Connect can enforce a filename policy. Is it possible that it's set to disallow hyphens, which Illustrator (connecting over SMB) isn't bound by? I'm pretty sure if this was a Mac/RS/AFP thing we'd have bumped into it before, but I'll try and test anyway although my v6.1 is on an older OS.
  17. I was trying -- and failing! -- to avoid RS terminology to get away from the software specifics and get oslomike to look at it more theoretically. That's difficult to do when RS uses the same terms as you'd naturally use: backups and archives 🙂 You can archive an (RS) backup. You can restore from an (RS) archive. So in practice they're shades of grey rather than black-and-white different, but it can help in creating a strategy to think of them as different. Backups -- what you need for disaster recovery -- disk failure, accidental deletion, building fire, etc. Archives -- safe storage of things not needed now but may be needed later, or must be kept for compliance etc. In an ideal world you'd have all the data you ever used/created on super fast, instantly accessible, storage which you'd then back up (multiple copies, obviously!) in case of disaster. In the real world we usually can't afford to do that, so we keep the data we need, or will likely need, on the super fast storage and anything that is unlikely to be needed (or isn't needed instantly-on-demand) can be archived to cheaper slower, or even off-line, storage. So oslomike's plan, above, is a good backup plan (assuming the tape sets are rotated so one is always off-site!) with "accidental archiving" -- old files are somewhere on the backup tapes, and still retrievable, though there's no provision for removal of those old files from the SAN. I'd add in an actual "deliberate archiving step", which doesn't need to be anything fancy -- eg once a year, copy all projects that finished more than 18 months ago to 2 (or more!) dedicated "archive" sets, verify those by retrieving data from them, then remove that data from the SAN. 
The more business critical that archival data, the more care and copies you should take -- you might want to archive to local (slow) disk storage, to a cloud service with multi-region replication, and to tapes kept in off-site secure storage, with bonus points if you use different methods for each so you aren't tied to Retrospect for it all (for example, you could use RS or some other method to push actual files to the cloud service rather than using RS's storage format). Whether that's worth it to you is another matter, but it might give you some ideas.
  18. OS X's Samba daemon does clever things to maintain filenames on Windows shares, and you may be falling foul of an unintended consequence. Also worth checking that it actually is an ASCII hyphen and not an en-/em-dash or Unicode hyphen. What version of SMB/CIFS is the fileserver running? At what stage are you getting the -43 error? Have you tried creating a hyphen-named file using the Win Server's OS/GUI (rather than with a network client) and seeing if that can be backed up?
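An easy way to check which "hyphen" you actually have is to dump the filename's bytes -- an ASCII hyphen is the single byte 0x2d, while an en-dash (U+2013) is the three bytes e2 80 93 in UTF-8. For example, using od (present on both macOS and Linux):

```shell
# The hyphen in an ASCII-named file shows up as the single byte "2d"
printf '%s' "test-2.txt" | od -An -tx1

# An en-dash instead shows up as the byte sequence "e2 80 93"
printf '%s' "test–2.txt" | od -An -tx1
```

Run the first command with the real filename pasted in (from Finder or the server share) and compare the bytes.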
  19. RS can do that, just by using filters on what is backed up. Grooming is the step after that where you then remove things from your backup set so it doesn't keep growing and growing. Personally I've never used grooming and, instead, start new backup sets every year -- I do that from scratch with a new full backup of every client, but you could do it by transferring the last backups from "Old Set" to "New Set" and continuing with incrementals to "New Set". The previous year's backups are "archived" by putting the tapes into secure storage. No, I mean automated scheduled checks of your NAS/SAN's integrity -- eg parity checking. Your SAN can probably do that, but if you just had a bunch of external drives you'd have to do it yourself (or rely on SMART warnings, by which time it might be too late). If it was me, I'd keep the SAN only for "live" data -- a good, performant, SAN is an expensive way of storing old data that you only need occasional (if any) access to. I'd get a slower, cheaper, NAS and move that old data to there from the SAN -- that NAS would now be my archive. How/when that happened would be a "business rules" decision -- for example, if you worked by project you might archive the whole project when the final invoice was issued (on the theory that work stops at that point so the data is "fixed"), or if work was less structured you might archive anything that hasn't been accessed in the last 12 months. Or you may not bother at all -- it may be better to pay for extra SAN storage than to waste your (chargeable!) time on such things 😉 There are many ways to skin this particular cat, so start with what you want to achieve, figure out the resources available to you, and go from there.
  20. You'll find all older versions here. Licensing may be an issue -- if you have a license for a newer version, and that key doesn't work with the old, you could try asking Support for a downgrade license. But if you've got a newer version, why not use that for the rebuild instead and see if that transfers the snapshots?
  21. Exactly the same (I deliberately created them with separate catalog files, to match your situation). The v6 catalog file will be ignored -- this is a "rebuild", which starts from scratch and reconstructs the backup using only the data file, rather than a "repair". Try the alternate route -- attempt a "Restore" operation on the v10 set and use the "More Backups..." button to see if they are shown. Or, simply give up 😉 Do you really need to do a "point in time" restore of a whole volume from x years ago? If not, you can likely do what you'll need by simply using filters on the whole set -- eg "all files and sub-folders in the Important Project folder last modified in 2010" then manually selecting what to restore from the results.
  22. Sorry, I didn't make my point very well (or, indeed, at all!). We often use "backup" and "archive" interchangeably, but you may find it helpful to consider them as two different things -- "backup" for recovery (disk failure, wrongly deleted files, reverting to a previous version), "archive" for long-term storage of data you want to keep. In many ways this is a false dichotomy -- you may need to keep your backups long-term (eg compliance) and you can restore files from your archive -- but it can help from a management POV to keep the two separate. Of course, being a belt-and-braces type guy, I'd then archive my backups and make sure my archive was backed up 🙂 More copies are good! It's really a data management/business rules thing. It helps us because we tend to work on projects (or by person) so when the project is finished (or the person leaves) all associated data can be archived, keeping it in a single place while also freeing up resources on the "live" systems. YMMV. It doesn't really matter whether you use DAS, NAS, a bunch of drives in a cupboard that you plug in when needed -- whatever works for you! NAS is more expensive per TB than comparable DAS because you are paying for the computer that "runs" it as well as the storage, but because it is a complete "system" you can take advantage of in-built scheduled disk-scrubbing etc rather than having to roll your own health checks on DAS. But if your RAID is a SAN it probably has those already -- in many ways, a NAS is just a poor man's SAN 😉 But the best system is the one that works for you. Managing data is a necessity but, after a certain point, can cost more than it's worth to your business. Only you know where that point is and how best to get there.
  23. No magic -- just LaunchDaemon doing its thing. From the Engine's plist:

        <key>KeepAlive</key>
        <true/>

So if you want to force-quit the RS Engine you'll have to unload the LaunchDaemon plist first. Untried, but:

        sudo launchctl unload /Library/LaunchDaemons/com.retrospect.retroengine.plist

...in Terminal will probably (I'm looking at v13) do the trick.
  24. Then, tbh, you need a better system for archiving that data. Like with backups (and I draw a distinction between archive and backup), you should be thinking "3-2-1" -- 3 copies on 2 different media with 1 offsite -- and, importantly, you should be rotating your archives on to new media more often than you have been. High-resilience disk-based storage is relatively cheap and you should use that for your "primary" archive copy. Don't forget to check application versions etc -- you may be near the point where you'll need to re-write old data to new formats... I wouldn't know -- and I'm not sure I'd trust anything over Retrospect for recovering RS's proprietary format (tar tapes would be a different matter). Best to avoid any problems with "3-2-1", media rotation, regular checks that you can retrieve files, and so on.
  25. Don't know about best, but what I'd try is:
  • Rebuild to "tapeset1", starting with the first tape, expecting it to fail at the end of the tape as you've described
  • Rebuild to "tapeset2", starting with the first tape, expecting it to fail...
  • Take tape 1 out of the drive!
  • Repair "tapeset2" and, when it says insert tape 1, mark tape 1 as missing and continue with tapes 2 and 3
You've now got the most data back that you can, albeit in two sets. You could then try and combine them by copying "tapeset1" to a new "diskset1", then "tapeset2" to that "diskset1" -- "diskset1" can then be moved to your newer machine for the conversion. Your choice as to whether "diskset1" is a File Set or Removable Media Set. You could, perhaps should, copy the two tape sets to two disk sets, convert them to the new format, then combine them -- it'll take longer but might be "safer". I'd also start to wonder about the chances of me ever needing to go back to data from 10+ years ago and whether it's worth all this extra work! That'd be a struggle between my innate laziness and my OCD, but you may have compliance issues to satisfy.