
Nigel Smith


Everything posted by Nigel Smith

  1. Since I'm in testing mode with the latest RS... I've just backed up a latest-DSM share over AFP, so it's possible. That you are seeing scanning start (and no "Retrospect has detected that it is not listed under Full Disk Access" warning) suggests that FDA is enabled. That RS then cannot find the files it has scanned, in order to back them up, suggests a network disconnect between those two phases -- possibly triggered by whatever is throwing the shutdown alert, though it could be that the alert is the result of the disconnect. What's the actual alert text? Is your test a manual run, rather than sitting waiting to watch the out-of-hours schedule? What else is the trashcan doing? Any task or power scheduling on the Mac or Synology? Any non-default options/preferences in your script or RS server?
  2. This is almost my situation, but with a fibre channel ADIC library with LTO-2! Yes, it's the SANLink that causes the problem, and I see it a lot -- as in almost every tape. I can put up with it because I'm still recycling through old tapes and so lose little when it happens, but I'd be looking for other solutions if I was using new LTO-6s... Note that I haven't updated Mac OS or moved to the latest SANLink drivers, preferring a working backup solution with tolerable problems to a potentially fubared situation! If you've updated you may have better results than me. Clean install onto a test machine of the RS16 trial was totally hassle free. Once I'm happy with my tests I'll be trying the upgrade of the old system (which will then be moved to new hardware), so thanks for the warnings -- though, to be honest, it could be ten times worse than you describe and still be less hassle than meeting our purchasing compliance rules to buy the software in the first place!
  3. Well, I think we've checked all the obvious things -- if nothing else, you've a bunch of answers ready for a Support ticket 😉 I agree that it's really annoying that what worked in v12 is failing in v16, and maybe they can quickly work out why. Personally, I'd be confident about the backup. You've proven to yourself that at least some of the "didn't compare" files are, in fact, identical in terms of data. While the only way to be almost sure is to restore and manually compare every single one, a reasonable random sample should be enough to quell any disquiet. But Retrospect themselves say that the Verify step is optional, and tell you to turn it off for Cloud backups. So the main problem at the moment (since you can ignore those annoying error messages by not logging them!) is that the failed compare step means the files will be backed up over and over again even though they haven't changed. Either put up with the wasted resources or turn off the Verify step, problem solved...
  4. Yep -- does exactly what it says (which is maybe why it doesn't get mentioned). In practice it means you get a smaller catalog to store in exchange for a performance hit when reading/writing that catalog. Whether that's worthwhile depends on so many things particular to your setup that it's impossible to say -- try it and see. I don't compress catalogs, on the (probably bogus!) assumption that disk space is a lot cheaper than processing power. Glad you got an answer on this. Whilst it could certainly be reported better by RS, the end result would be the same -- the tape is "finished" (for whatever reason) and you have to add another. Perhaps I've just got used to it -- my system, which by any measure shouldn't work at all! -- does this quite often, when the data flow from Mac to tape library is interrupted. If you get this a lot, start your troubleshooting there. No comment on the rest because I'm still running an old version of RS on the Mac -- although I'm supposed to be updating soon, and now I'm starting to worry!
  5. Win 7 can do SMB2.1, IIRC. For the NAS, it will depend on which version of ReadyNAS OS you are running. The easiest way I know for you to check is, madly, to connect to the share with one of your Macs and then in the Terminal type:

  smbutil statshares -m /Volumes/sharename

  or, to list them all:

  smbutil statshares -a

  Since your Mac will start high and negotiate down to find a match, you'll see the best the NAS can provide in the "SMB_VERSION" line for that share. I think NAS OS 4 and 5 only support SMB1 (SMB2 was experimental, and could be worth a try). You could consider an unsupported upgrade to NAS OS 6 -- but that seems a huge amount of work (and risk, given the current state of your backups!) for something that might not help... However... Which suggests it might be something along the path to that directory that's causing the problem. Have a careful look at all the directory names above, and make sure there are no strange characters -- easiest done by ssh-ing to your NAS and using command line tab completion to list the whole path. Reason being that it is very easy to create SMB-invalid names on a Mac, eg a trailing space in a folder name, which the Mac SMB client then cleverly encodes so it doesn't cause problems on the server -- but it can cause problems further on, if a later process/client can't cope with the encoding. (Springs to mind because I'm currently having problems with a data migration where some users have used spaces/periods at the end of folder names, along with angle brackets, symbols, forward slashes... Grrrr!)
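  As a quick sanity check for those SMB-hostile names, something like this could flag suspect components in a path. A rough sketch only -- the character list follows the usual Windows naming rules rather than any particular SAMBA config, and the function name is mine, not anything from Retrospect:

```python
import re

# Characters invalid in Windows/SMB names, plus trailing spaces or periods,
# which a Mac SMB client may silently encode (assumption: standard Windows
# naming rules apply; a given server's SAMBA config may differ).
INVALID_CHARS = re.compile(r'[<>:"\\|?*\x00-\x1f]')

def smb_name_problems(path):
    """Return (component, reason) pairs for suspect names in a /-separated path."""
    problems = []
    for part in path.strip("/").split("/"):
        if INVALID_CHARS.search(part):
            problems.append((part, "invalid character"))
        if part != part.rstrip(" ."):
            problems.append((part, "trailing space or period"))
    return problems
```

  Run it over each failing server-path; anything it flags is a candidate for the "cleverly encoded" name problem described above.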
  6. I guess he did, though now I'm even more confused -- that means he posted about his problem 4 hours after he'd started backing up a restored volume, and at least 4-plus-however-many-hours-a-TM-restore-takes after he encountered it. It's also strange that the backup required 1TB for 844GB of data -- I'd assumed that was RS trying to cope with the TM hard links, but obviously not. Point still stands -- you can back up a TM volume with Retrospect and you can then restore files/folders from that backup (though I doubt you can restore the entire volume and still have it working as a Time Machine volume). Agreed that you can't seamlessly switch from TM to tape -- but you could (probably) back up the TM volume to tape, then continue backing up the client while taking advantage of the deduplication of any files already migrated from TM (assuming "exact path match" is turned off). Personally though, I'd just duplicate the TM volume with CCC and put that copy in a safe place, carry on using the TM volume as-is, and start also backing up with RS. Whoever said we must only use one or the other, and not both?
  7. There's no sensible reason why changing the log level would change how the comparisons would work -- it's more likely that whatever was causing the "change" didn't happen this time (unless you also changed some other options?). Again, an obvious thing would be the differences between Mac and Windows permission schemes, and how the NAS manages and presents them. Simplified, even when using ACLs rather than POSIX mode bits, your NAS can't accurately replicate the Windows security schema. How this is handled depends on implementation, but I've seen before where a share accessed by both Macs and PCs can end up in the middle of a tug-of-war, the metadata changing depending on which platform last accessed it. Hence the advice about turning off the security info option, and also the question about CIFS (less likely to happen if everything is actually using SMB2 or 3).
  8. Sorry -- bolded the wrong word. I'll try again -- Mac clients. Lennart's issue was with a Mac server backing up a mounted SMB share and it looks like, in that situation, the server considers itself a client with respect to RS's client options. OP has a PC server backing up a mounted SMB share, so Mac client options won't help. But, assuming the same mechanisms that Lennart revealed also apply to Windows, we could look at the Windows client options. There's no obvious option matching the Mac's "Use attribute mod...", but the most similar is the "security information" option. Of course, this is Retrospect -- so a Mac-client option specifically stated not to affect Windows may just solve the problem!
  9. People often use CIFS and SMB interchangeably -- vaguely correct originally, but SMB2 (and now 3) are different beasts altogether. OP may just be loose with his terminology, but it may be that he's forcing a CIFS connection either deliberately (old software that requires it) or accidentally (old settings that can now be changed). If it's a metadata issue, the connection protocol might matter... The PS you added still doesn't apply to OP's situation. That setting is for Mac clients -- "No option in this category affects Windows or Linux clients" (last line of p370). The closest similar option I could find for mounted shares was the Windows Security one already mentioned. I still think the clue is in the consistency -- it looks like none of the previously successfully compared files were backed up again, while all the "didn't compare" files were both backed up again and still didn't compare. Find the commonalities within, and the differences between, the two groups and we could solve this.
  10. Except he did back up his Time Machine disk -- there's no mention of an initial TM restore. And I've done it before, too. It can be horribly slow, but so can a TM restore if you have a lot of time points on the volume. Will he be able to restore a TM volume backup to an external disk and have it seamlessly become a TM volume again? I don't know, haven't tried it, but the log warning that "Copying hard-linked directories (such as those created by Time Machine) is not supported" suggests not (which is why I suggested the disk image route if that's what he actually wanted to do). Does that matter anyway? You can still restore files/folders from each TM time point, so the data TM has backed up can be "made safe" with RS.
  11. Probably not -- I think that's the entire message. You could mess around with log levels in "secret preferences" (Ctrl-Alt-P-P on a PC) but I don't think you'll get anything more useful. But what you've given us may be enough. Appended/deleted data would give you "<Path>: different data size" in the log, connection errors are more explicit, etc. A plain "didn't compare" is generally database-type files whose contents can change without the size altering (sounds like too many for it to be that) or differences in metadata (reinforced by the fact that your restored files are, data-wise, identical to the originals).

  David's link above is for Macs, but suggests another approach -- it may be problems with security information, so try turning off "Back up folder security information from workstations" (the backup set's Options->Windows->Security). That assumes that you are mounting the NAS as a network share on the RS PC -- you haven't said if you're doing that or have installed RS Client on the NAS -- and, hopefully, you're using SMB rather than CIFS.

  Is there any correlation between files which did successfully compare and their server-paths, vs those that failed and their paths? A special character in a folder common to the failures but not the successes? To have exactly the same file-set fail comparison twice in a row suggests some commonality -- you've just got to find it!

  If you can't, well... there's a lot that can go wrong here -- a btrfs-formatted volume presented by a Linux OS over SMB (and maybe also AFP/CIFS) to clients with wildly varying security models... It's amazing it works at all! If you've recently bought RSv16 you could raise a support ticket -- the engineers may be able to tell what changed between v12 (which worked) and v16 (which doesn't entirely work). But the good news is that your data is safe, as you've proved, even if you are re-backing up files unnecessarily.
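  If you'd rather automate that path-correlation hunt than eyeball the log, a rough sketch (the function and the example path lists are mine for illustration -- you'd feed it paths you've already pulled out of the Retrospect log, and it assumes the failed list is non-empty):

```python
import os

def common_failure_dirs(failed_paths, ok_paths):
    """Directories that contain every failed file but none of the OK ones --
    a crude way to spot a 'special character in a folder common to the
    failures but not the successes'."""
    def ancestors(p):
        dirs = set()
        d = os.path.dirname(p)
        while d and d not in dirs:
            dirs.add(d)
            d = os.path.dirname(d)
        return dirs

    in_all_failed = set.intersection(*(ancestors(p) for p in failed_paths))
    in_any_ok = set.union(*(ancestors(p) for p in ok_paths)) if ok_paths else set()
    return sorted(in_all_failed - in_any_ok)
```

  Anything it returns is a folder worth inspecting character-by-character over ssh.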
  12. As David intimates, the Time Machine format is pretty tricky. Add to that the recent OS X security changes and it's hardly surprising that RS struggles to cope. And anyway, what are you asking it to back up? Every timepoint in Time Machine is a mish-mash of files and links, resolved on the fly -- would you like RS to resolve all those links for every timepoint and back up all the files, or just the latest and ignore "previous versions"? I can see two possible solutions, depending on what you want:

  • Just the latest (or any single) timepoint -- use TM to restore that timepoint to an external HD, use RS to back that up
  • All the things -- create a disk image of your TM backup folder, use RS to back that up. If you ever need to access your old TM backups: retrieve the disk image, mount it, use it as a TM drive

  There may be other ways, eg a block copy to a second drive for a true backup of your Time Machine volume, depending on what you are actually trying to achieve here.
  13. How is this formatted? How are you connecting to it to do the backups? Most likely explanation is that the metadata doesn't match for some reason, eg the file is being "touched" but not changed between backup and verification, but without examples from your error log it's difficult to tell.
  14. IMO, the RS engine will *have* to be running -- the report is *generated* and sent at the selected time. So you'll have to find a way to make RS run around the time you want the email to be sent (or change the time to when RS is running!), or go round RS somehow. Maybe:

  1. Use a spoof scheduled script to start RS at 6.59 and quit it on finish -- poll for definitely-not-there shares, a script with no clients but a 7.05 shutdown time might work, or take this chance to back up your server settings, catalogs, etc.
  2. Use Windows scheduler to start RS at 6.59 and quit it after 7
  3. Schedule your email to go after the last script of the day has finished, with shutdown after that -- you should know from experience how much leeway you need to build in so the script will finish before the mail gets sent
  4. My favourite 🙂 -- send an email after every backup event, filter with your email client and post-process into your own database. You can get much more detail than the reports offer, find out many more "fun facts" eg weekly churn on a daily backed-up client, better alerting on failed/missing clients -- even build your own report "dashboard" to report the things *you* care about.
  5. Generate your own emails from RS's log file (which appeals to the nerd in me, but is more pain for the same benefit as the last)

  Unfortunately, while you can use Script Triggers to act on RS events, I know of few ways of *telling* RS to do something from the "outside" -- no "and now email the report" command. But I'm a few versions behind, so that may have changed. But you could use an "EndApp" trigger to eg fire up a script to do no. 5 as the last act of the day.
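  For option 5, the log-scraping can start very small. A hedged sketch only -- the line format matched below is invented for illustration and won't match your Retrospect version's operations log verbatim, so adapt the regex to what your log actually contains:

```python
import re
from collections import Counter

# Hypothetical log-line shape: "- <client>: <message>". Check your real
# operations log and adjust before relying on this.
LINE = re.compile(r"^-\s+(?P<client>\S+):\s+(?P<msg>.+)$")

def summarise(log_text):
    """Tally per-client compare failures from a log excerpt, ready to email."""
    errors = Counter()
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m and "didn't compare" in m.group("msg"):
            errors[m.group("client")] += 1
    return "\n".join(f"{c}: {n} compare failure(s)" for c, n in sorted(errors.items()))
```

  Pipe the result into whatever mailer you already have (smtplib, blat, etc.) from an "EndApp" trigger, and you've got your own nightly report.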
  15. Or, since he knows the time the script needs to run, he could use Windows Task Scheduler to launch RS when appropriate. Or even, if his BIOS supports it and he doesn't mind the security implications: set the PC to boot at a certain time and auto-login, set Windows Task Scheduler to fire up RS, use script hooks to monitor and shut down both PCs when complete! These are computers -- we should be getting them to do things, instead of having to remember to do them ourselves! 😉
  16. Not at all -- you can, for example, schedule Proactive to run only for certain hours of the day. So OP could set Proactive to run from 2am-6am every day, with a 20-hour interval. If the server is running during that time the server will be backed up, and it will also back up the client if that's available. No client is a "graceful fail", no server and nothing happens 😉 What you can't do with a single Proactive script is set the order in which clients should be backed up, so no good if that's important. You can't shut down the backup server, as part of the script, when it's finished. And using a schedule as above would mean you couldn't use the "Early backup" request to get a daytime backup, so you'd have to make another script for that -- you *might* be able to set a second Proactive script, running from 6am-2am, with a ridiculously large interval setting, that allows earlys, but I haven't tried that myself... Proactive is very flexible -- which is sometimes a boon, sometimes a pain -- and is always worth considering in any situation where backup routines can vary (presence of clients, volumes, target sets, etc).
  17. Assuming incremental backups, no need to delete -- it'll just make the "proper" backup run faster because most has already been done. And consider using Proactive (unless standard scripts do something you need that Proactive doesn't), which is made for exactly this "sometimes here, sometimes not" situation. Re-reading your OP, it sounds like both computers get shut down and one is the RS server while the other is the client. Have a play with the "Look ahead time" in the general (rather than script) Schedule Preferences. I'm starting to think it's *because* you shut down the server that you are getting the "catchups" -- look ahead sees you've got something scheduled within the next 12 hours so makes sure it runs at the next opportunity (I'd assumed that you had the server running 24/7 and it was two clients you were restarting). It may be that setting "Look ahead" to 0 solves your problem, but that might require you to leave RS running on the server rather than quitting/autolaunching for the next scheduled run.
  18. Have you tried the "Options" section of your script? There's also scheduling options there, which only apply to that script (though the defaults reflect the Schedule settings in General Prefs, which might make you think otherwise...) and so would have no impact on manual backups. Set your "Start", "Wrap up" and "Stop" times to suit your working practices and required backup window and you should be good.
  19. Interesting... Most "transient" files are "here today, gone tomorrow" -- think cache files etc. But, for whatever reason, Windows doesn't seem to delete these update packages after they have been used. All I can think of (aside from clumsiness by MS!) is that they are also used when you uninstall System updates. So, to be safe, I'd probably exclude them from backups but would only delete them from disk once I was happy that I wouldn't need to uninstall.
  20. That's the information I was looking for... So *if* you are only backing up one volume *and* that volume is backing up/verifying successfully *and* you can restore from the backup *and* you get the un-named volume error *and* Retrospect carries on regardless -- I'd just ignore it. If the error is causing other problems, eg killing the script while there are still other machines to process, re-arrange things so the erring machine is the last to be done. If the error is truly killing the system, eg popping a dialog that must be dismissed before continuing, I'd look into script triggers and a GUI-targeted AppleScript to automatically dismiss the dialog so RS can continue. Some things are easier to work round than to fix 😉
  21. No -- I'm suggesting that it is successfully scanning, backing up, and verifying the storage volume, and is *then* failing to scan a nameless volume. What's the output from "lsblk --fs", without the device selector? I'm assuming that, as with a Mac client, you can set things to back up the complete Linux box, only selected volumes, etc. Perhaps a previous "only the storage volume" tick-box was forgotten in the transfer to the new server.
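  If it helps to automate that check, here's a rough sketch for spotting unlabelled filesystems in flat `lsblk -l --fs` output. The column positions are read from the stock NAME/FSTYPE/LABEL/UUID/MOUNTPOINT header, so adjust if your util-linux prints something different (tree-style default output with its branch glyphs would also need the `-l` flag, as assumed here):

```python
def unlabelled(lsblk_output):
    """Names of devices that have a filesystem but no LABEL, parsed from
    fixed-width `lsblk -l --fs` output (column layout assumed, not guaranteed)."""
    lines = lsblk_output.splitlines()
    header = lines[0]
    fstype_start = header.index("FSTYPE")
    label_start = header.index("LABEL")
    uuid_start = header.index("UUID")
    missing = []
    for line in lines[1:]:
        fstype = line[fstype_start:label_start].strip()
        label = line[label_start:uuid_start].strip()
        if fstype and not label:
            missing.append(line[:fstype_start].strip())
    return missing
```

  Anything it returns is a volume RS would see as nameless -- a candidate for exclusion from the backup selection.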
  22. It looks like you've got an un-named volume on the Linux client, and RS isn't happy about volumes without a name. Try setting things so that only the (named) storage volume(s) is/are backed up.
  23. Crazy suggestion -- try naming the new share "2nd_Online_Backup" instead, and see if that solves it. Reason being, different implementations of SAMBA have different lengths of "valid" names, and using the same first-13 characters in each may be confusing something (eg if it only parsed the first 8 characters). Otherwise, knowing your NAS make/model might help.
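  If you want to test the truncation theory before renaming anything, a tiny sketch -- note the 13-character limit here is just the shared prefix length of the two names under discussion, not a documented SAMBA limit:

```python
def truncation_collisions(names, limit=13):
    """Group share names that become identical when truncated to `limit`
    characters (limit is illustrative, not a real protocol constant)."""
    seen = {}
    for name in names:
        seen.setdefault(name[:limit], []).append(name)
    return {k: v for k, v in seen.items() if len(v) > 1}
```

  Renaming to "2nd_Online_Backup" sidesteps any such collision because the names now differ from the first character.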
  24. Does the failing PC have wireless, ethernet, or both? If it was working and now isn't, you haven't made any changes to Retrospect client or server, and you also have a PC that it *is* working on... I'd say the first thing to do is to play "spot the difference" between the two PCs -- why are they behaving differently to the same cues?
  25. Mac screenie, but repeated mentions of Windows -- I'll assume you've got Windows client problems... See if you can find your current client version. Uninstall it, restart the client machine, re-install using a fresh download from here -- personally, I'd start with the most recent and, if that was still problematic, work my way back version by version. If you don't want to re-register the client with the server you could take a punt and simply re-install over the top of the old client. I've just installed the latest client on a clean, up-to-date, Windows VM without issues, so it looks like something specific to this instance rather than a generic Windows problem (but I don't deal with Windows much, so I'm probably wrong...).