Nigel Smith


Posts posted by Nigel Smith


  1. Retrospect doesn't do a UNIXy tree-walk, where it could simply skip "/backup/FileMaker/Progressive/" and everything below it. Instead it scans *every* file on a volume and then applies its selectors to decide what to do. I'd assume from the errors that it is getting partway through scanning those directories' contents when, suddenly, they vanish.

    Whilst annoying in a simple case like you describe, it's also part of what makes the selectors so powerful -- for example, being able to exclude files on a path *unless* they were modified in the last 2 hours -- and why all the metadata needs to be collected via the scan before a decision can be made.

    Two ways round this. If you want to exclude most paths, define the ones you want as volumes and only back those up -- we only back up "/Users" so that's what we do, which also greatly reduces scan time. If you want to back up most but not all, which I guess is what you're after, use the "Privacy" pane in the client to designate those paths to exclude.
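    To make the scan-then-select behaviour concrete, here's a toy Python analogy -- nothing like Retrospect's actual implementation, just the two strategies side by side. A tree-walk prunes excluded directories and never touches their contents; a full scan visits everything first and filters afterwards, which is what exposes you to "file vanished mid-scan" errors but also enables selectors that need every file's metadata.

```python
import os

def pruned_walk(root, skip_dirs):
    """UNIXy tree-walk: prune excluded directories so their
    contents are never visited at all."""
    seen = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Remove excluded subtrees in place -- os.walk won't descend into them
        dirnames[:] = [d for d in dirnames
                       if os.path.join(dirpath, d) not in skip_dirs]
        seen.extend(os.path.join(dirpath, f) for f in filenames)
    return seen

def full_scan_with_selector(root, selector):
    """Retrospect-style: visit *every* file, then let the selector
    decide -- excluded paths still get scanned for metadata."""
    candidates = []
    for dirpath, _, filenames in os.walk(root):
        candidates.extend(os.path.join(dirpath, f) for f in filenames)
    return [p for p in candidates if selector(p)]
```

Same final file list either way -- the difference is that the second version has read every directory, which is where the "2 hours" style selectors get their information from.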


  2. 14 hours ago, gnoelken said:

    But, for me, it is now a matter of figuring out why the "Find Files" method does not work.

    Only thing left I can think of is an indexing issue with the set's database -- and that's assuming that there's a database, that the database is indexed, and that the index is used when searching... I'm guessing all the "missing" .nef files were backed up in the same session?

    You could always try a catalog rebuild and see if the problem persists -- but make sure your original catalog is safe and you don't overwrite it in the process!


  3. 4 hours ago, gnoelken said:

    When I start with "filename ends with .nef" the deleted folder "2016-04-Florida" appears and my current folder "2016-04-01 - Watercolor" does not:

    So "Watercolor" is on your D:\ drive, contains "*.nef" files, but those files don't appear when you search your backups for them. Almost sounds as if they haven't been backed up -- have you checked for any exclusions in your backup scripts?

    4 hours ago, gnoelken said:

    I am searching the whole set but when I look at the Backup Set I can only browse snapshots within it.

    You can also browse the entire Backup Set by doing a search with no criteria -- IIRC, on Windows it defaults to "Include everything" and "Exclude nothing" -- then browsing the results. It'll probably be a long list, you'll get every backed-up version of every document, but you'll at least be able to drill down to "2016-04-01 - Watercolor" and see what's in there. If you still don't see the .nef files that strongly suggests they were never backed up for some reason.

    I'd be inclined to do a quick test. Duplicate the backup script you've been using, define "2016-04-01 - Watercolor" as a volume, change the duplicate script to back up only that volume and ideally the destination to be somewhere new (stick, HD, share, cloud -- doesn't matter, it just pays to play safe and keep it separate from your "real" backups). Run it and see what happens -- do the .nef files get backed up?


  4. If I were you, I'd start again from scratch, but coming from the opposite direction. And remember that it pays to be as explicit as possible with selectors -- so files don't end with "nef", they end in ".nef".

    So start with only the "filename ends with .nef" selector. If that picks up all you expect, add "and Windows file or folder path starts with D:\Greg\" (remember -- explicit! Include the trailing backslash). Then, maybe a "Windows path of folder contains..." to get just your subfolder. But you may not need to even go that far if you can manually remove/select what you want from the results of the first filter.
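    A quick Python analogy of why the explicitness matters -- the filenames and paths below are made up for illustration, not from anyone's actual disk:

```python
# Hypothetical filenames -- note the one that ends in "nef" but isn't a .nef file
files = ["DSC_0001.nef", "DSC_0002.NEF", "holiday_nef", "notes.txt"]

def ends_with(name, suffix):
    # Case-insensitive, as Windows filename matching generally is
    return name.lower().endswith(suffix.lower())

loose  = [f for f in files if ends_with(f, "nef")]   # sloppy: picks up "holiday_nef" too
strict = [f for f in files if ends_with(f, ".nef")]  # explicit: only real .nef files

# Same story for paths -- without the trailing backslash,
# "D:\Greg" also matches "D:\Gregory"
paths = ["D:\\Greg\\2016\\DSC_0001.nef", "D:\\Gregory\\other.nef"]
sloppy_prefix = [p for p in paths if p.startswith("D:\\Greg")]    # catches both!
strict_prefix = [p for p in paths if p.startswith("D:\\Greg\\")]  # trailing backslash
```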

    Selectors can be tricky beasts, which don't always behave the way you'd expect -- or the way selectors anywhere else would! But they work well once you master their own particular logic. If they then don't show what you expect it's usually a wrongly-chosen snapshot or similar, so search the whole set.


  5. 3 hours ago, Malcolm McLeary said:

    Hmmm ... I didn't provide details

    My apologies, I'd assumed that:

    14 hours ago, Malcolm McLeary said:

    In my Office, Retrospect is installed on a...

    ...was a detail, in that RS was installed on a machine that was just lying around in your office rather than a secured server room. In such situations we've used lockable security enclosures and hard-wired power so cleaners/users/random passers-by can't "accidentally" power cycle the machine after "accidentally" plugging in a bootable USB, etc.

    And I agree, requiring a login is a major minus for RS on Windows.


  6. 6 hours ago, Malcolm McLeary said:

    Just saying that this configuration is not best practice ... just necessary given the application's design.

    Totally agree with all you wrote -- which is why I asked, since our Win RS server is in a locked and alarmed server room to which access is tightly controlled and so it isn't such an issue. We've also been burnt enough times by "auto-restarts" (on both Win and Mac) that we stop them wherever possible -- we'll control when updates are applied thank-you-very-much, and if a machine gets shut down because of power loss we both want to know why and to make sure it has come back up cleanly -- so having to log in isn't an issue, we're doing it anyway.

    I would add that there are plenty of ways to physically secure a machine in a more open situation such as yours, and that having RS run as a background process wouldn't solve any of the many other security issues that arise from physical access to a computer.

    It's obvious from the length of time this has been an issue that Windows's security features make switching the RS Engine to a background daemon a non-trivial exercise, else it would have been done already. Until it does happen we'll just have to find workarounds -- and, being a Mac guy, I'm particularly partial to your idea of repurposing that old Mac Mini 🙂 


  7. 14 hours ago, Malcolm McLeary said:

     I use a Windows 10 box to run Retrospect and I don't want to have to have a desktop session running all the time.

    What's the issue with running a desktop session all the time, especially on a headless machine? OK, as a mainly Mac guy it grinds my gears that it is necessary on Windows -- but my Windows-administering colleague assures me there are no particular implications assuming the box is properly secured (and he'd slap me silly if it wasn't 😉 ).

    Serious question in case there's something he's missed, or our particular situation mitigates an issue that would be truly serious in the outside world (in which case I should stop advising people to do similar!).


  8. On 6/30/2020 at 4:36 PM, francisbrand said:

    When ever I open the app to look at my backups it startas OK and shows backups running (ProActive at the moment) but then freezes.

    Which freezes -- the Console app or the Retrospect Engine (or both)? Do you have access to a second Mac you could run the Console app from instead? If so, what happens? Is there time to disable some or all of your scripts? If so, turn all scripts off (you may need to crash/restart a few times to get this done if you have a lot) and see if the crash happens even when the Engine is "idle".

    OS Console logs would be useful here -- probably the easiest way, since we don't know what we're looking for, is to note the time you launch the RS Console app and the time everything crashes, then filter to between those times and look for anything relevant.

    As David says, if v17 is a recent purchase/upgrade then raise a support case and let them do the hard work! But the more information you can provide the quicker the resolution.


  9. On 6/17/2020 at 7:30 PM, j.a.duke said:

    Any suggestions on what I should look at or how to get this all working again?

    Start with the clients on your internal network -- are they getting backed up Proactively?

    Are the remote clients connecting to your network over a VPN, and you're then catching them with the Proactive script? Or are they truly outside -- check your server is still available on ports 497 and 22024 to the outside world.


  10. On 5/30/2020 at 11:12 PM, MrPete said:

    The problem with belt-and-braces: for some of us, the same computer can show up with different IP addresses in the same overall network (ie connecting to different subnets.)

    Totally agree, with both this and your previous post. We never assign a static just for Retrospect -- we simply restart the client when needed (though the occasional missed backup can be annoying, it isn't the end of the world, and "moving" clients often miss backups anyway). But on a relatively "fixed" home or small business network, where IPs are usually only DHCPed because it's the default option, b'n'b helps with problems caused by... let's say "less compliant"... DHCP servers.


  11. 3 hours ago, DavidHertzberg said:

    So am I going to investigate this further in my installation... No I am not

    Smart move, IMO!

    These are deep waters, best left unrippled. Especially when you remember that network communication is not directly via IP address, but is next-hop routing via the mapping of IP addresses to gateway/MAC addresses in ARP tables. Table updates aren't instant, which is why I can quite easily see how my guess might happen -- step 5 is based on the MAC address of the previously detected client, obviously still "valid" since the interface used wasn't changed (just the IP address). But by the time we get to step 7 it's aged out/replaced, the IP address is no longer valid, and you get a comms fail.


  12. Not so fast...

    3 hours ago, DavidHertzberg said:

    Then, after successfully finishing scanning, the "backup server" said "Wait a minute, that's not the Source I'm supposed to back up"

    This is what I think might be happening (and why a WireShark run would help):

    1. Client is on "Automatic" location -- x.x.x.202
    2. You switch to "Retrospect Priority", client address now x.x.x.201, and immediately run the server script
    3. Server multicasts to all devices, asking for client
    4. Client responds, but we know the client doesn't instantly reflect a network change, so says "Yay! Me! Here and ready on x.x.x.202!"
    5. Scan gets done
    6. By now, the client is listening on x.x.x.201:497 (or, rather, is no longer listening on x.x.x.202:497)
    7. Server initiates the backup "Hey, x.x.x.202, give me all these things!"
    8. Silence...
    9. More silence...
    10. Server assumes network communication has failed and throws -519

    Step 4 is total guesswork from me -- all we know is that there must be some mechanism for a multicasted client to tell the server its IP address. If I'm right, they might be able to fix this on the client, though it may depend on the OS promptly informing all network-using services of an IP change (the client unnecessarily spamming the OS for updates would be horribly inefficient). Or they might be able to fix this on the server, with a re-multicast after step 8's failure to pick up the new address.

    But, even in these days of devices often changing networks, I doubt the above crops up very often and probably isn't worth fixing (directly, at least). x509's "binding to a bogus address" is much more common, and if solving that solves other issues too -- bonus!
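    The steps above can be sketched as a toy Python model -- purely illustrative of the guessed race, nothing like Retrospect's real code:

```python
class Client:
    def __init__(self, ip):
        self.ip = ip

    def respond_to_multicast(self):
        # Step 4 (guesswork): replies with the address it *thinks* it has
        return self.ip

    def change_ip(self, new_ip):
        # Step 6: the OS finishes applying the network change
        self.ip = new_ip


class Server:
    def locate(self, client):
        # Steps 3-4: multicast out, cache whatever address comes back
        self.cached_ip = client.respond_to_multicast()

    def backup(self, client):
        # Step 7: connect to the *cached* address, not the current one
        if self.cached_ip != client.ip:
            return -519   # steps 8-10: silence, then "comms failed"
        return 0


client = Client("x.x.x.202")
server = Server()
server.locate(client)            # scan happens against .202
client.change_ip("x.x.x.201")    # address change lands mid-job
result = server.backup(client)   # -519
```

If you want real evidence rather than a toy, capture the actual traffic during a failing run -- something like `tcpdump -n port 497` (or the equivalent WireShark filter) on the server should show whether the backup attempt really does go to the stale address.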


  13. 1 hour ago, DavidHertzberg said:

    (That error number is an argument for my side in the 6-months-back dispute with Nigel Smith over whether the "backup server" uses the multicast Piton Protocol in finding a Source defined with a correct MAC address using Add Source Direct.  My "backup server" would have found the MBP by name if it had used the Piton Protocol, or it would have issued a -530 error.)

    You're viewing the Piton protocol too narrowly. It's the protocol(s) by which server and client communicate and includes discovery, access and data transfer (amongst other things) and is used in the unicast (defined IP client, as above), broadcast and multicast "location" (using that since "discovery" usually means "first time ever finding a client" in RS) of a client on the network and all subsequent communication.

    You'll have to do a lot more digging with eg WireShark to know exactly why you saw what you saw -- I'd expect it to throw a -530 (because the client was still listening on x.x.x.202:497) or just work, not throw a -519 -- but I suspect that permanently binding the client to x.x.x.201 with "ipsave" might eliminate the issue.

    -530 is quite clear -- the client couldn't be found. That -519 is separate implies that the client could be found but then there was a problem, but I'm probably reading too much into it. All we really know is that "network communication failed", for whatever reason.


  14. 17 hours ago, DavidHertzberg said:

    So you wouldn't have to do scripting or get into your router's assignments table

    Would just warn that different routers' DHCP servers behave in different ways. Some treat the address blocks reserved for statics as inviolate, some will continue to offer those addresses when no MAC address has been set, etc. I always belt-and-brace, putting MAC addresses in the router's table and setting static IPs on the clients, when I need a definitely-fixed IP.

    Also, some routers force a certain (often limited) range for statics and others let you do as you will, so check your docs before planning.


  15. 18 hours ago, denno said:

    I have a number of different folders I'm backing up with different scripts/schedules from my Mac and external drives. Is the typical approach to have one disk media set as the destination for all of those sources or separate media sets based on location? (Or personal preference?)

    There are pros and cons to both approaches. But consider this first -- how will you restore your system disk if there's a disaster, have you tested it, and does splitting it into separate "Favourite" folders result in way more work than the benefits are worth?


  16. 16 hours ago, MrPete said:

    That's pretty complex compared to restarting the client ;) ... but sure it could be scripted.

    Of course -- would I offer anything simple? 😉

    More seriously, if the client is "confused" by network interfaces when it starts up, can we guarantee it won't also be "confused" on a restart? While it should be better, since it is restarting when there is (presumably) an active interface, it might be safer to explicitly tell the client what to do rather than hoping it gets it right.

    And a batch script triggered by double-click is a lot easier for my users than sending them to the command prompt.

    As always, horses for courses -- what's best for me isn't best for a lot of people here, but might nudge someone to their own best solution.


  17. On 5/15/2020 at 9:01 PM, MrPete said:

    * (Alt workaround for static clients: https://www.retrospect.com/en/support/kb/client_ip_binding ).

    Not just statics -- you can also use it for DHCP clients. And it wouldn't take much work to write a script that would find the current active IP and do a temporary rebind. On a Mac you can even tie it in to launchd using either NetworkState, or with WatchPaths on /private/var/run/resolv.conf (although, in my experience, Mac clients do get there eventually and rebinding is only necessary if you are in a hurry to do something after a network change).
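    For the launchd route, a minimal sketch of the job definition -- the label and the path to the rebind script are hypothetical, and the script itself would need to do the actual rebinding (see the client_ip_binding KB article linked above). Saved to /Library/LaunchDaemons/, it runs the script whenever resolv.conf changes, i.e. on most network changes:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.retroclient-rebind</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/rebind-retroclient.sh</string>
    </array>
    <key>WatchPaths</key>
    <array>
        <string>/private/var/run/resolv.conf</string>
    </array>
</dict>
</plist>
```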


  18. On 5/14/2020 at 11:28 AM, DavidHertzberg said:

    To cope with these problems, I have made the following two suggestions: (a) Run a disk-to-tape script nightly, but make it as short as possible by having it add to tape only what the Snapshot of the latest disk-to-disk Backup shows has just been backed up

    From my earlier back-of-an-envelope calculations, both D2D and D2T should fit in overnight. More importantly, because he isn't backing up during the day, the "to tape" part can happen during the day as well (my guess is that he was assuming nightlies would take as long as the weekend "initial" copy, rather than being incremental), so he should have bags of time.

    On 5/14/2020 at 11:28 AM, DavidHertzberg said:

    (b)  Do something to prevent Veeam seemingly re-making backups of files it has already backed up.

    I know nothing about Veeam's file format, only that it's proprietary (rather than eg making a folder full of copies of files). It may be making, or updating, single files or disk images -- block level incrementals may be the answer. Or it may be that Veeam is actually set to do a full backup every time...

     

    On 5/14/2020 at 11:28 AM, DavidHertzberg said:

    P.P.S: I added to my Support Case a suggestion that the Retrospect term "Snapshot" be changed to "Manifest". 

    It is a snapshot, in both computerese and "normal" English -- a record of state at a point in time. I don't think the fact that it is different to a file system snapshot, operating system snapshot, or ice hockey snap shot 😉 requires a different term -- the context makes it clear enough what's meant, IMO.


  19. 16 hours ago, x509 said:

    Question:  Does Piton depend on SMB V1?  That protocol is supposed to be disabled due to serious security vulnerabilities.

    Second question:  Does Apple Bonjour interfere with Piton?

    Third question:  Does your head hurt now?

    No, no, and no 😉

    Long time since I've seen Norton firewall, but make sure that you are opening port 497 on both TCP and UDP protocols (direct connection only need TCP, discovery uses UDP). Windows also has a habit of changing your network status after updates, deciding your "Home/Private" network is "Public" instead, if Norton makes use of those distinctions (Windows Firewall does).
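    If you want a quick sanity check from another machine, a few lines of Python will tell you whether the TCP side of 497 is reachable -- the host address below is a made-up example, substitute your client's:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.
    For Retrospect, port 497 closed/filtered points at the firewall."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical client address):
# tcp_port_open("192.168.1.50", 497)
```

Note this only probes TCP; the UDP side of 497 (used for discovery) can't be checked this way since UDP is connectionless -- for that, the Multicast test below is your friend.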

    Easiest way to check for discovery is Configure->Devices->Add... and click Multicast -- is the device listed? Also try Subnet Broadcast. 

    I have no particular problems with DHCPed PCs at work, so it's something about your setup. As David says, you could get round it by assigning static IPs -- check your router documentation first, some "home" routers supplied by ISPs have severely limited ranges that can be reserved for static mapping -- which can also make life easier for other things, eg just use "\\192.168.1.x" to access a share instead of hoping Windows network browsing is having a good day...

    Question: Are client and server both on the wired network, or is one (or both) wireless?


  20. Thanks for that, David -- a very clear explanation. And you're right -- the thing that's missing is a definition of "active backup", which is unfortunate given that it is fundamental to how "Copy Backups" works.

    Indeed, that's the only place the term "active backup" appears in the User Guide! Which is disappointing, since one of the things that should really, really, be clear in any instructions for a backup program is what will be backed up.

    But from what you say we'll get similar results using either "Copy Media" or "Copy Backup" with grooming retention set to at least the number of backups in a "rotation period" -- in Joriz's case that would be 15 (Mon-Fri = 5 backups a week, 3 tape sets rotated weekly -- 5x3=15).
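    The retention arithmetic spelled out, in case anyone wants to adapt it to their own rotation:

```python
backups_per_week = 5      # Mon-Fri, one backup a night
tape_sets = 3             # rotated weekly
# Backups made before the same set comes round again --
# grooming retention should be at least this many
rotation_period = backups_per_week * tape_sets
print(rotation_period)    # 15
```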

    I'm starting to think that if you want everything, rather than a subset, on every tape set then "Copy Media" is more appropriate. But, again, I'd hesitate to say without proper testing.


  21. On 5/6/2020 at 2:21 PM, Joriz said:

    What kind of hardware are you actually using? Like a Mac Pro or a Mac Mini (with thunderbolt to SAS ?)

    Currently a 2014 Mac Mini in a Sonnet enclosure with dual interface 10GbE card. Attached to an old (but brilliantly reliable) ADIC Scalar 24 with a single LTO-2 drive via a Promise SANLink2 Thunderbolt to FC adapter (flakey, leads to a lot of prematurely "full" tapes, but the data is always good). Disk-wise we use a Thunderbolt-chained pair of LaCie 8Bigs for 84TB of space, but can overflow onto networked Synologys if required. The server itself, including RS catalogs, is backed up with Time Machine to an external USB drive as well as with RS.

    Next iteration will probably be a new Mac Mini with built-in 10GbE in a Thunderbolt 3 enclosure, still using the 8Bigs but permanently adding one or more Synology NASs for even more disk capacity ("private" network via the card while built-in handles the clients), and adding a SAS card (H680?) to connect to a Quantum Superloader 3 with single LTO-8 drive.

    As you can tell, there's a lot of using what we've got while upgrading what we can/must. That's partly funding issues, partly because I hate spending money (even other people's!) if I don't have to, but mainly because I like to stick with what I know works rather than deal with the inevitable issues with new kit.

    The above is with all due respect to our hosts. I've had a Drobo on my desk since soon after they came out in the UK, have been very happy with it, but relatively low storage density/lack of dual PSU[1]/rack mounting means chained 8Ds or similar aren't an option. And I've dreamt of having a BEAST in the room for years, but they are too deep for our racks (shallow because they have built-in chilled water cooling), and since we're on the third floor we have a loading limit -- the BEASTs have great storage density, but we'd have to leave the rack half empty because of the weight! If/when we relocate the room or replace the racks, BEASTs will be on my shopping list...

    I'll skip over the Windows PC running RS that came after the Mac Mini, when we were considering jumping ship after rumours of Apple letting the Mini die, because <shudder>Windows!</shudder>. The RS side isn't too bad -- oh look, it's Retrospect 6 with some new features! -- but, as a Mac guy, Windows always seems clunky to me. It's still running, still doing its job (in parallel with the Mac Mini), but it never feels right...

    As always, different requirements and situations lead to different solutions -- and I'd never hold up what we've done as a good example, just what works for us 😉

    [1] Yeah, I know -- "Why do you want dual PSUs when the Mini only has one?" Because, in my experience, it's a lot easier to recover/rebuild the server than a big storage volume after a hard power-off. And we've got a good UPS, so the most likely reason for a power-off is clumsiness in the rack (yanking the wrong lead!) or a PSU failure which we've never had with a Mini but have seen a few of in other server room devices -- expensive Apple components, for the win!


  22. 13 hours ago, DavidHertzberg said:

    To answer the questions "What is an 'active backup'?"

    etc...

    And this is my confusion: p120

    Quote
    • They provide different methods for selecting which backups get copied, such as the most recent backup for each source contained in the source Media Set; Copy Media Sets scripts always copy all backups.

    ...which implies that you can select whether to have all, some, or only the most recent while Copy Media is always all, and p121

    Quote

    ▪  Copy most recent backups for each source
    ▪  Copy most recent backups for each selected source
    ▪  Copy selected backups
    ▪  Copy all backups

    ...where "backups" is plural, even for a single source.

    While I realise no manual is big enough to explain every single thing someone might come across, "Copy the most recent backup for each source" wouldn't take up much more virtual ink if, indeed, that is what happens.


  23. On 5/10/2020 at 2:22 AM, DavidHertzberg said:

    Nigel Smith: Maybe this bug was why you wrote "I don't know what the practical difference is."

    I wrote that because p118 of the guide seems to describe "Copy Media" and "Copy Backup" as two methods of achieving the same goal, though "Copy Backup" also has options to restrict/select what gets copied. Which makes me wonder why they aren't the same thing with expanded options. Which makes me think I'm missing something, either in what gets transferred, what can be restored from the resulting set, or in how it is done.

    Since the first two are generally important (data integrity) and the last may impact time required (and therefore backup windows needed in Joriz's scheme) and I can't run tests at the moment, I haven't a clue which is better suited.


  24. On 5/8/2020 at 3:13 PM, j.a.duke said:

    The folders below that "Retrospect" folder are not visible in the console navigation.

    Retrospect makes a "Retrospect" directory on the media you select as a target -- any backups go in there, and RS manages folder structure etc., which is why it doesn't show you more in the console.

    So for the new media destination just select the directory containing the "Retrospect" folder, RS will use the "Retrospect" folder and create a new folder in there to store the rdb files (probably Smaller Backup:Retrospect Data:FT_FD:Retrospect:FT_FD-1) and you should be good.

    And no -- it's not immediately obvious that's how it works. I've quite a few volumes with "Retrospect" folders inside "Retrospect" folders where I've selected the wrong one. But if you think about the expected operation -- you pick a volume to store files on (rather than a sub-directory of a volume) -- and it becomes a little clearer.
