
Nigel Smith


Everything posted by Nigel Smith

  1. It sounds like you are using SANmp to mount the volume on the server OS, so it shows up as a "Local" volume to Retrospect Server. As such, it will be available whenever mounted and can't be removed/re-added the way a client can. It appears you have a problem with that particular volume on your SAN. Can you use Retrospect's catalog/logs to narrow that down to a specific file or folder if you try a new backup? If not, it's time for a binary search -- back up the first half of the folders at the top level of the volume; if that fails, the problem is in that half, and if it succeeds, the problem is in the second half. Do the same with the first half of the "problem" section, and repeat until you find the culprit. Is the problem file/directory important? If not, I'd simply make sure my backup (apart from that file/directory) was good, erase the volume via SANmp Admin, then restore to it. Apparently you should be doing this every 6-12 months anyway (!) as preventative maintenance -- more details here. If it is important then I suggest you contact SANmp support for suggestions -- whilst I normally have faith in Disk Warrior, the extra layer of abstraction/misdirection introduced by SANmp may be confusing things... Nige
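The binary search described above can be sketched in a few lines -- a minimal illustration (Python), where `backs_up_ok` is a hypothetical stand-in for "run a trial backup of this subset and check the log", and the folder names are invented:

```python
def find_bad_folder(folders, backs_up_ok):
    """Narrow a backup failure down to one folder by repeated halving.

    `folders` is the volume's top-level folder list; `backs_up_ok` is a
    hypothetical predicate that runs a trial backup of a subset and
    reports success. Assumes exactly one folder is causing the failure.
    """
    while len(folders) > 1:
        half = folders[:len(folders) // 2]
        if backs_up_ok(half):
            folders = folders[len(folders) // 2:]  # problem is in the other half
        else:
            folders = half                          # problem is in this half
    return folders[0]

# Simulated run where the (made-up) "Projects" folder is the culprit:
bad = find_bad_folder(
    ["Admin", "Archive", "Projects", "Scans"],
    lambda subset: "Projects" not in subset,
)
```

Each trial halves the candidate list, so even a volume with hundreds of top-level folders needs only a handful of test backups.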
  2. There's a way... but you won't want to use it.
Start a Restore job and select the "Search for files..." option, "Continue"
Leave the search as the default "Any" and a blank filename field, select the set you want to print, "Continue"
Select a restore destination (don't worry about disk space, you won't be restoring), "Continue"
After Retrospect has finished searching the sessions, click the little "Preview" button alongside the set details
Go through the preview list and click on every disclosure triangle which might have contents you want to print out
Select "Print..." from the File menu
(Recommended) Cancel the job once you grok the number of pages...
Even for a subset of files (I've done it for someone who thought "the name might include 'December' or something") it's a horrible job. What are you trying to achieve, and why? There may be another way. For example, if you want a hard copy of the files backed up from a client you can:
Go to "Past Backups"
Find the client's most recent backup and click the "Browse" button associated with that
Make sure the "Only show files..." box is not checked
Click "Save..."
...and you'll get a CSV file that you can further process and/or print. You could do that for each client in the backup set, which may be both quicker than the above and closer to what you actually require. Nige
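Once you have the CSV from "Save...", a few lines of scripting will trim it down for printing. A sketch using Python's csv module -- the column names and rows here are invented for illustration, since the real export layout may well differ:

```python
import csv
import io

# Hypothetical sample standing in for Retrospect's "Save..." export --
# check the real file's header row and adjust the column names to match.
export = io.StringIO(
    "Name,Path,Size\n"
    "report.doc,/Users/anne/Documents,24576\n"
    "notes.txt,/Users/anne/Desktop,1024\n"
)

rows = list(csv.DictReader(export))
# Keep just the columns worth printing, sorted by path for readability.
printable = sorted((r["Path"], r["Name"]) for r in rows)
for path, name in printable:
    print(f"{path}/{name}")
```

The same loop could append each client's export to one master list before printing, which is roughly the "do that for each client in the backup set" suggestion above.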
  3. Should be this screenshot -- they've simply linked "engine" twice rather than "engine" then "client". Henry, have you tried backing up the updated client yet? I still get a -519 error ("Can't track volumes"), but I'm assuming that's because I've only updated the client to 15.6, not the server. Nige
  4. There's clearly been a change in Mojave's behaviour since Retrospect published their instructions. I've had a google in case there's a way of adding non-apps via the command line, but no luck -- Apple's included tccutil allows you to reset privacy settings for apps, but not add or remove apps (or control panels etc). As it stands, we're waiting on updates -- either from Apple, to re-enable the old "add a control panel" behaviour, or from Retrospect with an "app-ified" client. We've already told our users not to update to Mojave without checking with us first so we can make sure they have other backup options in place (luckily they all will because we haven't started our RS rollout yet, but it's a good chance to check they are actually using those options!). Nige
  5. "Security & Privacy" isn't accepting .prefPanes (Retrospect Client) or bundles (InstantScan) as valid file types, so you can't add them to the exception list. I guess "apps" really does mean apps -- at least for now. I don't know if this is a GUI bug -- the pref pane doesn't think it can add a bundle but the underlying system would accept it if only it could be added -- or something more fundamental. Even if it does work, as things stand you'd still have to forget and then re-add each and every client. So you might want to wait for the "upcoming (Retrospect Client) release (which) will eliminate the uninstallation step and preserve your client settings." Nige
  6. No, this is what doesn't work for our situation. I had hopes, a few weeks ago, but it wasn't to be... RS Server knows which interface a client was added via, and only ever looks for the client on that interface. So a client registered when on the 183 address would only be backed up when it is on the 183 subnet, never when on the 45. Brilliant in many situations, like departmentally-segregated subnets/VLANs where clients don't move around, but no good for us. Nige
  7. Almost. Static IP is applied in the client's Network pane of System Prefs (or equivalent for Windows) and, after registering, it is reset to "Using DHCP", after which the client might appear on either subnet via the magic of DHCP offers and acks. So just a brief, temporary, static allocation while sitting at the client machine and installing the RS client.
Again, this isn't a security issue. Our core switches have aggregated connections to the Fortigate, which is our router and gateway to the outside. This gives us huge bandwidth along with redundancy and automatic fail-over if one of the core switches fails. The "unforeseen consequence" is that, like most routers, the Fortigate will not send a subnet broadcast out of the same interface it arrived on (broadcast storm prevention) -- and since all our subnets are on that same aggregated interface, a shout from the 45-server can be routed to every interface except the aggregated one containing it and the 183-client. Direct IP is fine, Bonjour etc works (apparently the "network control" portion of the multicast subnet is treated differently to RS's multicast address, which threw us for a while) -- it's only this one use-case that has caused problems.
There are ways round it, but they create more complication and/or other problems. For example, we could put each of the subnets onto its own VLAN: each subnet would then be a virtual interface on the aggregated interface, and the RS packets could be routed because the incoming and outgoing virtual interfaces would be different. But that could screw up building-wide printer and share discovery without introducing another layer of fixes, etc, etc. But understand that I am not a formal network guy :-)
The above is gleaned from my testing and discussion with the central networking team (which usually includes them saying "Of course, if this was a Cisco...") and a hasty read of the Fortigate manual and Cookbook, so some of my terminology may be off, although I hope the principles are understandable. Time for a proper course, I guess. So the TL;DR for the thread appears to be: "If you are ever in a situation where you have multiple subnets on a network and RS Server isn't seeing new clients outside of its own subnet, try registering the clients while they are on the server's subnet. They may then be available for backup on whichever subnet they subsequently find themselves on -- but monitor things closely!" Nige
  8. I obviously didn't explain this as clearly as I hoped. The 9-step list was purely to demonstrate that: If I set the client machine to be on the same subnet as the server and register it on the default interface via subnet discovery (steps 1, 2 and 3) then it doesn't matter if the client subsequently changes internal subnets (steps 4 and 5, 7 and 8) -- the server can still find it using subnet discovery and back it up (steps 6 and 9). But the client must be on the same subnet as the server for that initial discovery to happen. That is, there is a subtle difference between the discovery process used to initially register a client and that used to see if a previously registered client is available for backup. I don't know what it is, but it is enough to allow the Fortigate to route the traffic between subnets -- this was a routing-of-broadcast-packets issue, and nothing to do with any security policies. We only have to set a static IP on the client machine for initial registration, and then only if the client happened to pull a 183 address from the DHCP server. I can then set the client machine back to using DHCP and there's no more intervention required. A couple of extra steps in our usual setup process, so no biggie compared to wholesale network changes or "might work but not really recommended" routing kluges. I was at three successes from three attempts at the time of writing that, on different machines with different Mac OSs. I've now used the routine successfully on a dozen different machines, though no PCs as yet, so I'm reasonably confident this is a good work-around for our specific problem. And I'm going to test Remote Backups thoroughly, ready for when the work-around stops working... I don't know why this works, and while my inner nerd would love to delve deeper my outer pragmatist is happy to shrug and move on to other issues. Hopefully that explains it with a bit more clarity. Nige
  9. Jon, Tape needs a smooth, fast data stream to get both advertised performance and advertised capacity -- the tape always runs at a certain speed, and if the data arrives too slowly it either leaves gaps or stops and spools back then restarts, which also inevitably leaves gaps. So I think your "two interrelated problems" are just one -- data delivery to the drive. It might be that the data transfer is just too slow, but it also may be that it is too "spurty". I'd start by benchmarking the connection with small numbers of big files, doing a standard files-to-tape backup. Try one or more multi-gigabyte disk images or similar and, if they go through at better speeds than you are reporting above, the problem is likely with the Copy Backup and the way that process presents data to your tape drive. So if you do get good speeds with the big files, I'd consider a different way of off-siting. Sounds like your day-to-day restores will be done from the disk media set and the tapes are a backstop/archive and possibly compliance step, and there's no requirement to restore directly from them. So I'd back up the disk media sets' RDB files to tape instead, though that would mean that restoring any files from "archive" would mean first restoring all the RDB files to the disk array then restoring files from that "rebuilt" backup. Don't forget to back up your catalogs as well, or that "rebuilt" backup will have to have a Retrospect "Rebuild". The above isn't as crazy as it sounds -- for years we did similar with RS6, backing up clients to Internet Backup sets on disk then taking those backup files to tape, to mitigate speed issues for the tape. Nige
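The "benchmark with a few big files" suggestion can be approximated with a short script. This is a sketch only -- it times a plain file copy, which ignores Retrospect's own overheads, and the paths in the usage comment are hypothetical:

```python
import os
import shutil
import time

def copy_rate_mb_min(src, dst):
    """Time one large-file copy and report MB/min -- a crude stand-in
    for the 'small number of big files' benchmark suggested above."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    t0 = time.monotonic()
    shutil.copyfile(src, dst)
    elapsed_min = (time.monotonic() - t0) / 60
    return size_mb / elapsed_min

# Usage sketch (hypothetical paths): point it at a multi-gigabyte disk
# image on the source volume and compare the figure with the rate the
# tape drive reports during a real backup.
# rate = copy_rate_mb_min("/Volumes/Scratch/big.dmg", "/Volumes/Staging/big.dmg")
```

If the raw copy rate comfortably exceeds what the drive is seeing, the bottleneck is in how the backup process feeds the drive rather than in the link itself.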
  10. Aargh -- board ate my post (anyone else have to reset their password every time they want to log in?). Abbreviated version follows...
Hardly foul-ups. Many places run with fewer IPs than potentially connected devices -- think of your local coffee shop. In our case we've many staff with multiple devices, most of which are seldom connected. Rotation students who are only here one day a week, one week a month, or one month a quarter. Early starters/finishers who hardly overlap with the night owls. And so on. So we rarely go above 80% usage of our DHCP pool and many devices have the same IP day to day -- but potentially they could be on either subnet when RS comes a-knockin'.
The gateway upgrade is a vast improvement for 99% of our use-case. The only thing we've had a problem with is Retrospect, an unforeseen consequence that's only come to light since a change in backup policy -- we're returning to our old-style centralised backups for all machines after a flirtation with end-user backups to external HDs.
All of which is moot. It seems that Retrospect uses different methods for initial "client detection" and subsequent "availability discovery". Clients can be added to the server only if they are on the 45 subnet but, once added, they can be seen/managed/backed up when attached to the 183. I'm three from three (so far) with the following work flow:
1. Static the client machine to a 45 address
2. Install RS client via subnet broadcast discovery
3. Register client on server, set volumes, etc
4. Static the client to a 183 address
5. Restart client machine
6. Back up successfully
7. Static the client to a 45 address
8. Restart
9. Back up successfully
Even the restarts between subnet changes are unnecessary, at least on Macs -- although you initially get "Multicast port unavailable" in the client it can still be seen by the server, and that message clears after a few seconds anyway as IP bindings are sorted out.
I'm keeping Remote Backups in reserve, running tests just in case, but the above should be good enough for our needs. Thanks again for all the help, Nige
  11. Actually public space -- the IPs given above are just examples. While it wouldn't take too much work to find what the ranges really are, a little obfuscation on a public forum isn't a bad thing. But I apologise for not making that clear and so wasting your time. However, moving to completely private behind our NATing Fortigate is certainly an option. As is moving to IPv6, for which we are getting increasing pressure from central Networking. Both would be long-term projects and neither comes under the heading of "fun" for me, so I'll do what I can with what we've currently got. Thanks anyway, Nige
  12. ff:ff:ff:ff:ff:ff is a MAC address, the Layer 2 analog of Layer 3's -- it's an "ethernet broadcast" while the second is an IPv4 broadcast. That's the crux of the question -- will the RS client respond to that in the same way as the normal IPv4 broadcast? Wireshark shows that the forwarded packets retain their IPv4 headers, originating from the server ( and using UDP port 497 -- however their destination is, which may mess with the client (i.e. it's receiving a broadcast but not from the expected IPv4 broadcast address). Server to Fortigate looks good, Fortigate to subnet looks OK, I now need to Wireshark a client to see what it is getting. I'd do this at home. But here we have 450+ non-static devices and ~350 addresses in the IP pool. So, quite aside from the work involved, we simply won't fit. As I understand it, P/PKA merely obviates the need to provide a backup password during client install. It still requires the server to poll for the new client as usual (though that process can be automatic) and so will have the same problem. But I'll have another look in case the server address can be included so the client can notify it of its presence. This is Plan B (or probably F or G by now :-) ). Install the client normally, temporarily re-bind to a 45 address if necessary, add client to server. It appears that our sticking point is the initial add -- once the client is registered the server can detect it on either subnet. But I can't see why the mechanisms are different, I may be seeing an IP-caching artefact rather than a true detection, and this will need a lot of testing before I'm happy with it. From your follow-up: This might work. I had a quick look at Remote Backup for another problem, but it didn't help (no control over the other [private] network's settings) so didn't delve too deep.
If I can create sub-folders in the Remote Backup Clients folder and assign those to different Proactive scripts -- to maintain concurrent operations and allow different clients to use different backup sets -- it might be a work-around if we can't get things working "properly". Thanks for the idea! And finally... I'm kinda hoping Mayoff will stumble across this thread. Having been helped by Robin before when trying (and succeeding!) to subvert Internet Backups to do things they weren't meant to, I know he's The Man and isn't averse to handling weird situations like this. Thanks David, this has been a great help. I'm desperately trying to solve this without a complete network re-do because, at that stage, we'd also be looking at things like client network login -- and that would make Retrospect's USP of Proactive backups redundant and almost certainly push us to changing to Netvault or similar. And we don't want to do that! Nige
  13. My bad -- we're actually using Multi Server Premium v15.1.2.100 on Windows Server 2016 for this. Clients are Mac and Windows of various vintages. But, as far as I know, that shouldn't matter. The underlying mechanism for RS's subnet broadcasts has been the same for years (though it is handled differently at the OS level by Macs and PCs) and it is that which I am trying to get more info on. The server interface doesn't need adjusting, just "Default" with the 45 and 183 subnets defined -- and it has to be that way since each client can get either a 45- or 183-based DHCP-provided address when they connect to our network (using different interfaces for each subnet works for client discovery, but clients are then only backed up when they are on the same subnet they were discovered on). If it sounds a horrible mess -- it is! But it is like that for historic reasons, which we had no control over. We used to be OK because our network ran under a net mask and so RS broadcasts from a 45 covered that and the 183 (we have a third, unrelated, subnet but any client there is static-IPed and so reachable directly), but a gateway "upgrade" last year resulted in both physical and logical topology changes which included tightening the mask -- a good thing in general, but not for this specific... Central Network's guys are suggesting Layer 2 broadcast forwarding (rather than Layer 3 as described in that Wikipedia article), but I'll be chasing my tail if RS client doesn't respond to Layer 2. (Oh, and thanks David -- nice to recognise a name from the past!) Nige
  14. All, Trying to get subnet broadcasting working. We have 2 subnets, both on the same interface of a Fortigate IPS -- for the sake of argument, and Unicast is fine in both directions. Server sits on the 45-subnet and can broadcast-detect all clients on that subnet but no clients on the 183. We've set the second subnet definition on the server's default interface to be, giving a broadcast address of We've set up a static ARP entry and policy on the Fortigate so that anything from the server to goes to FF:FF:FF:FF:FF:FF, and the policy is showing traffic so it looks like the server's "shout" is at least getting that far. But I know nothing about the client's response. Does the server's "shout" include its IP address (if so, will the above hide that?) and is the client's response unicast? Does the client even respond to Layer 2 broadcast traffic, or does it require Layer 3? TIA to anyone with any answers, Nige
  15. Does your version of RS Server give access to the *Unix* Path condition? If so, try that with "begins with" and "/System/Library/Caches".
  16. Remember that each Snapshot is a "point in time". A file that was created once and never altered will "exist" in every snapshot from then on, but if you restore "every file" it will only be restored once in its original, never-changed form. Restore 10 snapshots and you get 10 identical copies of the 1 file in the backup :-) Your 4800:14,000 ratio isn't unreasonable, and it will depend on how often your clients create, edit and delete files. But, IMO, your "Find" approach is the correct one, assuming that's "Restore" and then "Search for files in selected media sets" -- it's the one I always used in previous versions of Retrospect. Try it for a sub-folder that you know contains changing files and you should find that edited versions are restored with incrementing numbers in the filename.
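The "point in time" behaviour can be modelled in a few lines -- a toy illustration of the idea, not Retrospect's actual matching logic, with invented file names:

```python
# Toy model: each snapshot lists every file present at that moment,
# but the backup stores only one copy of each distinct version.
snapshots = [
    {"report.doc": "v1", "notes.txt": "v1"},  # Monday
    {"report.doc": "v1", "notes.txt": "v2"},  # Tuesday: notes.txt edited
    {"report.doc": "v1", "notes.txt": "v3"},  # Wednesday: edited again
]

stored = {(name, ver) for snap in snapshots for name, ver in snap.items()}
# report.doc "exists" in all 3 snapshots but is stored exactly once:
print(sum(1 for name, ver in stored if name == "report.doc"))  # 1
print(len(stored))  # 4 distinct versions across both files
```

Restoring all three snapshots would hand back report.doc three times, yet the set only ever held the one copy -- which is why a snapshot count can dwarf the stored-file count without anything being wrong.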
  17. All, Another variant on the "-1,124 ( invalid NTFS data)" error. Anyone seen this before? Setup: Xserve running OS X 10.7.5 and Retrospect Server 11.5.3 (103) backing up a variety of Mac and Windows clients to a Disk Media Set stored on an attached Xsan volume. Using both Scheduled and Proactive scripts, all to the same media set. Everything ran fine for the first week, but now the Mac clients are all throwing "!Trouble matching <client> to <catalog>, error -1,124 ( invalid NTFS data)". Windows clients are backing up as before. The Xserve and Xsan both pass volume consistency checks, as do all the clients I've checked. Failing Mac clients can also be backed up to a new Disk Media Set on the same SAN without problem and without re-registering with Retrospect or even a restart. The only thing I can think of that changed over the weekend was that Retrospect bumped into what I assume is an 8TB volume limit for its Disk Media Set function -- despite more than 50TB of free space, there was a "New Media" request which was satisfied by simply pointing it to the SAN and letting it create a second directory. I can't believe that Mac clients can't cope with multi-disk Disk Media Sets -- that would be all over the Forum! And it's also something that, surely, the server and not the client mediates. So what *is* going on here? Even as I type:
+ Normal backup using Daytime at 07/04/2015 13:39:11 (Activity Thread 1)
To Backup Set 2015...
- 07/04/2015 13:39:11: Copying Users on <Mac-Computer1>
Using Instant Scan
!Trouble matching Users on <Mac-Computer1> to 2015, error -1,124 ( invalid NTFS data)
+ Normal backup using Daytime at 07/04/2015 13:51:29 (Activity Thread 2)
To Backup Set 2015...
- 07/04/2015 13:51:29: Copying Users on <Windows-Computer1>
07/04/2015 14:17:33: Snapshot stored, 33.9 MB
07/04/2015 14:17:38: Comparing Users on <Windows-Computer1>
07/04/2015 14:17:56: Execution completed successfully
Completed: 495 files, 350.7 MB
Performance: 637.6 MB/minute (429.4 copy, 1,315.1 compare)
Duration: 00:26:27 (00:25:20 idle/loading/preparing)
07/04/2015 14:17:59: Script "Daytime" completed successfully
+ Normal backup using Daytime at 07/04/2015 14:19:09 (Activity Thread 2)
To Backup Set 2015...
- 07/04/2015 14:19:09: Copying Users on <Mac-Computer2>
!Trouble matching Users on <Mac-Computer2> to 2015, error -1,124 ( invalid NTFS data)
(The change in Activity Thread number is me triggering a schedule manually to instantly test another previously-failed machine with a fresh set.)
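If you're triaging a longer log, the failing sources can be pulled out programmatically. A sketch (Python) against a cut-down sample modelled on the excerpt above:

```python
import re

# Cut-down sample modelled on the Retrospect operations-log excerpt above.
log = """\
- 07/04/2015 13:39:11: Copying Users on Mac-Computer1
!Trouble matching Users on Mac-Computer1 to 2015, error -1,124 ( invalid NTFS data)
- 07/04/2015 13:51:29: Copying Users on Windows-Computer1
07/04/2015 14:17:56: Execution completed successfully
"""

# Capture the source being matched and the error code from each failure line.
pattern = re.compile(r"!Trouble matching (.+?) to .*error (-[\d,]+)")
failures = [(m.group(1), m.group(2))
            for m in map(pattern.search, log.splitlines()) if m]
print(failures)
```

Run over a full log, the resulting list shows at a glance whether every -1,124 belongs to a Mac client, which is the pattern the post describes.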
  18. And tried. And -- wow! Same tests as above:
Local restore -- 5.5GB/min
AFP restore -- 2.6GB/min
Client restore -- 3GB/min
Nice one!
  19. Applied for. Further data points in the meantime -- again, all machines running OS X 10.7.5, client version 6.3.029. Same 14GB restore every time, always to a new folder.
Restore to a local disk -- 5.3GB/min
Restore to client as a network AFP share -- 1.1GB/min
Restore to client as a Retrospect client -- 20MB/min
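For scale, those three figures translate into wall-clock time for the same 14GB restore -- a quick back-of-envelope in Python (taking 1GB as 1024MB):

```python
# Time-to-restore the same 14 GB test set at each reported rate
# (1 GB taken as 1024 MB throughout).
def minutes(gb, rate_mb_min):
    return gb * 1024 / rate_mb_min

rates_mb_min = {
    "local disk": 5.3 * 1024,  # 5.3 GB/min
    "AFP share": 1.1 * 1024,   # 1.1 GB/min
    "RS client": 20.0,         # 20 MB/min
}
for dest, rate in rates_mb_min.items():
    print(f"{dest}: {minutes(14, rate):.1f} min")
```

The Retrospect-client path works out to roughly twelve hours for a job the local restore finishes in under three minutes, which is why the discrepancy was worth a support ticket.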
  20. Adding a "me too". Retrospect version, client either 10.5.0 (145) or 6.3.029, all machines running OS X 10.7.5 Backup whizzed along at 748.8 MB/min but a test restore to a new folder (so no match decisions) of a selected folder of 14 items/14GB is only hitting 20 MB/min Let me know what other tests you want, Robin. This is a test setup, so anything up to and including reinitialising both server and client is yours for the asking. But hardware limits the server OS, I'm afraid.
  21. Thanks guys. And Robin -- good to know you're still around. Your help via the old RS mailing list was invaluable, so I'm already feeling better about upgrading.
  22. Still evaluating, but starting to plan ahead. Are there any coded limits to RS 10 that a new user should be aware of? I'm thinking back to the "old RS 6 days" and of hitting the surprise 32k file Internet backup set limit... So, aside from obvious licensing issues, anything else to watch out for? Is there a maximum catalog size, a cap on the number of client tags, rule length or nesting issues? Is a Disk Media Set truly limited only by the amount of storage available, and there's no max component-file count? Sources in a script?
  23. Starting off with an easy one for the assembled great'n'good before I start asking about things like subnet scanning for clients... So: A client gets backed up. That client can then request a restore. How does the client find the server?