Everything posted by Nigel Smith

  1. Nigel Smith

    can't link to

    I'm guessing this is a FileMaker Server machine running under OS X... "Content" files uploaded to your database server and linked rather than embedded -- in this case a PDF -- are backed up once by FileMaker's internal scripts and then, subsequently, linked to so that you don't waste disk space. I *think* the latest backup has the real file and the previous backups contain links, but that should be easy enough for you to check. Check the timestamps in the paths above and you'll see that the older "instance" of the file can't be linked to the newer. So either you haven't duplicated that later "instance" or, for some reason, RS can't recreate the link in the older FMS backup directory. I'd worry about the first and honestly not care about the second, since if I had to roll back to an earlier version of my database it wouldn't be that much extra work to manually copy "content" files into the right place. Which is it, and what are you duplicating to? Nige
  2. It's more likely that Retrospect doesn't get direct-enough access to the drive hardware for this to work. Worth a try to save some money, but I hope I didn't encourage you to waste too much time on it. Options: Get a compatible external drive Replace the internal drive in the Mini (sample iFixit teardown here) Do the restores on another Mac on which you can install or migrate Retrospect But I'm not sure any of that will help. If I'm getting the above right: Retrospect on the Mini works fine creating then restoring from a new DVD It doesn't work for 2001 (Sets 1 and 2) (CD-Rs) It does work for 2003 (Set 3) (CD-Rs) It doesn't work for anything more recent than 2003 (CD-Rs?) ...which implies it is a problem with the media rather than the drive -- I'm not sure that e.g. an alignment problem would cause failure on some but not all CDs. CD-Rs can degrade surprisingly quickly in the real world, unless you've used expensive "archival grade" media, and retrieving data from 10-15+ year-old disks will always be a dodgy proposition. I never used RS much with optical media, so hopefully David and Lennart will chime in here: Could you use Disk Utility to image the CD-R, then burn that image off to another disk that could be read by RS? I guess there's a slim chance that the OS's disk access routines are more robust/forgiving than RS's, allowing you to recreate a CD that's readable by RS. Nige
  3. To add to the above -- is this test using lots of smaller files or fewer larger ones? I'd benchmark using something like: Current data set using disk-to-disk backup Current data set using disk-to-tape backup (which you've already done) 1TB tarball, disk-to-disk 1TB tarball, disk-to-tape ...and I wouldn't be surprised if you end up using some form of disk-to-disk-to-tape if you want to maximise tape-write speed -- at least until the API changes David mentioned come down the pipe. But -- do you want to, if it's at the expense of added complexity? While it's nice to go as fast as you can go, if you're completing during your backup window and not wasting too much tape (i.e. the drive's variable-speed operation copes well enough to reduce "shoe-shining" either completely or to an acceptable amount), then having another step to manage may not be worth the hassle. Nige
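If it helps, the tarball step of that benchmark is trivial to script. A Python sketch -- paths and file names below are just placeholders, so point it at (a copy of) your real data set before timing anything:

```python
import os
import tarfile
import tempfile

# Build a small sample "data set" (stand-in for your real source directory).
src = tempfile.mkdtemp()
with open(os.path.join(src, "file1.txt"), "w") as f:
    f.write("sample data\n")

# Roll the whole tree into a single tar file -- the "one big file" to feed
# the disk-to-disk and disk-to-tape timing runs.
with tarfile.open("/tmp/dataset.tar", "w") as tar:
    tar.add(src, arcname="dataset")
```

Timing a copy of the single tarball versus the original many-small-files tree should show whether per-file overhead is what's starving the tape drive.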
  4. Doug, Have you got access to another Mac with a CD/DVD drive and a Firewire connection? Doesn't matter what it is -- you may even be able to use that old G4 laptop, depending on what died. What you might be able to do is: Start the other Mac up in Target Disk mode (hold down the T key at startup) Connect it to your Mac Mini with Firewire Both the HD *and* the optical drive will then be available to the Mac Mini as external drives Then put the disks into the other Mac's optical drive when prompted by RS on the Mini. Might get you round any drive problems without requiring a trip to eBay. Nige
  5. RS reporting on the Mac's Console knocks spots off the Windows version. Particularly useful, I find, is the pre-rolled "No Backup in 7 Days". It would be even more useful if I could take that report and use it to generate emails to those affected users, including standard advice to restart their machine, plug in to the Ethernet rather than use wireless, etc. Anyone know of a way to extract that data from the report, ideally scriptable? I can kluge something together by printing it to PDF, using Acrobat to save as text, then parsing the resulting file for client names and matching them against our user database -- but that amount of interaction means it won't get done very often... Would it be worth investigating Data Hooks to achieve this? Can I access the Retrospect API from outside the Console/Dashboard? We used to use the equivalent of today's script-hooks to send the summary of each backup directly into a database, and generate the emails from searching that, which could be another option. Anybody doing something similar, before I reinvent this particular wheel?
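As a rough illustration of the parsing step: the report layout, the client-name pattern, and the {name: email} lookup in this Python sketch are all assumptions, so the regex will need adjusting to whatever your exported text actually looks like.

```python
import re

# Assumed layout: one "<client name>  <last backup date>" pair per line of
# the text exported via print-to-PDF then save-as-text. Adjust to taste.
REPORT_LINE = re.compile(r"^(?P<client>\S[^\t]*?)\s{2,}(?P<last>\d{2}/\d{2}/\d{4})")

def clients_needing_nag(report_text, user_db):
    """Match report client names against a {client_name: email} lookup."""
    hits = []
    for line in report_text.splitlines():
        m = REPORT_LINE.match(line.strip())
        if m and m.group("client") in user_db:
            hits.append((m.group("client"), user_db[m.group("client")]))
    return hits
```

The resulting list of (client, email) pairs could then feed a templated mail-out, which is the bit that's genuinely scriptable however the names are extracted.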
  6. On my MacBook Mojave test machine, screen sleeping/CPU working works just the same as always -- tick the box to "Prevent computer from sleeping..." in Energy Saver's "Power Adapter" tab, leave "Wake for network access" and "Enable Power Nap..." unticked (since we don't need them), and the screen will eventually go black but the computer is still available over ssh, sharing, etc. What machine(s) are you using? I might have something around I can try and duplicate on.
  7. Nigel Smith

    Can't access volume Error -1102

    Earlier you said the backup machine was mounting the volume read-only, now you say it's the only one that's write-exclusive. It probably doesn't matter, and it may just be a slip of the keyboard, but you might want to check in case that's what has changed and has started causing problems (I don't see how it would, but I always worry when there's an inconsistency...). Re: Permissions -- you've got the SANmp volume access permissions as you mention, but you also have the usual file/folder permissions. I was just trying to make sure that the backup machine can mount the disk in a way that gives it both uninterrupted (i.e. no other machine has SANmp exclusive-write) and unhindered (i.e. the user account Retrospect is running as has at least read access to all the data on the volume, including metadata) access. I'm assuming that SANmp log-in controls how the volume is mounted while the OS separately manages file permissions but, never having used SANmp, that's a big assumption! But I think you are right. It's erase/restore time, if only because that's the first thing SNS will tell you to do if you contact their support. Nige
  8. Nigel Smith

    Can't access volume Error -1102

    So do you do regular erase/restore maintenance, as SANmp recommend? If not, that would be my first fix. If you do, and have done so in the last 6 months, I'd be inclined to not bother until the next scheduled erase -- it sounds like you are backing up the actual data even if you aren't getting the state data, and that should be easy enough to rebuild. But do some test backups first! Speaking of which, further up you mentioned a restore test and "So, I went to restore files and folders I was able to select the disk but it had a 'yellow exclamation sign' on the disk". Possibly a silly question, but were you trying to restore to the SAN volume? The one that's mounted read-only, so you can't write the restore to? Other random thoughts, based on no knowledge of SANmp at all... Have you got a client on your network that intermittently mounts the problem volume in "write-exclusive" mode? That might cause a similar problem -- schedule your backups for when that client is not in use. I believe you "sign in" with SANmp -- does this also grant permissions? Does the backup server's sign-in ID have full read-only access to that volume, including all metadata? The problem may have started when someone set special access permissions on a project directory or similar... Nige
  9. Nigel Smith

    Can't access volume Error -1102

    It sounds like you are using SANmp to mount the volume on the server OS, so it shows up as a "Local" volume to Retrospect Server. As such, it will be available whenever mounted and can't be removed/re-added like you can a client. It appears you have a problem with that particular volume on your SAN. Can you use Retrospect's catalog/logs to narrow that down to a specific file or folder if you try a new backup? If not, it's time for a binary search -- back up the first half of the folders at the top level of the volume: if that fails the problem is in there, while if it succeeds the problem is in the second half of the list. Do the same with the first half of the "problem" section, and repeat until you've isolated the problem file or folder. Is the problem file/directory important? If not, I'd simply make sure my backup (apart from that file/directory) was good, then erase the volume via SANmp Admin then restore to it. Apparently you should be doing this every 6-12 months anyway (!) as preventative maintenance -- more details here. If it is important then I suggest you contact SANmp support for suggestions -- whilst I normally have faith in DiskWarrior, the extra layer of abstraction/mis-direction introduced by SANmp may be confusing things... Nige
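The halving procedure is mechanical enough to sketch in Python. Here `backs_up_ok` stands in for "run a trial backup of just these folders and see if it succeeds", and the folder names in the example are made up:

```python
def find_bad_folder(folders, backs_up_ok):
    """Narrow a list of top-level folders down to the one that fails.

    backs_up_ok(subset) -> True if a trial backup of that subset succeeds.
    Assumes exactly one folder is causing the failure.
    """
    while len(folders) > 1:
        half = len(folders) // 2
        first, second = folders[:half], folders[half:]
        # If the first half backs up cleanly, the problem is in the second.
        folders = second if backs_up_ok(first) else first
    return folders[0]
```

With N top-level folders this takes about log2(N) trial backups, which is why it beats checking folders one at a time.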
  10. Nigel Smith

    printing the content of a media set

    There's a way... but you won't want to use it. Start a Restore job and select the "Search for files..." option, "Continue" Leave the search as the default "Any" and a blank filename field, select the set you want to print, "Continue" Select a restore destination (don't worry about disk space, you won't be restoring), "Continue" After Retrospect has finished searching the sessions, click the little "Preview" button alongside the set details Go through the preview list and click on every disclosure triangle which might have contents you want to print out Select "Print..." from the File menu (Recommended) Cancel the job once you grok the number of pages... Even for a subset of files (I've done it for someone who thought "the name might include 'December' or something") it's a horrible job. What are you trying to achieve, and why? There may be another way. For example, if you want a hard copy of the files backed up from a client you can: Go to "Past Backups" Find the client's most recent backup and click the "Browse" button associated with that Make sure the "Only show files..." box is not checked Click "Save..." ...and you'll get a CSV file that you can further process and/or print. You could do that for each client in the backup set, which may be both quicker than the above and closer to what you actually require. Nige
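If you go the CSV route, the post-processing is easy to script. A Python sketch -- the "Name" and "Size" column headers here are guesses at what Retrospect exports, so check one saved file first and adjust:

```python
import csv
import io

def printable_file_list(csv_text):
    """Reduce an exported backup listing to 'name<TAB>size' lines for printing.

    Column names ("Name", "Size") are assumptions -- inspect a real export
    from the "Save..." button and change them to match.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return ["{}\t{}".format(row["Name"], row["Size"]) for row in reader]
```

Run it per client export, concatenate the results, and you have one printable list for the whole set.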
  11. Nigel Smith

    Full Access Mojave

    Should be this screenshot -- they've simply linked "engine" twice rather than "engine" then "client". Henry, have you tried backing up the updated client yet? I still get a -519 error ("Can't track volumes"), but I'm assuming that's because I've only updated the client to 15.6, not the server. Nige
  12. Nigel Smith

    Full Access Mojave

    There's clearly been a change in Mojave's behaviour since Retrospect published their instructions. I've had a google in case there's a way of adding non-apps via the command line, but no luck -- Apple's included tccutil allows you to reset privacy settings for apps, but not add or remove apps (or control panels etc). As it stands, we're waiting on updates -- either from Apple, to re-enable the old "add a control panel" behaviour, or from Retrospect with an "app-ified" client. We've already told our users to not update to Mojave without checking with us first so we can make sure they have other backup options in place (luckily they all will because we haven't started our RS rollout yet, but it's a good chance to check they are actually using those options!). Nige
  13. Nigel Smith

    Full Access Mojave

    "Security & Privacy" isn't accepting .prefPanes (Retrospect Client) or bundles (InstantScan) as valid file types, so you can't add them to the exception list. I guess "apps" really does mean apps -- at least for now. I don't know if this is a GUI bug -- the pref pane doesn't think it can add a bundle but the underlying system would accept it if only it could be added -- or something more fundamental. Even if it does work, as things stand you'd still have to forget and then re-add each and every client. So you might want to wait for the "upcoming (Retrospect Client) release (which) will eliminate the uninstallation step and preserve your client settings." Nige
  14. Nigel Smith

    Subnet Broadcasting

    All, Trying to get subnet broadcasting working. We have 2 subnets, both on the same interface of a Fortigate IPS -- for sake of argument, 192.168.45.0/24 and 192.168.183.0/25. Unicast is fine in both directions. Server sits on the 45-subnet and can broadcast-detect all clients on that subnet but no clients on the 183. We've set the second subnet definition on the server's default interface to be 192.168.183.12/30, giving a broadcast address of 192.168.183.15. We've set up a static ARP entry and policy on the Fortigate so that anything from the server to 192.168.183.15 goes to FF:FF:FF:FF:FF:FF, and the policy is showing traffic so it looks like the server's "shout" is at least getting that far. But I know nothing about the client's response. Does the server's "shout" include its IP address (if so, will the above hide that?) and is the client's response unicast? Does the client even respond to Layer 2 broadcast traffic, or does it require Layer 3? TIA to anyone with any answers, Nige
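For anyone checking the arithmetic, Python's stdlib ipaddress module confirms the /30 numbers above:

```python
import ipaddress

# The second subnet definition from above: host 192.168.183.12 with a /30 mask.
# A /30 spans four addresses (.12-.15): network .12, two hosts, broadcast .15.
net = ipaddress.ip_network("192.168.183.12/30", strict=False)
print(net.broadcast_address)  # 192.168.183.15 -- the address the "shout" targets
```

The same module is handy for sanity-checking any candidate subnet definition before committing it to the server's interface settings.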
  15. Nigel Smith

    Subnet Broadcasting

    No, this is what doesn't work for our situation. I had hopes, a few weeks ago, but it wasn't to be... RS Server knows which interface a client was added via, and only ever looks for the client on that interface. So a client registered when on the 183 address would only be backed up when it is on the 183 subnet, never when on the 45. Brilliant in many situations, like departmentally-segregated subnets/VLANs where clients don't move around, but no good for us. Nige
  16. Nigel Smith

    Subnet Broadcasting

    Almost. Static IP is applied in the client's Network pane of System Prefs (or equivalent for Windows) and, after registering, it is reset to "Using DHCP", after which it might appear on either subnet via the magic of DHCP offers and acks. So just a brief, temporary, static allocation while sitting at the client machine and installing the RS client. Again, this isn't a security issue. Our core switches have aggregated connections to the Fortigate, which is our router and gateway to the outside. This gives us huge bandwidth along with redundancy and automatic fail-over if one of the core switches fails. The "unforeseen consequence" is that, like most routers, the Fortigate will not send a subnet broadcast out of the same interface it arrived on (broadcast storm prevention) -- and since all our subnets are on that same aggregated interface, a shout from the 45-server can be routed to every interface except the aggregated one containing it and the 183-client. Direct IP is fine, Bonjour etc works (apparently the "network control" portion of the multicast subnet is treated differently to RS's multicast address, which threw us for a while) -- it's only this one use-case that has caused problems. There are ways round it, but they create more complication and/or other problems. For example, we could put each of the subnets onto their own VLAN, because each of the subnets would then be a virtual interface on the aggregated interface and the RS packets could be routed since the incoming and outgoing virtual interfaces would be different. But that could screw up building-wide printer and share discovery without introducing another layer of fixes, etc, etc. But understand that I am not a formal network guy!
The above is gleaned from my testing and discussion with the central networking team (which usually includes them saying "Of course, if this was a Cisco...") and a hasty read of the Fortigate manual and Cookbook, so some of my terminology may be off although I hope the principles are understandable. Time for a proper course, I guess. So the TL;DR for the thread appears to be: "If you are ever in a situation where you have multiple subnets on a network and RS Server isn't seeing new clients outside of its own subnet, try registering the clients while they are on the server's subnet. They may then be available for backing up whichever subnet they subsequently find themselves on -- but monitor things closely!" Nige
  17. Nigel Smith

    Subnet Broadcasting

    I obviously didn't explain this as clearly as I hoped. The 9-step list was purely to demonstrate that: If I set the client machine to be on the same subnet as the server and register it on the default interface via subnet discovery (steps 1, 2 and 3) then it doesn't matter if the client subsequently changes internal subnets (steps 4 and 5, 7 and 8) -- the server can still find it using subnet discovery and back it up (steps 6 and 9). But the client must be on the same subnet as the server for that initial discovery to happen. I.e. there is a subtle difference between the discovery process used to initially register a client and that used to see if a previously registered client is available for backup. I don't know what it is, but it is enough to allow the Fortigate to route the traffic between subnets -- this was a routing-of-broadcast-packets issue, and nothing to do with any security policies. We only have to set a static IP on the client machine for initial registration, and then only if the client happened to pull a 183 address from the DHCP server. I can then set the client machine back to using DHCP and there's no more intervention required. A couple of extra steps in our usual setup process, so no biggie compared to wholesale network changes or "might work but not really recommended" routing kluges. I was at three successes from three attempts at the time of writing that, different machines with different Mac OSs. I've now used the routine successfully on a dozen different machines, though no PCs as yet, so I'm reasonably confident this is a good work-around for our specific problem. And I'm going to test Remote Backups thoroughly, ready for when the work-around stops working... I don't know why this works, and while my inner nerd would love to delve deeper my outer pragmatist is happy to shrug and move on to other issues. Hopefully that explains it with a bit more clarity. Nige
  18. Jon, Tape needs a smooth, fast, data stream to get both advertised performance and advertised capacity -- the tape always runs at a certain speed, and if the data arrives too slowly it either leaves gaps or stops and spools back then restarts, which also inevitably leaves gaps. So I think your "two interrelated problems" are just one -- data delivery to the drive. It might be that the data transfer is just too slow, but it may also be that it is too "spurty". I'd start by benchmarking the connection with small numbers of big files, doing a standard files-to-tape backup. Try one or more multi-gigabyte disk images or similar and, if they go through at better speeds than you are reporting above, the problem is likely with the Copy Backup and the way that process presents data to your tape drive. So if you do get good speeds with the big files, I'd consider a different way of off-siting. Sounds like your day-to-day restores will be done from the disk media set and the tapes are a backstop/archive and possibly compliance step, and there's no requirement to restore directly from them. So I'd back up to tape the disk media sets' RDB files instead, though that would mean that restoring any files from "archive" would mean first restoring all the RDB files to the disk array then restoring files from that "rebuilt" backup. Don't forget to back up your catalogs as well, or that "rebuilt" backup will need a Retrospect "Rebuild". The above isn't as crazy as it sounds -- for years we did similar with RS6, backing up clients to Internet Backup sets on disk then taking those backup files to tape, to mitigate speed issues for the tape. Nige
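For the "small numbers of big files" benchmark, a test file is easy to generate. A Python sketch -- the path is an example, and SIZE is kept small here so it runs quickly; scale it up to several GiB for a meaningful tape test:

```python
# Create one large file of zeros to benchmark "few big files" throughput.
# 16 MiB here for speed -- for a real tape benchmark use several GiB,
# e.g. SIZE = 4 * 1024**3, and write in chunks to keep memory use sane.
SIZE = 16 * 1024 * 1024

with open("/tmp/bigtest.img", "wb") as f:
    f.write(b"\0" * SIZE)
```

Time a files-to-tape run of just this file and compare the MB/minute figure against your many-small-files run; a big gap points at per-file overhead rather than raw link speed.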
  19. Nigel Smith

    Subnet Broadcasting

    Aargh -- board ate my post (anyone else have to reset their password every time they want to log in?). Abbreviated version follows... Hardly foul-ups. Many places run with fewer IPs than potentially connected devices -- think of your local coffee shop. In our case we've many staff with multiple devices, most of which are seldom connected. Rotation students who are only here one day a week, one week a month, or one month a quarter. Early starters/finishers who hardly overlap with the night owls. And so on. So we rarely go above 80% usage of our DHCP pool and many devices have the same IP day to day -- but potentially they could be on either subnet when RS comes a-knockin'. The gateway upgrade is a vast improvement for 99% of our use-case. The only thing we've had a problem with is Retrospect, an unforeseen consequence that's only come to light since a change in backup policy -- we're returning to our old-style centralised backups for all machines after a flirtation with end-user backups to external HDs. All of which is moot. It seems that Retrospect uses different methods for initial "client detection" and subsequent "availability discovery". Clients can be added to the server only if they are on the 45 subnet but, once added, they can be seen/managed/backed up when attached to the 183. I'm three from three (so far) with the following workflow: 1) Static the client machine to a 45 address; 2) Install the RS client; 3) Register the client on the server via subnet broadcast discovery, set volumes, etc; 4) Static the client to a 183 address; 5) Restart the client machine; 6) Backup successfully; 7) Static the client to a 45 address; 8) Restart; 9) Backup successfully. Even the restarts between subnet changes are unnecessary, at least on Macs -- although you initially get "Multicast port unavailable" in the client it can still be seen by the server, and that message clears after a few seconds anyway as IP bindings are sorted out. 
I'm keeping Remote Backups in reserve, running tests just in case, but the above should be good enough for our needs. Thanks again for all the help, Nige
  20. Nigel Smith

    Subnet Broadcasting

    Actually public space -- the IPs given above are just examples. While it wouldn't take too much work to find what the ranges really are, a little obfuscation on a public forum isn't a bad thing. But I apologise for not making that clear and so wasting your time. However, moving to completely private behind our NATing Fortigate is certainly an option. As is moving to IPv6, for which we are getting increasing pressure from central Networking. Both would be long-term projects and neither comes under the heading of "fun" for me, so I'll do what I can with what we've currently got. Thanks anyway, Nige
  21. Nigel Smith

    Subnet Broadcasting

    ff:ff:ff:ff:ff:ff is a MAC address, the Layer 2 analog of Layer 3's 192.168.183.255 -- the first is an "Ethernet broadcast" while the second is an IPv4 broadcast. That's the crux of the question -- will the RS client respond to that in the same way as the normal IPv4 broadcast? Wireshark shows that the forwarded packets retain their IPv4 headers, originating from the server (192.168.45.31) and using UDP port 497 -- however their destination is 192.168.183.15, which may mess with the client (i.e. it's receiving a broadcast, but not from the expected IPv4 broadcast address). Server to Fortigate looks good, Fortigate to subnet looks OK, I now need to Wireshark a client to see what it is getting. I'd do this at home. But here we have 450+ non-static devices and ~350 addresses in the IP pool. So, quite aside from the work involved, we simply won't fit. As I understand it, P/PKA merely obviates the need to provide a backup password during client install. It still requires the server to poll for the new client as usual (though that process can be automatic) and so will have the same problem. But I'll have another look in case the server address can be included so the client can notify it of its presence. This is Plan B (or probably F or G by now). Install the client normally, temporarily re-bind to a 45 address if necessary, add the client to the server. It appears that our sticking point is the initial add -- once the client is registered the server can detect it on either subnet. But I can't see why the mechanisms are different, I may be seeing an IP-caching artefact rather than a true detection, and this will need a lot of testing before I'm happy with it. From your follow-up: This might work. I had a quick look at Remote Backup for another problem, but it didn't help (no control over the other [private] network's settings) so didn't delve too deep. 
If I can create sub-folders in the Remote Backup Clients folder and assign those to different Proactive scripts -- to maintain concurrent operations and allow different clients to use different backup sets -- it might be a work-round if we can't get things working "properly". Thanks for the idea! And finally... I'm kinda hoping Mayoff will stumble across this thread. Having been helped by Robin before when trying (and succeeding!) to subvert Internet Backups to do things they weren't meant to, I know he's The Man and isn't averse to handling weird situations like this. Thanks David, this has been a great help. I'm desperately trying to solve this without a complete network re-do because, at that stage, we'd also be looking at things like client network login -- and that would make Retrospect's USP of Proactive backups redundant and almost certainly push us to changing to Netvault or similar. And we don't want to do that! Nige
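For the Wireshark leg-work, a minimal probe helper may save some time. This Python sketch is my own guess at a useful test, not anything Retrospect-official -- only the UDP port number (497) comes from the thread, and the payload is deliberately left empty since the real discovery packet format isn't documented here:

```python
import ipaddress
import socket

RETROSPECT_PORT = 497  # UDP port the discovery "shout" uses, per the capture

def probe_target(cidr):
    """Return the (broadcast_address, port) a discovery shout would go to."""
    net = ipaddress.ip_network(cidr, strict=False)
    return (str(net.broadcast_address), RETROSPECT_PORT)

def send_probe(cidr, payload=b""):
    """Fire a bare UDP datagram at the subnet's broadcast address.

    Hypothetical probe only -- useful for confirming with Wireshark that
    broadcast traffic to port 497 actually reaches a client on that subnet.
    """
    addr = probe_target(cidr)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, addr)
```

Watching the client side while running `send_probe("192.168.183.0/25")` would at least separate "the packet never arrives" from "the client ignores it".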
  22. Nigel Smith

    Subnet Broadcasting

    My bad -- we're actually using Multi Server Premium v15.1.2.100 on Windows Server 2016 for this. Clients are Mac and Windows of various vintages. But, as far as I know, that shouldn't matter. The underlying mechanism for RS's subnet broadcasts has been the same for years (though it is handled differently at the OS level by Macs and PCs) and it is that which I am trying to get more info on. The server interface doesn't need adjusting, just "Default" with the 45 and 183 subnets defined -- and it has to be that way since each client can get either a 45- or 183-based DHCP-provided address when they connect to our network (using different interfaces for each subnet works for client discovery, but clients are then only backed up when they are on the same subnet they were discovered on). If it sounds a horrible mess -- it is! But it is like that for historic reasons, which we had no control over. We used to be OK because our network ran under a 255.255.0.0 net mask and so RS broadcasts from a 45 covered that and the 183 (we have a third, unrelated, subnet but any client there is static-IPed and so reachable directly), but a gateway "upgrade" last year resulted in both physical and logical topology changes which included tightening the mask -- a good thing in general, but not for this specific... Central Network's guys are suggesting Layer 2 broadcast forwarding (rather than Layer 3 as described in that Wikipedia article), but I'll be chasing my tail if the RS client doesn't respond to Layer 2. (Oh, and thanks David -- nice to recognise a name from the past!) Nige
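The mask-tightening point is easy to see with Python's ipaddress module (addresses are the example ones from earlier in the thread):

```python
import ipaddress

client = ipaddress.ip_address("192.168.183.7")  # example 183-subnet client

# Under the old 255.255.0.0 mask, one network covered both subnets, so a
# broadcast from a 45-address host reached the 183 clients too...
print(client in ipaddress.ip_network("192.168.0.0/16"))   # True
# ...but after tightening to /24, the server's subnet no longer contains it,
# and the broadcast has to be routed between subnets instead.
print(client in ipaddress.ip_network("192.168.45.0/24"))  # False
```

That's the whole story of why the setup worked before the gateway "upgrade" and stopped afterwards.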
  23. Nigel Smith

    Excluding OS X path in Server 7.7

    Does your version of RS Server give access to the *Unix* Path condition? If so, try that with "begins with" and "/System/Library/Caches".
  24. Remember that each Snapshot is a "point in time". A file that was created once and never altered will "exist" in every snapshot from then on, but if you restore "every file" it will only be restored once in its original, never-changed form. Restore 10 snapshots and you get 10 identical copies of the 1 file in the backup :-) Your 4,800:14,000 ratio isn't unreasonable, and it will depend on how often your clients create, edit and delete files. But, IMO, your "Find" approach is the correct one, assuming that's "Restore" and then "Search for files in selected media sets" -- it's the one I always used in previous versions of Retrospect. Try it for a sub-folder that you know contains changing files and you should find that edited versions are restored with incrementing numbers in the filename.
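The incrementing-filename behaviour can be sketched like this -- note the exact suffix format Retrospect uses is from memory, so treat the naming scheme in this Python sketch as an assumption:

```python
import os

def restored_names(path, n_versions):
    """Names given to n_versions of one file restored to the same folder.

    Assumed scheme: first copy keeps its name, later copies get -1, -2, ...
    inserted before the extension.
    """
    root, ext = os.path.splitext(path)
    names = [path]
    names += ["{}-{}{}".format(root, i, ext) for i in range(1, n_versions)]
    return names
```

So three edited versions of one document come back as three distinct files, which is why a folder of frequently edited files restores "bigger" than it looks.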
  25. All, Another variant on the "-1,124 ( invalid NTFS data)" error. Anyone seen this before? Setup: Xserve running OS X 10.7.5 and Retrospect Server 11.5.3 (103) backing up a variety of Mac and Windows clients to a Disk Media Set stored on an attached Xsan volume. Using both Scheduled and Proactive scripts, all to the same media set. Everything ran fine for the first week, but now the Mac clients are all throwing "!Trouble matching <client> to <catalog>, error -1,124 ( invalid NTFS data)". Windows clients are backing up as before. The Xserve and Xsan both pass volume consistency checks, as do all the clients I've checked. Failing Mac clients can also be backed up to a new Disk Media Set on the same SAN without problem, and without re-registering with Retrospect or even a restart. The only thing I can think of that changed over the weekend was that Retrospect bumped into what I assume is an 8TB volume limit for its Disk Media Set function -- despite more than 50TB of free space, there was a "New Media" request which was satisfied by simply pointing it to the SAN and letting it create a second directory. I can't believe that Mac clients can't cope with multi-disk Disk Media Sets -- that would be all over the Forum! And it's also something that, surely, the server and not the client mediates. So what *is* going on here? Even as I type:

    + Normal backup using Daytime at 07/04/2015 13:39:11 (Activity Thread 1)
    To Backup Set 2015...
    - 07/04/2015 13:39:11: Copying Users on <Mac-Computer1>
    Using Instant Scan
    !Trouble matching Users on <Mac-Computer1> to 2015, error -1,124 ( invalid NTFS data)

    + Normal backup using Daytime at 07/04/2015 13:51:29 (Activity Thread 2)
    To Backup Set 2015...
    - 07/04/2015 13:51:29: Copying Users on <Windows-Computer1>
    07/04/2015 14:17:33: Snapshot stored, 33.9 MB
    07/04/2015 14:17:38: Comparing Users on <Windows-Computer1>
    07/04/2015 14:17:56: Execution completed successfully
    Completed: 495 files, 350.7 MB
    Performance: 637.6 MB/minute (429.4 copy, 1,315.1 compare)
    Duration: 00:26:27 (00:25:20 idle/loading/preparing)
    07/04/2015 14:17:59: Script "Daytime" completed successfully

    + Normal backup using Daytime at 07/04/2015 14:19:09 (Activity Thread 2)
    To Backup Set 2015...
    - 07/04/2015 14:19:09: Copying Users on <Mac-Computer2>
    !Trouble matching Users on <Mac-Computer2> to 2015, error -1,124 ( invalid NTFS data)

    (Change in Activity Thread number is me triggering a schedule manually to instantly test another previously-failed machine with a fresh set.)