Leaderboard


Popular Content

Showing content with the highest reputation since 11/28/2019 in Posts

  1. 1 point
    Neither of which will work, because it is an RS client/OS problem. You can see how it should work with your Mac. Have the Client Preferences open while you are on your ethernet network, then unplug the ethernet and join a wireless network. RS Prefs will read "Client_name Multicast unavailable" in the "Client name:" section for a while (still bound to the unplugged ethernet) and then switch to the new IP address and read "Client_name (wirelessIPAddress)". (Going from memory, exact messages may be different, but you can see a delay then a re-bind to the new primary IP.)

    But in the same situation, Windows RS Client will go from the ethernet-bind to self-assigned-IP-bind but not then switch to the new wireless primary IP -- it gets stuck on the self-assigned address. Whether that's RS Client or Windows "doing it wrong" is something they can argue about amongst themselves...

    It does suggest another solution, though. That self-assigned IP is always in the 169.254.*.* subnet. If you are in a single-subnet situation and can configure your DHCP server appropriately, you could have your network use only addresses in the 169.254.*.* range; then both DHCP- and self-assigned addresses will be in the same subnet, and the client will always be available.
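    As an illustration of that subnet point, here's a minimal Python sketch (sample addresses invented) that tests whether an address falls in the self-assigned 169.254.0.0/16 range, using only the standard library:

        import ipaddress

        def is_self_assigned(addr: str) -> bool:
            """True if addr is a link-local (APIPA) address in 169.254.0.0/16."""
            return ipaddress.ip_address(addr).is_link_local

        # Hypothetical addresses, purely for illustration
        for addr in ["192.168.1.42", "169.254.17.5"]:
            print(addr, "self-assigned:", is_self_assigned(addr))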
  2. 1 point
    BYOD - Bitlock your own disaster?
  3. 1 point
    I'm constantly repeating similar advice to our Mac users. FileVault (macOS's similar feature) may be great for securing their files, but it makes frequent usable backups even more important, because a failed drive usually means the loss of the data on it. So we're on the fence about whether to use it -- we have way more failed disks than lost/stolen laptops, and work data isn't particularly sensitive/valuable. So, in what is virtually a BYOD environment, it's up to the user whether they want the extra security for their personal stuff; if so, they take on the extra responsibility for their backups.
  4. 1 point
    And the bad news is -- it does... "But Nige," I hear you say, "surely that's a good thing, allowing us to onboard Remote clients without making them come to the office?" I don't think so, because Remote Clients are automatically added:

    ...without the "Automatically Added Clients" tag, so there's no easy way to find them
    ...with the "Remote Backup Clients" tag, which means they will be automatically added to all Remote-enabled Proactive scripts
    ...with the client's backup "Options" defaulting to "All Volumes", so external drives etc will also be included

    I let things run and, without any intervention on my part, the new client started backing up via the first available Remotely-enabled script.

    Edit to add: I didn't notice this last night, but it appears that the client was added with the interface "Default/Direct IP", in contrast to the clients automatically added from the server's network, which were "Default/Subnet". I don't know what this means if my home router's IP changes or I take the laptop to a different location (will the server consider it to now be on the "wrong" IP and refuse to back it up?), or if I now take it into work and plug in to the network (will the server not subnet-scan for the client, since it is "Direct IP"?). End edit

    Given the above, I'd suggest everyone think really carefully before enabling both "Automatically add.." and "Remote Client Backups" unless they retain control of client installation (eg over a remote-control Zoom session) -- especially since I've yet to find out what happens if you have a duplicate client name (the next test, which I'm struggling to work out how to do).
  5. 1 point
    Finally got around to having a play with this. While RS17 still treats tags as "OR" when choosing which clients to back up in a script, and you can't use tags within a rule, you can use "Source Host" as part of a rule to determine whether or not a client's data is backed up by a particular Remote-enabled script. It involves more management, since you'd have to build and update the "Source Host" rules for each script, but there's a bigger problem:

    Omitting by Rule is not the same as not backing up the client.

    That's worth shouting -- the client is still scanned, every directory and file on the client's volume(s) or Favourite Folder(s) will be matched, a snapshot will be stored, and the event will be recorded as a successful backup. It's just that no data will be copied from client to server. (TBH, that's the behaviour I should have expected from my own answers in other threads about how path/directory matching is applied in Rules.)

    So if you have 6 Proactive scripts, each backing up 1 of 6 groups of clients daily to 1 of 6 backup sets, every client will be "backed up" 6 times, with just 1 run resulting in data being copied. That's a lot of overhead, and may not be worth it for the resulting reduced (individual) catalog size. Also note: a failed media set or script won't be as obvious, since it won't put clients into the "No backup in 7 days" report -- the "no data" backups from the other scripts are considered successful.

    For me, at least, Remote Backups is functionality that promises much but delivers little. Which is a shame -- if Remote Backup was a script or client option rather than a tag/group attribute, or if tag/group attributes could be evaluated with AND as well as OR logic, I think it would work really well.
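    To make the OR-versus-AND distinction concrete, here's a hypothetical Python sketch (client and tag names invented; this is not how Retrospect stores tags) comparing the two combiners:

        # Hypothetical client -> tags mapping, purely for illustration
        clients = {
            "laptop-01": {"Remote Backup Clients", "Group A"},
            "laptop-02": {"Remote Backup Clients", "Group B"},
            "desktop-01": {"Group A"},
        }

        wanted = {"Remote Backup Clients", "Group A"}

        # OR logic (current behaviour): any shared tag selects the client
        or_matches = {c for c, tags in clients.items() if tags & wanted}

        # AND logic (the wished-for behaviour): every wanted tag must be present
        and_matches = {c for c, tags in clients.items() if wanted <= tags}

        print(sorted(or_matches))   # ['desktop-01', 'laptop-01', 'laptop-02']
        print(sorted(and_matches))  # ['laptop-01']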
  6. 1 point
    prophoto,

    Your OP is one of the least "pro" of any lately posted. 🙄 You don't say what version of Retrospect Windows you are using, or what version of Windows your "backup server" is running, or what version of what OS your "remote machine" is running. You probably should get someone who knows more about IT to help you with future posts to these Forums. Nevertheless, although I'm a Retrospect Mac administrator, I'll try to give you an answer based on no provided information.

    When you say "create a new backup set on a remote machine connected via a site to site VPN", you must mean the destination is a NAS share on your VPN. Watch this video 3 times before you go any further. Don't create a Storage Group unless you really want to. As the video implies, you shouldn't put the Catalog for the backup set on the NAS; instead the Catalog should be in the default location on your "backup server"'s C:\ drive. Be especially sure you are following the procedure from video minute 0:36 to 0:48, and also from minute 2:04 to the end; maybe your problem is that you didn't configure automatic login per minute 2:04.

    If that doesn't solve your problem, and you are using a Retrospect version earlier than 17, consider doing at least a trial upgrade—AFAIK free for 45 days. The cumulative Release Notes for Retrospect Windows list a fix, under 17.0.0.180, that may also apply to creating a backup set on a NAS share:
  7. 1 point
    Malcolm McLeary, When Nigel Smith says "define the ones you want as volumes", he probably means Retrospect-specified Subvolumes. Described on pages 349–351 of the Retrospect Windows 17 User's Guide, they were renamed Favorite Folders in Retrospect Mac 8. I use a Favorite Folder in a Backup script; it works. However Retrospect Windows also has defined-only-in-Retrospect Folders, which are described on pages 348–349 of the UG as a facility for grouping source volumes. The description doesn't say so, but you can possibly move defined Subvolumes—even on different volumes—into a Folder. Since the Folders facility was removed in Retrospect Mac 8, I didn't know it even existed until I read about it 5 minutes ago. That's to say Your Mileage May Vary (YMMV), as we say in the States (in a phrase originally used in auto ads). If they work as groups of Subvolumes, they may simplify your backup scripts.
  8. 1 point
    Retrospect doesn't do a UNIXy tree-walk, where it would never bother to look at anything at "/backup/FileMaker/Progressive/" or lower. Instead it scans *every* file of a volume and applies its selectors to decide what to do. I'd assume from the errors that it is getting partway through scanning those directories' contents when, suddenly, they vanish. Whilst annoying in a simple case like you describe, it's also part of what makes the selectors so powerful -- for example, being able to exclude files on a path *unless* they were modified in the last 2 hours -- and why all the metadata needs to be collected via the scan before a decision can be made.

    Two ways round this. If you want to exclude most paths, define the ones you want as volumes and only back those up -- we only back up "/Users" so that's what we do, which also greatly reduces scan time. If you want to back up most but not all, which I guess is what you're after, use the "Privacy" pane in the client to designate the paths to exclude.
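    A rough Python sketch of that scan-then-select idea, assuming the path from the example above and the two-hour rule (an illustration of the principle, not Retrospect's actual implementation):

        import os
        import time

        EXCLUDE_PREFIX = "/backup/FileMaker/Progressive/"
        TWO_HOURS = 2 * 60 * 60

        def should_back_up(path: str, mtime: float) -> bool:
            """Selector: exclude files under the prefix unless modified recently."""
            if path.startswith(EXCLUDE_PREFIX):
                return time.time() - mtime < TWO_HOURS
            return True

        def scan(volume: str):
            """Scan every file (no tree pruning), then apply the selector."""
            for dirpath, _dirnames, filenames in os.walk(volume):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        mtime = os.stat(path).st_mtime
                    except FileNotFoundError:
                        continue  # file vanished mid-scan, like the errors above
                    if should_back_up(path, mtime):
                        yield path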
  9. 1 point
    Would just warn that different routers' DHCP servers behave in different ways. Some treat the address blocks reserved for statics as inviolate, some will continue to offer those addresses when no MAC address has been set, etc. I always go belt-and-braces, putting MAC addresses in the router's table and setting static IPs on the clients, when I need a definitely-fixed IP. Also, some routers force a certain (often limited) range for statics and others let you do as you will, so check your docs before planning.
  10. 1 point
    From my earlier back-of-an-envelope calculations, both D2D and D2T should fit in overnight. More importantly, because he isn't backing up during the day, the "to tape" part can happen during the day as well (my guess is that he was assuming nightlies would take as long as the weekend "initial" copy, rather than being incremental), so he should have bags of time.

    I know nothing about Veeam's file format, only that it's proprietary (rather than, eg, making a folder full of copies of files). It may be making, or updating, single files or disk images -- block-level incrementals may be the answer. Or it may be that Veeam is actually set to do a full backup every time...

    It is a snapshot, in both computerese and "normal" English -- a record of state at a point in time. I don't think the fact that it is different to a file system snapshot, operating system snapshot, or ice hockey snap shot 😉 requires a different term -- the context makes it clear enough what's meant, IMO.
  11. 1 point
    Nigel Smith,

    I finally figured out the explanation for the confusing Copy Backup popup options on page 121 of the Retrospect Mac 17 User's Guide (you're citing that version, though Joriz runs 16.6). We must detour to page 177 of the Retrospect Windows 17 UG, where it is written of Transfer Snapshot—the equivalent operation, with the same options described in a slightly-different sequence:

    So why did they butcher this explanation for Retrospect Mac 8 and thereafter? The reason is that the term "snapshot" was abolished in the Retrospect Mac GUI, because by 2008 the term had acquired a standard CS meaning, eventually even at Apple. Starting in 1990 the Mac UG (p. 264) had defined it:

    The term "active Snapshot" is not defined even in the Windows UG; it means a Snapshot that the "backup server" has given the status "active" by keeping it in a source Media Set's Catalog. As we see from Eppinizer's first paragraph quoted here up-thread, it is the single latest Snapshot if the source Media Set has grooming disabled—but otherwise it is the "Groom to keep this number of backups" most-recent Snapshots, going backward chronologically. So that's why the choices in the Copy Backup popup have the word "backups" in the plural. I'll create a documentation Support Case asking that the august Documentation Committee put Eppinizer's definition of "active" backups/Snapshots into the UGs. But "a backup that is kept in the Catalog" sounds silly.
  12. 1 point
    Joriz, First read pages 14-15 of the Retrospect Mac 13 User's Guide. That's why grooming isn't doing anything to reduce the size of your disk Media Sets. If even one source file backed up to an RDB file is still current, then performance-optimized grooming won't delete the RDB file. You should be using storage-optimized grooming unless your disk Media Sets are in the cloud—which you say they aren't. (It seems the term "performance-optimized" can trick administrators who aren't native English speakers, such as you.) There's a reason performance-optimized grooming was introduced in the same Retrospect Mac 13 release as cloud backup. It's because rewriting (not deleting) an RDB file in a cloud Media Set requires downloading it and then uploading the rewritten version—both of which take time and cost money.
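    A toy Python model of that performance-optimized rule (RDB file names and contents invented): one still-current file keeps an entire RDB file alive, so only fully-stale RDB files are groomable.

        # Each RDB file holds several backed-up files; True = still current.
        rdb_files = {
            "AA000001.rdb": {"report.doc": True, "old1.tmp": False},
            "AA000002.rdb": {"old2.tmp": False, "old3.tmp": False},
        }

        # Performance-optimized grooming deletes an RDB file only when
        # nothing inside it is still current.
        groomable = [name for name, contents in rdb_files.items()
                     if not any(contents.values())]

        print(groomable)  # ['AA000002.rdb'] -- AA000001.rdb survives for one file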
  13. 1 point
    Easier stuff first...

    This is usually either disk/filesystem problems on the NAS (copy phase) or on the NAS or target (compare phase), or networking issues (RS is more sensitive to these than file sharing is; the share can drop and remount and an OS copy operation will cope, but RS won't). So disk checks and network checks may help. But if a file isn't backed up because of an error, RS will try again next time (assuming the file is still present). RS won't run again just because of the errors, so you either wait for the next scheduled run or you trigger it manually.

    Think of it this way -- if you copy 1,000 files with a 1% chance of errors, on average 10 files will fail. So on the second run, when only those 10 files need to be copied, there's only a 1-in-10 chance that an error will be reported. Easy enough to check -- are the files that errored in the first backup present in the backup set after the second? (A quick numeric check of this follows at the end of this post.)

    Now the harder stuff 😉

    Is this overall? Or just during the write phase? How compressed is the data you are streaming (I'm thinking video files, for some reason!)? You could try your own speed test using "tar" in the Terminal, but RS does a fair amount of work in the "background" during a backup, so I'd expect considerably slower speeds anyway... A newer Mac could only help here.

    I'm confused -- are you saying you back up your sources nightly, want to only keep one backup, but only go to tape once a week? So you don't want to off-site the Mon/Tues/Wed night backups? Regardless -- grooming only happens when either a) the target drive is full, b) you run a scripted groom, or c) you run a manual groom. It sounds like none of these apply, which is why disk usage hasn't dropped.

    If someone makes a small change to a file, the space used on the source will hardly alter -- but the entire file will be backed up again, inflating the media set's used space. If you've set "Use attribute modification date when matching" then a simple permissions change will mean the whole file is backed up again. If "Match only file in same location/path" is ticked, simply moving a file to a different folder will mean it is backed up again. It's expected that the backup of an "in use" source is bigger than the source itself (always excepting exclusion rules, etc).

    At this point it might be better to start from scratch. Describe how your sources are used (capacity, churn, etc), define what you are trying to achieve (eg retention rules, number of copies), decide the resources you'll allocate (tapes per set, length of backup windows, both for sources and the tape op), then design your solution to suit. You've described quite a complex situation, and I can't help but feel that it could be made simpler. And simpler often means "less error prone" -- which is just what you want!
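    A quick Python check of the 1,000-file arithmetic from the post above:

        files = 1_000
        error_rate = 0.01  # assume a 1% chance any given file errors

        first_run_failures = files * error_rate                # ~10 files to retry
        second_run_failures = first_run_failures * error_rate  # ~0.1 files

        print(first_run_failures)   # 10.0
        print(second_run_failures)  # 0.1 -- roughly a 1-in-10 chance of seeing
                                    # any error at all on the second run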
  14. 1 point
    Wrong -- we're now into Week 6 of working from home.

    ...was to issue everyone an external hard drive to use with Time Machine or Windows backup.

    Rest of my attempted reply just got eaten. Suffice to say:

    Have tried Remote Backups before -- they fail for us because you can't segregate clients into different sets
    Keep meaning to try Remote combined with Rules -- can you run multiple Proactives using the Remote tag, each using a different set, each with different Rules to filter by client name? Previously I felt the client list was too long for this to work, but RS 17's faster Proactive scanning may make it feasible
    Tags need an "AND" combiner and not just an "OR". That may not be sensible/possible -- but include the ability to use Tags in Rules and you'd make Rules way more powerful
  15. 1 point
  16. 1 point
    I think (not sure) that you should double-click the catalog file (in Windows Explorer) while the backup is NOT running.
  17. 1 point
    Have you tried the "Options" section of your script? There are also scheduling options there, which apply only to that script (though the defaults reflect the Schedule settings in General Prefs, which might make you think otherwise...) and so would have no impact on manual backups. Set your "Start", "Wrap up" and "Stop" times to suit your working practices and required backup window and you should be good.
  18. 1 point
    I suggest you simply remove the systems from the backup schedule on the weekend; that way, when you boot the systems on Monday morning there won't be a slowdown. You obviously already know how to launch a backup job manually, so just do that.
  19. 1 point
    That's the information I was looking for... So *if* you are only backing up one volume *and* that volume is backing up/verifying successfully *and* you can restore from the backup *and* you get the un-named volume error *and* Retrospect carries on regardless -- I'd just ignore it. If the error is causing other problems, eg killing the script while there are still other machines to process, re-arrange things so the erring machine is the last to be done. If the error is truly killing the system, eg popping a dialog that must be dismissed before continuing, I'd look into script triggers and a GUI-targeted AppleScript to automatically dismiss the dialog so RS can continue. Some things are easier to work round than to fix 😉
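    A minimal sketch of that dismiss-the-dialog idea, driving AppleScript from Python via osascript. The process, window, and button names are assumptions that would need checking against the real dialog, and the calling app needs Accessibility permission for System Events scripting:

        import subprocess

        # Hypothetical: click "OK" in the frontmost Retrospect dialog.
        # Inspect the actual dialog first -- these names are guesses.
        script = '''
        tell application "System Events"
            tell process "Retrospect"
                click button "OK" of window 1
            end tell
        end tell
        '''

        subprocess.run(["osascript", "-e", script], check=False)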
  20. 1 point
    As to how the crashing bug report will be treated, read the fifth paragraph in this 2017 post. I have never had ASM, but Tech Support responds to my bug reports; for one bug I was given a test release with enhanced logging—although I didn't get personalized help.
  21. 1 point
    Dashboard actually got me excited - a more friendly user interface! But it is dog-slow (okay - I LOVE dogs, but this thing is a Basset Hound in a Greyhound race). It often hangs, gobbles resources, hangs again when trying to simply scroll, and ultimately gives little useful information. It's everything short of what we've come to expect from Retrospect - a lean, efficient, business-like and functional application. When Retrospect is invoked by a scheduled script, the dashboard is the only option that comes up when you want to monitor the program itself - and the fact that there is no escape from the dashboard only adds to the frustration. I get that it is well-intended - but it was executed poorly and ends up detracting from the program.

    I'm in the trial period (Windows 16.5) and likely will invest in Retrospect based on the last ten years of functionality and reliability with 7.7. The dashboard is the single biggest negative in my pluses-and-minuses column as I decide on making the purchase. Just my opinion here, but instead of the dashboard, might I suggest this approach:

    Develop a user interface that finally leaves the '90s behind. It would probably meet the dashboard's intent with more digital elegance.
    Add a tray monitor - something we can mouse over and see the basics, or open and get more detail.

    Perhaps that goes to knowing your customers. Face it - this is a techie's software with a steeper learning curve than prettier faces like Cloudberry. I could be mistaken, but I suspect most Retrospect users (certainly me) would appreciate a backup solution that provides ease of both interface and access - and a tray monitor would be a simple, performance-oriented way to do just that.
  22. 1 point
    Kidziti,

    You raise a whole bunch of points in your post. Retrospect, or any other product in its class, is not just a point-in-time purchase. You also need to "invest" in learning the product (non-trivial) and in doing configuration and tuning. You also want to be sure that your investment is protected long-term by the financial strength of the vendor. I don't know much about Storcentric (or its competition), but I will observe that backup is a relatively mature market category. One could argue that web-based backup is a different market segment than more traditional premises-based backup, but I will leave that argument to others. And what Storcentric's strategy was in purchasing Retrospect and Drobo, as opposed to any of their direct competitors, I simply don't know. That is an issue for Storcentric of course, but it's also the kind of issue that is catnip for product management types like me.

    However, on the narrower decision to purchase an ASM: I have found that I can get quite good support for my issues, even though I have not purchased an ASM. Of course, when release 17 comes out, I will have to pony up for the upgrade.

    More generally, you have to decide if Retrospect, as it exists today, meets your needs better than the competition. Salespeople are supposed to "SWAT" - Sell What's Available Today. I would ignore the statement about a new release next March, because the reality of software development is that March can easily become May or July or September. And unless you know what is in that release, you don't know how important it is for you - whether it has capabilities that you need now but that are not in the current release.

    Whatever you do, DO NOT buy shares in Storcentric. There is not much information on the website, certainly not who/what is funding this acquisition strategy. https://storcentric.com/
  23. 1 point
    FYI: You can delete the snapshots one by one. But it takes lots of time and Retrospect is "not responding" for a looong time for each snapshot.
  24. 0 points
    Can someone give me an idea what's going on here? All local machines have no trouble logging into the share and reading/writing files. I am able to create a new backup set on a remote machine connected via a site-to-site VPN. When I run the backup script it asks for media. I've tried dozens of times but I just can't get it to work. Thanks.
  25. 0 points
    I have three Proactive scripts defined: FT_FD to back up both our laptops and desktops, JAD-Cloud as a test to back up my laptop to Minio running on one of our Synology units, and a regular JAD script to a disk storage set. In Schedule, I've set both JAD proactive scripts to every 3 hours, while the FT_FD script is every 1 day. For the JAD scripts, my laptop SSD is the only source. For the FT_FD script, there are ~60 sources. My laptop is not in the source list for the FT_FD script. Neither of the JAD proactives has executed successfully/completely in the last 7 days (last successful backup was 1 June 2020). How can I ensure that my laptop actually gets backed up in a reasonable period? Thanks.

    Cheers, Jon