Popular Content

Showing content with the highest reputation since 01/24/2020 in all areas

  1. 2 points
    Perhaps you and David could form a club? If there's T-shirts, I'll join!
  2. 2 points
    The paragraph above, in my first post in this thread, is the only thing I wrote that could be considered "rude and disrespectful". But it had the effect I intended, which was to get you to supply the missing information in a post directly following it. Once you'd done that, Lennart_T and I and Nigel Smith were able to diagnose your problem—which is why I wrote "Do you see how much better help you get when you calm down long enough to explain what you're trying to do? 😀" at the top of my next post in the thread.

    Nobody can give you "an idea whats going on here" if you don't supply the necessary information, and I will always consider it "rude and disrespectful" for you—or anyone else—to expect people on these Forums to do so. You were probably very frustrated when you wrote that OP, but I don't consider that a sufficient excuse.

    You should try volunteering to answer other administrators' questions on these Forums; you'll soon understand my attitude toward administrators who don't supply the necessary information. You will also develop a "psychic" ability to read an administrator's "tone, demeanor and thoughts just by reading a few words on a screen", and by reading his/her past posts as well—as I did in this thread to guess what you were doing.

    I'm sorry to inform you that employees of Retrospect "Inc." haven't been reading these Forums for the past couple of years, and—unlike some other websites—there aren't any volunteer Forums administrators. So if you want to make—after 3 months 😲—a complaint about me, you'll need to create a Support Case for the head of Retrospect Tech Support to read. Here's how to create a Support Case for a bug—the closest equivalent; select "Forums" in the appropriate drop-down.
  3. 2 points
    OK so, life in my hands (and knowing a workmate had just arrived at the office, so could press buttons if needed), I connected to the remote VM and logged into Windows. RS Client was running, and I could turn it off and on again.

    Ctrl-Alt-Del, Task Manager, found "retroclient.exe", "end process". Launched Retrospect Client, which showed Status as "Off" and the "Protected by..." checkbox greyed out. Spinning wheel every time I tried to interact with it. File C:\ProgramData\Retrospect Client\retroclient still present from before. After 5 minutes of "Program not responding", clicked the "Close program" option.

    Launched Client again, this time "Run as admin", same results. Ended process. Messed around with the retroclient file, still not launching.

    In Task Manager's "Services" tab, found "Retrospect Client", right-clicked, "Restart service". (Interestingly, that rotated the retroclient[.logn] files and created a new retroclient file.) Launched Client, everything A-OK!

    So I think that's your alternative to a reboot -- Task Manager->Services and "Restart" the "Retrospect Client". You don't even have to kill the Retrospect processes (Sys Tray, Inst Scan). Would love to hear how you get on next time your wife's machine has this problem...
  4. 2 points
    pjreagan, How about the comparative number of files backed up, vs. the total bytes? Lots of smaller files means a slower backup and compare than fewer larger files. If there's a significant difference in the processor speeds between "AG-04" and "DD-13", that will also affect the Retrospect "client" backup speeds. This post in a 2017 thread, especially the first two paragraphs (the P.S. mostly rules out speed of the "backup server" for "client" files), covers the first part of experiments I ended up doing that have a bearing on this. This post in that same thread, and the one following, covers the rest. This post near the end of the same thread discusses my hypothesis on slowness of "client" backups. P.S.: This ArsTechnica Mac forum post says "My 2012 Era iMac with 1Tb Fusion Drive is starting to really slow down. It seems like when the drive is nearly full, the physical drive doesn't like to spin up anymore. ugh."
  5. 1 point
    redleader and Nigel Smith, There surely can't be any Catalog updating in the run of a Copy script, because that script doesn't designate a Media Set as a destination—no Media Set means no Catalog to update. That's why I thought my test might run faster than the equivalent Recycle run of my Backup script. In fact its copying phase ran slower. Maybe cramming copies of multiple source files into a single .rdb file is faster than adding each copied file to the macOS HFS+ filesystem, but I'm not inclined to investigate it.

    There weren't any files in the destination folder. I had deleted all those that were copied there by a test run I killed after 5 minutes, when I noticed the script mistakenly specified Copy all files (which wouldn't require name comparison) instead of Copy only missing files—which redleader specifies. In any case, my test proves that redleader could get his backing up done faster with Backup scripts. He could also use the resultant Catalogs to do grooming, as Nigel Smith suggested. I don't bother with grooming, since I must—because I've experienced multiple cases of water leaks from an apartment two floors above mine—swap a portable HDD containing complete-as-of-Friday-morning backups of all my drives off-site once a week.

    P.S.: On pages 120–121 of the Retrospect Mac 16 User's Guide, under item 6, there are 5 paragraphs, following a single-sentence paragraph, that describe not only the pop-up that redleader shows he chose in the screenshot in this up-thread post, but each of the other pop-up options. Those 5 paragraphs have been deleted from page 110 of the Retrospect Mac 17 UG—evidently by the StorCentric Slasher (my name for him 🤣 ).
  6. 1 point
    The confounding issue is that Macs behave "properly" -- when a "new" primary interface becomes "live" the client will, eventually, switch to it. On Windows the client often gets stuck -- I most commonly see it when users have started/woken their laptop up (client binds to wireless) then connect to the ethernet (which takes precedence for network traffic, but the client is still on wireless), but I've also seen what MrPete describes (client binding to internal IP during network change, and not releasing). That you are using Macs, plus the relatively simple nature of your home network, means that your suggested automated work-rounds will (probably) work. In more complicated situations, with Windows clients, that's far less likely. While I'm sure it could be made to work, the real solution is to fix the problem (which, hopefully, that in-progress-but-delayed bug fix will do).
  7. 1 point
    Nope. However, I just solved it, with the help of this freeware: http://backupchain.com/en/vssdiag/

    Not at ALL what I expected: I was getting VSS errors indicating slowness while doing backup. Duuuh. That should not be a surprise. The actual issue:

    - A separate partition (D:) is NOT part of the VSS shadowing at all.
    - On this computer, that partition is used for a lot of active data storage, including continual security cam file saving.
    - There was a latent filesystem error (in Windows: chkdsk /f solves it).
    - Fixing that (on D:) made VSS work properly on the C: drive and all other drives.
  8. 1 point
    Neither of which will work, because it is an RS client/OS problem. You can see how it should work with your Mac. Have the Client Preferences open while you are on your ethernet network, then unplug the ethernet and join a wireless network. RS Prefs will read "Client_name Multicast unavailable" in the "Client name:" section for a while (still bound to the unplugged ethernet) and then switch to the new IP address and read "Client_name (wirelessIPAddress)". (Going from memory, exact messages may be different, but you can see a delay then a re-bind to the new primary IP.) But in the same situation, Windows RS Client will go from the ethernet-bind to self-assigned-IP-bind but not then switch to the new wireless primary IP -- it gets stuck on the self-assigned address. Whether that's RS Client or Windows "doing it wrong" is something they can argue about amongst themselves... It does suggest another solution, though. That self-assigned IP is always in the 169.254.*.* subnet. If you are in a single-subnet situation and can configure your DHCP server appropriately you could have your network only use addresses in 169.254.*.* range, and both DHCP- and self-assigned addresses will be in the same subnet and the client will always be available.
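The single-subnet suggestion above hinges on two checks: whether an address is a self-assigned (APIPA/link-local) one in 169.254.*.*, and whether two addresses land in the same subnet. A minimal sketch using Python's standard ipaddress module (the addresses below are made-up examples, not from the thread):

```python
import ipaddress

# APIPA/link-local range used for self-assigned addresses
LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

def is_self_assigned(addr: str) -> bool:
    """True if addr is a self-assigned (link-local) address."""
    return ipaddress.ip_address(addr) in LINK_LOCAL

def same_subnet(addr_a: str, addr_b: str, prefix: int = 16) -> bool:
    """True if both addresses fall inside the same /prefix subnet."""
    net = ipaddress.ip_network(f"{addr_a}/{prefix}", strict=False)
    return ipaddress.ip_address(addr_b) in net

print(is_self_assigned("169.254.12.34"))             # True
print(same_subnet("169.254.12.34", "169.254.99.1"))  # True: both reachable without routing
print(is_self_assigned("192.168.1.10"))              # False: ordinary private address
```

If the DHCP pool were also inside 169.254.0.0/16 as suggested, `same_subnet` would be true for any pairing of leased and self-assigned addresses, which is the whole point of the workaround.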
  9. 1 point
    BYOD - Bitlock your own disaster?
  10. 1 point
    I'm constantly repeating similar advice to our Mac users. FileVault (macOS's similar feature) may be great for securing their files, but it makes frequent usable backups even more important, because a failed drive usually means the loss of the data on it. So we're on the fence about whether to use it -- we have way more failed disks than lost/stolen laptops, and work data isn't particularly sensitive/valuable. In what is virtually a BYOD environment, it's up to the user whether they want the extra security for their personal stuff; if so, they can take on the extra responsibility for their backups.
  11. 1 point
    And the bad news is -- it does... "But Nige," I hear you say, "surely that's a good thing, allowing us to onboard Remote clients without making them come to the office?" I don't think so, because Remote Clients are automatically added:

    ...without the "Automatically Added Clients" tag, so there's no easy way to find them

    ...with the "Remote Backup Clients" tag, which means they will be automatically added to all Remote-enabled Proactive scripts

    ...with the client's backup "Options" defaulting to "All Volumes", so external drives etc will also be included

    I let things run and, without any intervention on my part, the new client started backing up via the first available Remotely-enabled script.

    Edit to add: I didn't notice this last night, but it appears that the client was added with the interface "Default/Direct IP", in contrast to the clients automatically added from the server's network, which were "Default/Subnet". I don't know what this means if my home router's IP changes or I take the laptop to a different location (will the server consider it to now be on the "wrong" IP and refuse to back it up?) or if I now take it into work and plug in to the network (will the server not subnet-scan for the client, since it is "DirectIP"?). End edit

    Given the above, I'd suggest everyone think really carefully before enabling both "Automatically add..." and "Remote Client Backups" unless they retain control of client installation (eg over a remote-control Zoom session) -- especially since I've yet to find out what happens if you have a duplicate client name (the next test, which I'm struggling to work out how to do).
  12. 1 point
    Finally got around to having a play with this. While RS17 still treats tags as "OR" when choosing which clients to back up in a script, and you can't use tags within a rule, you can use "Source Host" as part of a rule to determine whether or not a client's data is backed up by a particular Remote-enabled script. It involves more management, since you'd have to build and update the "Source Host" rules for each script, but there's a bigger problem: omitting by Rule is not the same as not backing up the client.

    That's worth shouting -- the client is still scanned, every directory and file on the client's volume(s) or Favourite Folder(s) will be matched, a snapshot will be stored, and the event will be recorded as a successful backup. It's just that no data will be copied from client to server. (TBH that's the behaviour I should have expected from my own answers in other threads about how path/directory matching is applied in Rules.)

    So if you have 6 Proactive scripts, each backing up 1 of 6 groups of clients daily to 1 of 6 backup sets, every client will be "backed up" 6 times, with just 1 resulting in data being copied. That's a lot of overhead, and may not be worth it for the resulting reduced (individual) catalog size. Also note: a failed media set or script will not be as obvious, since it won't result in clients going into the "No backup in 7 days" report -- the "no data" backups from the other scripts are considered to be successful.

    For me, at least, Remote Backups is functionality that promises much but delivers little. Which is a shame -- if Remote Backup was a script or client option rather than a tag/group attribute, or if tag/group attributes could be evaluated with AND as well as OR logic, I think it would work really well.
  13. 1 point
    prophoto, Your OP is one of the least "pro" of any lately posted. 🙄 You don't say what version of Retrospect Windows you are using, what version of Windows your "backup server" is running, or what version of what OS your "remote machine" is running. You probably should get someone who knows more about IT to help you with future posts to these Forums.

    Nevertheless, although I'm a Retrospect Mac administrator, I'll try to give you an answer based on no provided information. When you say "create a new backup set on a remote machine connected via a site to site VPN", you must mean the destination is a NAS share on your VPN. Watch this video 3 times before you go any further. Don't create a Storage Group unless you really want to. As the video implies, you shouldn't put the Catalog for the backup set on the NAS; instead the Catalog should be in the default location on your "backup server"'s C:\ drive. Be especially sure you are following the procedure from video minute 0:36 to 0:48, and also from minute 2:04 to the end; maybe your problem is that you didn't configure automatic login per minute 2:04.

    If that doesn't solve your problem, and you are using a Retrospect version earlier than 17, consider doing at least a trial upgrade—AFAIK free for 45 days. The cumulative Release Notes for Retrospect Windows list a fix that may also apply to creating a backup set on a NAS share:
  14. 1 point
    Malcolm McLeary, When Nigel Smith says "define the ones you want as volumes", he probably means Retrospect-specified Subvolumes. Described on pages 349–351 of the Retrospect Windows 17 User's Guide, they were renamed Favorite Folders in Retrospect Mac 8. I use a Favorite Folder in a Backup script; it works. However Retrospect Windows also has defined-only-in-Retrospect Folders, which are described on pages 348–349 of the UG as a facility for grouping source volumes. The description doesn't say so, but you can possibly move defined Subvolumes—even on different volumes—into a Folder. Since the Folders facility was removed in Retrospect Mac 8, I didn't know it even existed until I read about it 5 minutes ago. That's to say Your Mileage May Vary (YMMV), as we say in the States (in a phrase originally used in auto ads). If they work as groups of Subvolumes, they may simplify your backup scripts.
  15. 1 point
    Retrospect doesn't do a UNIXy tree-walk, where it could simply skip "/backup/FileMaker/Progressive/" and everything below it. Instead it scans *every* file of a volume and applies its selectors to decide what to do. I'd assume from the errors that it is getting partway through scanning those directories' contents when, suddenly, they vanish. Whilst annoying in a simple case like you describe, it's also part of what makes the selectors so powerful -- for example, being able to exclude files on a path *unless* they were modified in the last 2 hours -- and why all the metadata needs to be collected via the scan before a decision can be made.

    Two ways round this. If you want to exclude most paths, define the ones you want as volumes and only back those up -- we only back up "/Users", which also greatly reduces scan time. If you want to back up most but not all, which I guess is what you're after, use the "Privacy" pane in the client to designate the paths to exclude.
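The difference between the two strategies above can be sketched in a few lines of Python. This is an illustration of the general idea, not Retrospect's actual implementation; the exclusion set and selector predicate are invented for the example:

```python
import os

def pruned_walk(root, excluded_dirs):
    """UNIXy tree-walk: skip excluded directories entirely, never descending
    into them, so their contents are never even examined."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending further.
        dirnames[:] = [d for d in dirnames
                       if os.path.join(dirpath, d) not in excluded_dirs]
        for name in filenames:
            yield os.path.join(dirpath, name)

def scan_then_select(root, selector):
    """Scan-everything style: enumerate every file first, then let a
    selector (which may use any per-file metadata) decide what to keep."""
    all_files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            all_files.append(os.path.join(dirpath, name))
    return [f for f in all_files if selector(f)]
```

The second style costs a full scan, but the selector has every file's metadata in hand, which is what allows rules like "exclude this path unless modified in the last 2 hours"; a pruned walk can never apply such a rule to a subtree it skipped.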
  16. 1 point
    Would just warn that different routers' DHCP servers behave in different ways. Some treat the address blocks reserved for statics as inviolate, some will continue to offer those addresses when no MAC address has been set, etc. I always belt-and-brace, putting MAC addresses in the router's table and setting static IPs on the clients, when I need a definitely-fixed IP. Also, some routers force a certain (often limited) range for statics and others let you do as you will, so check your docs before planning.
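As one concrete illustration of the router-side half of belt-and-braces, this is what a MAC-based reservation looks like in dnsmasq (the DHCP daemon behind many home routers). The MAC address and IP values are placeholders, and other routers' config syntax will differ:

```
# General lease pool for everything else
dhcp-range=192.168.1.100,192.168.1.150,12h
# Always hand this MAC a fixed address outside the pool
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.20
```

The client-side half is then setting the same static IP in the machine's own network settings, so the address survives even if the router's table is lost or ignored.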
  17. 1 point
    From my earlier back-of-an-envelope calculations, both D2D and D2T should fit in overnight. More importantly, because he isn't backing up during the day, the "to tape" part can happen during the day as well (my guess is that he was assuming nightlies would take as long as the weekend "initial" copy, rather than being incremental), so he should have bags of time. I know nothing about Veeam's file format, only that it's proprietary (rather than eg making a folder full of copies of files). It may be making, or updating, single files or disk images -- block level incrementals may be the answer. Or it may be that Veeam is actually set to do a full backup every time... It is a snapshot, in both computerese and "normal" English -- a record of state at a point in time. I don't think the fact that it is different to a file system snapshot, operating system snapshot, or ice hockey snap shot 😉 requires a different term -- the context makes it clear enough what's meant, IMO.
  18. 1 point
    Nigel Smith, I finally figured out the reason for the confusing Copy Backup popup option explanation on page 121 of the Retrospect Mac 17 User's Guide (you're citing that version, though Joriz runs 16.6). We must detour to page 177 of the Retrospect Windows 17 UG, where it is written of Transfer Snapshot—the equivalent operation with the same options described in a slightly-different sequence:

    So why did they butcher this explanation for Retrospect Mac 8 and thereafter? The reason is that the term "snapshot" was abolished in the Retrospect Mac GUI, because by 2008 the term had acquired a standard CS meaning, eventually even at Apple. Starting in 1990 the Mac UG (p. 264) had defined it:

    The term "active Snapshot" is not defined even in the Windows UG; it means a Snapshot that the "backup server" has given the status "active" by keeping it in a source Media Set's Catalog. As we see from Eppinizer's first paragraph quoted here up-thread, it is the single latest Snapshot if the source Media Set has grooming disabled—but it is the "Groom to keep this number of backups" number of latest-going-backward-chronologically Snapshots otherwise. That's why the choices in the Copy Backup popup have the word "backups" in the plural.

    I'll create a documentation Support Case asking that the august Documentation Committee put Eppinizer's definition of "active" backups/Snapshots into the UGs. But "a backup that is kept in the Catalog" sounds silly.
  19. 1 point
    Joriz, First read pages 14-15 of the Retrospect Mac 13 User's Guide. That's why grooming isn't doing anything to reduce the size of your disk Media Sets. If even one source file backed up to an RDB file is still current, then performance-optimized grooming won't delete the RDB file. You should be using storage-optimized grooming unless your disk Media Sets are in the cloud—which you say they aren't. (It seems the term "performance-optimized" can trick administrators who aren't native English speakers, such as you.) There's a reason performance-optimized grooming was introduced in the same Retrospect Mac 13 release as cloud backup. It's because rewriting (not deleting) an RDB file in a cloud Media Set requires downloading it and then uploading the rewritten version—both of which take time and cost money.
  20. 1 point
    Easier stuff first... This is usually either disk/filesystem problems on the NAS (copy phase) or on the NAS or target (compare phase), or networking issues (RS is more sensitive to these than file sharing is; the share can drop/remount and an OS copy operation will cope, but RS won't). So disk checks and network checks may help. But if a file isn't backed up because of an error, RS will try again next time (assuming the file is still present). RS won't run again because of the errors, so you either wait for the next scheduled run or you trigger it manually.

    Think of it this way -- if you copy 1,000 files with a 1% chance of errors, on average 10 files will fail. So on the second run, when only those 10 files need to be copied, there's only about a 1-in-10 chance that an error will be reported. Easy enough to check -- are the files that errored on the first backup present in the backup set after the second?

    Now the harder stuff 😉 Is this overall? Or just during the write phase? How compressed is the data you are streaming (I'm thinking video files, for some reason!)? You could try your own speed test using "tar" in the Terminal, but RS does a fair amount of work in the "background" during a backup, so I'd expect considerably slower speeds anyway... A newer Mac could only help here.

    I'm confused -- are you saying you back up your sources nightly, want to only keep one backup, but only go to tape once a week? So you don't want to off-site the Mon/Tues/Wed night backups? Regardless -- grooming only happens when either a) the target drive is full, b) you run a scripted groom, or c) you run a manual groom. It sounds like none of these apply, which is why disk usage hasn't dropped.

    If someone makes a small change to a file, the space used on the source will hardly alter -- but the entire file will be backed up again, inflating the media set's used space. If you've set "Use attribute modification date when matching", then a simple permissions change will mean the whole file is backed up again. If "Match only file in same location/path" is ticked, simply moving a file to a different folder will mean it is backed up again. It's expected that the backup of an "in use" source is bigger than the source itself (always excepting exclusion rules, etc).

    At this point it might be better to start from scratch. Describe how your sources are used (capacity, churn, etc), define what you are trying to achieve (eg retention rules, number of copies), decide the resources you'll allocate (tapes per set, length of backup windows, both for sources and the tape op), then design your solution to suit. You've described quite a complex situation, and I can't help but feel that it could be made simpler. And simpler often means "less error prone" -- which is just what you want!
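The 1,000-files-at-1% arithmetic above can be checked in a couple of lines (the counts and the assumed independent per-file error rate are the ones from the example):

```python
error_rate = 0.01  # assumed 1% chance of an error per file

# First run: 1,000 files, so on average ~10 fail
first_run_files = 1000
expected_failures = first_run_files * error_rate
print(expected_failures)  # 10.0

# Second run only retries those ~10 files; the chance that at
# least one of them errors again is roughly 1-in-10:
second_run_files = 10
p_any_error = 1 - (1 - error_rate) ** second_run_files
print(round(p_any_error, 3))  # 0.096
```

So seeing errors vanish on the retry run is expected behaviour, not evidence that the first run's failed files were silently skipped; hence the suggestion to verify they are actually present in the set.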
  21. 1 point
    Wrong -- we're now into Week 6 of working from home. ...was to issue everyone an external hard drive to use with Time Machine or Windows backup. Rest of my attempted reply just got eaten. Suffice to say:

    - Have tried Remote Backups before -- fail for us because you can't segregate clients into different sets
    - Keep meaning to try Remote combined with Rules -- can you run multiple Proactives using the Remote tag, each using a different set, each with different Rules to filter by client name? Previously felt the client list was too long for this to work, but RS 17's faster Proactive scanning may make it feasible
    - Tags need an "AND" combiner and not just an "OR". That may not be sensible/possible -- include the ability to use Tags in Rules and you'd make Rules way more powerful
  22. 1 point
  23. 1 point
    I think (not sure) that you should double-click the catalog file (in Windows Explorer) while the backup is NOT running.
  24. 0 points
    Can someone give me an idea whats going on here? All local machines have no trouble logging into the share and reading/writing files. I am able to create a new backup set on a remote machine connected via a site to site VPN. When I run the backup script it asks for media. I've tried dozens of times but I just can't get it to work. Thanks.
  25. 0 points
    I have three Proactive scripts defined: FT_FD to back up both our laptops and desktops, JAD-Cloud as a test to back up my laptop to Minio running on one of our Synology units, and a regular JAD script to a disk storage set. In Schedule, I've set both JAD proactive scripts to every 3 hours, while the FT_FD script is every 1 day. For the JAD scripts, my laptop SSD is the only source. For the FT_FD script, there are ~60 sources; my laptop is not in the source list for the FT_FD script. Neither of the JAD proactives has executed successfully/completely in the last 7 days (last successful backup was 1 June 2020). How can I ensure that my laptop actually gets backed up in a reasonable period? Thanks. Cheers, Jon