MrPete last won the day on November 5 2019

MrPete had the most liked content!

About MrPete

  • Rank
    Frequent Poster


  1. MrPete

    Yet another -530 client not found error

    The problem with belt-and-braces: for some of us, the same computer can show up with different IP addresses on the same overall network (i.e., connecting to different subnets). This requires a dynamic solution for both server and client. AFAIK, at the server end, all methods work just fine. The only serious problem with dynamic IPs is on the client end, as has been discussed.
  2. MrPete

    Yet another -530 client not found error

    Hi all! Sorry for the long delay; the Real World is intense for me right now. The following simple sequence has been a 100% reliable workaround for me:

    1) AFTER an endpoint is stably connected to a LAN via any mechanism (wired, wifi, etc.)...
    2) Restart the Retrospect Client service (stop/start as described earlier).

    The client NEVER gets confused under such conditions, at least in my way-too-extensive testing. Confusion arises when the client initializes at the same time as the computer is initializing various things, including a variety of network-connected startup services. And/or if the network connection changes (e.g. wired to wifi, or to a different wifi network).
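The "restart only once the connection is stable, or whenever the address flips" rule can be sketched as a small decision helper. This is a hypothetical illustration in Python (the function name is mine, not anything shipped with Retrospect): restart when the endpoint's address changed (e.g. wired to wifi) or when it is an APIPA/link-local address.

```python
import ipaddress

def should_restart_client(previous_ip: str, current_ip: str) -> bool:
    """Decide whether the Retrospect Client service should be restarted.

    Restart when the endpoint's address changed (e.g. wired -> wifi) or
    when it is an APIPA/link-local address (169.254.0.0/16), which the
    client may have latched onto while the machine was still booting.
    """
    addr = ipaddress.ip_address(current_ip)
    return current_ip != previous_ip or addr.is_link_local
```

On Windows, a True result would then trigger the admin-prompt pair mentioned elsewhere in this thread: net stop "retrospect client" followed by net start "retrospect client".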
  3. MrPete

    Yet another -530 client not found error

    That's pretty complex compared to restarting the client... but sure, it could be scripted.
  4. MrPete

    Yet another -530 client not found error

    Hi all, I don't have time to go into detail on this... but in sum:

    * I found several MORE related issues.
    * Turns out, some of the problems cannot be repaired by using a fixed IP on the server side.
    * WORKAROUND: restart the Retrospect Client service. At an admin cmd prompt, run two commands: net stop "retrospect client" followed by net start "retrospect client"
    * (Alt workaround for static clients: https://www.retrospect.com/en/support/kb/client_ip_binding )
    * Retrospect engineers are grinding through it all. Based on my background, I am guessing they need to re-architect some pretty deep stuff. It's bug #8512. Last I heard it is being worked on, hopefully for a "soon" update of Retrospect 17.

    In brief:

    * Particularly for clients (laptops etc.) that can shift between wifi and wired...
    * The **client** can easily latch onto a wrong or invalid (169.254.*) IP address. That's not necessarily what the UI says.
    * Once listening there (particularly on UDP for multicast), it will NEVER update until the client service restarts. The TCP client has similar failure modes.
    * There's more, but that's the essential bit.
    * (To see for yourself, have fun with SysInternals TCPView: sort by port and watch port 497. Retrospect already has all they need to fix it.)
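The interface selection the client apparently gets wrong is easy to express in miniature. A minimal sketch (hypothetical helper name, Python stdlib only, not Retrospect's actual logic): skip link-local (169.254.*) and loopback addresses when choosing where a discovery listener should bind.

```python
import ipaddress

def pick_bind_address(candidates):
    """Pick the interface address a discovery listener should bind to.

    Prefers routable (non-APIPA, non-loopback) addresses; a 169.254.*
    address means DHCP never completed on that interface, so listening
    there guarantees the client is unreachable.
    """
    usable = [ip for ip in candidates
              if not ipaddress.ip_address(ip).is_link_local
              and not ipaddress.ip_address(ip).is_loopback]
    return usable[0] if usable else None
```

Note that this returns None when only APIPA/loopback addresses exist, which is exactly the "client listening nowhere useful until the service restarts" symptom described above.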
  5. MrPete

    How to FORCE a file backup on each run?

    Nigel, I am cautious about mucking with original-file metadata. (Example: every file has more than one timestamp. Do we really want to be writing scripts that somehow are maintained to understand all metadata? I would rather not be in that line of work anymore.) I don't want a comprehensive copy every time. If this "don't add duplicate" switch still takes advantage of block-scanning etc., that WOULD be interesting! I'll check it out. THANKS!
  6. MrPete

    How to FORCE a file backup on each run?

    Excluding is not an issue. Using Recycle Backup would essentially remove the value of Retrospect: no versions, no block-level-change efficiency. OUCH! Might as well just do a file copy. Hmmm... that would be interesting:

    - Script a file copy of the file(s) of interest, ensuring the copy has a new timestamp.
    - Since they ARE big, Retrospect would take advantage of block-level efficiency to only back up the blocks that have changed!
    - Script deleting the copy (for security) after the RS run is complete.

    I'll try it!
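The copy-with-a-fresh-timestamp trick scripts in a few lines. A minimal sketch (hypothetical helper names; it deliberately uses Python's shutil.copy rather than copy2, because copy does not preserve the source's modification time, so the copy always looks "changed" to a timestamp-based backup scan):

```python
import os
import shutil

def make_fresh_copy(path: str) -> str:
    """Copy a file so the copy carries a brand-new modification time.

    shutil.copy (unlike shutil.copy2) does NOT preserve the source's
    mtime, so timestamp-based backup scans always treat the copy as new.
    """
    copy_path = path + ".backup-copy"
    shutil.copy(path, copy_path)  # copy gets the current time as mtime
    return copy_path

def remove_copy(copy_path: str) -> None:
    """Delete the temporary copy (for security) after the backup run."""
    os.remove(copy_path)
```

The copy is byte-identical to the original, so block-level incremental backup would still only transfer changed blocks.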
  7. Due to security requirements, certain files may NOT get a timestamp update when they change. (Consider files that contain encrypted filesystems.) Right now, by default Retrospect ignores all files with an "old" timestamp. QUESTION: is there a way to get Retrospect to back up a file even if the timestamp has not changed?
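Whether Retrospect can be told to ignore timestamps is the open question here; as a hedged illustration of the underlying idea (hypothetical helper, not a Retrospect feature), change detection by content hash works even when a file's mtime is deliberately frozen:

```python
import hashlib

def content_changed(path: str, last_digest: str) -> bool:
    """Detect a change by content hash rather than by timestamp.

    Useful for files (e.g. encrypted-container files) whose modification
    time is deliberately not updated when their contents change.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() != last_digest
```

The trade-off is that every candidate file must be read in full on every run, which is exactly the cost timestamp checks exist to avoid.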
  8. MrPete

    How to restore files on top of what's there?

    Well, there's a lot of overlap I guess. (Sorry for delays... Real Life kicks in for me a lot... I'm now the proud owner of a big boot on my right foot, after some extensive ankle surgery. Looking forward to long walks for the first time in many years.)

    David H is correct: what I played with was the GE 635... later renamed Honeywell 635.

    As for the SA1004... OK, I peeked in my 1970-90 archive box. Yep, I worked on those as well as the SA 1002 (5MB version). I no longer remember all of the names, but vaguely recall strong long-term friendships among Shugart founders (possibly Al himself) and others I worked with/for back then: Dr. David H Chung (also formerly w/ Fairchild, inventor of the F8 micro, and of the first retail microcomputer...), and other names -- Dash Chang and more...

    I was a firmware dev/performance/test consultant for Qubex, which made one of the first HDD testers - the QA2000, with both ST506 and SA100x interfaces. I can't find info on who led Qubex. (The only hint I see is about a guy I vaguely remember, Mike Juliff, formerly of Memorex... oh well. The mists of time do tend to fade things out!)
  9. No argument on "more reliable"... I'm "old school" as well. However, something is seriously wrong if your new system, with several times the "disk" throughput (M.2 NVMe does multiple GB/sec), is not seriously quicker. If your SW builds didn't radically improve, something is simply wrong.

     The most common performance killer I've seen, by far, is a crazy number of temp files, or a crazy number of files in ANY important folder. Get more than 1-2,000 files in the temp folder and you will seriously feel it. (The free version of CCleaner handles this and more quite nicely. Yes, there's a built-in tool that kinda-sorta gets there, but not as comprehensively...)

     Next thing: check Windows Disk Write Caching (assuming you have a UPS attached)... that makes a huge difference, particularly for SSDs (seek time is zero, but you want directory-info updates cached...).

     I would suggest SysInternals Process Explorer to examine what's eating up your performance... possibly combined with smartmontools / smartctl (or HDD Sentinel Pro) if a drive isn't giving you what it should.

     I'm sitting at two computers right now:
     - My 2012 "mainframe" (i7-3930K, 32GB, gobs of SSD including a RAID 0 pair used for high-speed video capture)
     - A 2019 Surface Pro 6 (i5-8350U, 8GB RAM, PCIe SSD, 0.9-1.4 GB/sec)

     Except for certain functions (video processing), the newer computer is noticeably quicker. Just for example: my Surface Pro can cold boot to the login prompt in about ten seconds. Not even close on the older computer. In general, nothing takes a long time. All software starts more or less immediately unless it has a TON of preprocessing to do.

     YES, lots of processes. But a few things about modern architecture make the overhead pretty much negligible these days. Multicore plus incredibly fast context switching means those extra processes use VERY little CPU. As in: I have 193 processes "running" on my Surface right now. CPU usage: 2-3 percent. And I've done literally zip to make it more efficient... in fact, I've got several security and convenience apps running.
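The temp-folder rule of thumb above is easy to check for yourself. A minimal sketch (hypothetical function name, Python stdlib only) that counts entries in a folder and flags the likely slowdown:

```python
import os

def temp_file_pressure(folder: str, threshold: int = 2000):
    """Count entries in a folder and flag a likely performance problem.

    Folders (temp especially) holding thousands of files are a common,
    easily fixed performance killer; the 2000-entry default threshold
    follows the 1-2,000-file rule of thumb quoted above.
    """
    count = sum(1 for _ in os.scandir(folder))
    return count, count > threshold
```

Point it at the result of tempfile.gettempdir() (or %TEMP% on Windows) to see where your own machine stands.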
  10. I don't have exactly the same experience... however, I do have a caution to share. Microsoft OneDrive has a similar feature. Last I checked, it's not exactly compatible with ANY backup software, in the following sense:
      - Backups work fine.
      - However, when you go to restore, it restores the EMPTY local folder(s)...
      - ...which causes the cloud copies to get cleared out...
      - ...which means you lose all of your cloud files.
      Unfortunately, I can't place the details on this with a quick Google search...
  11. A very late response. What did you end up doing? Assuming you have everything properly defined for your bare-metal DRD restores, then yes, it "ought" to be OK. If you followed my Admin Guide for DRD, you would have saved a copy of the BCD Store separately... but then what's the fun in that? 😄
  12. Windows 10 doesn't reliably pay attention to the startup folder anymore. Instead: use task scheduler to open apps when you log in. Works like a charm.
  13. AFAIK, R16 is essentially the same with respect to disaster recovery... although I admit I have not dug in as intensely yet...
  14. MrPete

    Yet another -530 client not found error

    Interesting. I just finished discovering a specific set of bugs in Retrospect, and challenges in our router(s) and local network apps, that directly lead to the above anomalies in finding and/or connecting to clients. (Yes, all of the following has been submitted as a bug report.) I'm running a more Windows-centric network, with a little OS X tossed in, so my tools are a bit different.

    Tools:
    * WireShark: shows packets actually traveling. Most useful is a filter of udp.port==497 || tcp.port==497
    * tcpdump (command line in Linux and/or OS X): monitor port 497
    * TCPview (from SysInternals, now owned by Microsoft): sort on port. Look at what is listening on 497, and on what IP address(es)
    * (command line) ipconfig and also "route print"
    * In Retrospect: go into Clients -> choose a client -> Access tab -> Interface -> Configure Interfaces... and see what the default interface IP address is.

    Things to watch for:
    * Are UDP broadcast packets being received by clients? (e.g. 192.168.x.255, port 497)
    * For multicast, are packets getting to clients? (Retrospect uses UDP port 497)
    * Are clients responding to those packets (UDP broadcast or multicast), initially to UDP port 497 on the backup system?
    * If crossing subnets, is TTL set high enough to reach the client?

    What could possibly go wrong? Heh. Here are anomalies I've seen:
    * Often, some app will create virtual interfaces. This includes npcap (for Wireshark etc.), VMware, TAP-Windows (comes with Windows?), etc. This has led to:
      - On some of my clients, some virtual interfaces have APIPA addresses (169.254.*) -- which makes it obvious when Retrospect chooses the wrong interface to listen on! (Workaround: I uninstalled the TAP-Windows adapter as I don't need it, and temporarily disabled npcap on the one workstation where that got in the way.)
      - On my Retrospect backup desktop, Retrospect chose one of the VMware virtual adapters as the default adapter, even though the real gig adapter has higher priority etc. (Workaround: create another adapter in Retrospect.)
      The result in either case: I can't see the clients, even though ping works.
    * I have a network security system that regularly scans all ports on all subnets. Some (but not all) clients get confused by this, with the retroclient app hung on an IP connection in CLOSE_WAIT status. The result: the client is never available for backups, yet it is visible to subnet or multicast discovery.
    * We switched to a pfSense firewall/router. I just discovered that multicast forwarding is badly broken. (Workaround: manually installed pimd.)
    * Similarly, UDP broadcast is often blocked by firewalls. Make sure the packets are getting through!

    Having fixed and/or worked around ALL of the above, and rebooted everything... I can now reliably use either multicast or subnet broadcast to connect with clients.
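For the "are UDP broadcast packets reaching clients?" check above, it helps to know exactly which directed-broadcast address (192.168.x.255 for a /24) the discovery packets should target. A small sketch using Python's stdlib ipaddress module (the helper name and /24 default are mine):

```python
import ipaddress

def discovery_broadcast(interface_ip: str, prefix: int = 24) -> str:
    """Compute the subnet (directed) broadcast address for an interface.

    For a /24 this yields the 192.168.x.255-style address that subnet
    broadcast discovery on UDP port 497 would target; strict=False lets
    us pass a host address rather than the network address.
    """
    net = ipaddress.ip_network(f"{interface_ip}/{prefix}", strict=False)
    return str(net.broadcast_address)
```

Comparing this computed address against what WireShark actually captures (filter: udp.port==497) quickly shows whether a firewall is eating the broadcasts.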