
About MrPete



  1. MrPete

    How to restore files on top of what's there?

    Well, there's a lot of overlap I guess. (Sorry for delays... Real Life kicks in for me a lot... I'm now the proud owner of a big boot on my right foot, after some extensive ankle surgery. Looking forward to long walks for the first time in many years!)

    David H is correct: what I played with was the GE 635... later renamed the Honeywell 635.

    As for the SA1004... OK, I peeked in my 1970-90 archive box. Yep, I worked on those as well as the SA1002 (5MB version). I no longer remember all of the names, but vaguely recall strong long-term friendships among Shugart founders (possibly Al himself) and others I worked with/for back then: Dr. David H Chung (also formerly w/ Fairchild, inventor of the F8 micro, and of the first retail microcomputer...), and other names -- Dash Chang and more...

    I was a firmware dev/performance/test consultant for Qubex, which made one of the first HDD testers -- the QA2000, with both ST506 and SA100x interfaces. I can't find info on who led Qubex. (The only hint I see is about a guy I vaguely remember, Mike Juliff, formerly of Memorex... oh well. The mists of time do tend to fade things out!)
  2. No argument on "more reliable"... I'm "old school" as well. However, something is seriously wrong if your new system, with several times the "disk" throughput (M.2 NVMe does multiple GB/sec), is not seriously quicker. If your SW builds didn't radically improve, something is simply wrong.

    The most common performance killer I've seen, by far, is a crazy number of temp files -- or a crazy number of files in ANY important folder. Get more than 1-2,000 files in the temp folder and you will seriously feel it. (The free version of CCleaner handles this and more quite nicely. Yes, there's a built-in tool that kinda-sorta gets there, but not as comprehensively...)

    Next thing: check Windows disk write caching (assuming you have a UPS attached)... that makes a huge difference, particularly for SSDs (seek time is zero, but you want directory info updates cached...).

    I would suggest SysInternals Process Explorer to examine what's eating up your performance... possibly combined with smartmontools / smartctl (or HDD Sentinel Pro) if a drive isn't giving you what it should.

    I'm sitting at two computers right now:
    * My 2012 "mainframe" (i7-3930K, 32GB, gobs of SSD including a RAID0 pair used for high-speed video capture)
    * A 2019 Surface Pro 6 (i5-8350U, 8GB RAM, PCIe SSD at 0.9-1.4GB/sec)

    Except for certain functions (video processing), the newer computer is noticeably quicker. Just for example: my Surface Pro can cold boot to the login prompt in about ten seconds. Not even close on the older computer. In general, nothing takes a long time; all software starts more or less immediately unless it has a TON of preprocessing to do.

    YES, lots of processes. But a few things about modern architecture make the overhead pretty much negligible these days. Multicore plus incredibly fast context switching means those extra processes use VERY little CPU. As in: I have 193 processes "running" on my Surface right now. CPU usage: 2-3 percent. And I've done literally zip to make it more efficient... in fact, I've got several security and convenience apps running.
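The "count your temp files" check above is easy to script. A minimal sketch (the 2,000-file threshold is the rule of thumb from this post, not an official limit):

```python
import os
import tempfile

def count_files(folder):
    """Count regular files directly inside a folder (non-recursive)."""
    return sum(1 for entry in os.scandir(folder) if entry.is_file())

# Warn when the temp folder has piled up past the ~1-2,000-file danger zone.
temp_dir = tempfile.gettempdir()  # honors %TEMP% on Windows, $TMPDIR elsewhere
n = count_files(temp_dir)
print(f"{n} files in {temp_dir}" + (" -- time to clean up!" if n > 2000 else ""))
```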
  3. I don't have exactly the same experience... however, I do have a caution to share: Microsoft OneDrive has a similar feature. Last I checked, it's not exactly compatible with ANY backup software, in the following sense:
    * Backups work fine
    * However, when you go to restore, it restores the EMPTY local folder(s)
    * Which causes the cloud copies to get cleared out
    * Which means you lose all of your cloud files
    Unfortunately, I can't place the details on this with a quick google search...
  4. A very late response What did you end up doing? Assuming you have everything properly defined for your bare metal DRD restores, then yes it "ought" to be ok. If you followed my Admin Guide for DRD, you would have saved a copy of the BCD Store separately... but then what's the fun in that? 😄
  5. Windows 10 doesn't reliably pay attention to the Startup folder anymore. Instead, use Task Scheduler to open apps when you log in. Works like a charm.
  6. AFAIK, R16 is essentially the same with respect to disaster recovery... although I admit I have not dug in as intensely yet...
  7. MrPete

    Yet another -530 client not found error

    Interesting. I just finished discovering a specific set of bugs in Retrospect, plus challenges in our router(s) and local network apps, that directly led to the above anomalies in finding and/or connecting to clients. (Yes, all of the following has been submitted as a bug report.) I'm running a more Windows-centric network, with a little OS X tossed in, so my tools are a bit different.

    Tools:
    * WireShark: shows packets actually traveling. Most useful is a filter of udp.port==497 || tcp.port==497
    * tcpdump (command line in Linux and/or OS X) - monitoring port 497
    * TCPView (from SysInternals, now owned by Microsoft) - sort on port. Look at what is listening on 497, and on what IP address(es)
    * (command line) ipconfig, and also "route print"
    * In Retrospect: go into Clients -> choose a client -> Access tab -> Interface -> Configure Interfaces... and see what the default interface IP address is

    Things to watch for:
    * Are UDP broadcast packets being received by clients? (eg 192.168.x.255, port 497)
    * For multicast, are packets getting to clients? (Retrospect uses UDP port 497)
    * Are clients responding to those packets (UDP broadcast or multicast), initially to UDP port 497 on the backup system?
    * If crossing subnets, is TTL set high enough to reach the client?

    What could possibly go wrong? Heh. Here are anomalies I've seen:
    * Often, some app will create virtual interfaces. This includes npcap (for Wireshark etc), VMware, TAP-Windows (comes with Windows?), etc. This has led to:
      - On some of my clients, some virtual interfaces have APIPA addresses (169.254.*) -- which makes it obvious when Retrospect chooses the wrong interface to listen on! (Workaround: I uninstalled the TAP-Windows adapter as I don't need it, and temporarily disabled npcap on the one workstation where it got in the way.)
      - On my Retrospect backup desktop, Retrospect chose one of the VMware virtual adapters as the default adapter -- even though the real gig adapter has higher priority etc. (Workaround: create another adapter in Retrospect.)
      The result in either case: I can't see the clients, even though ping works.
    * I have a network security system that regularly scans all ports on all subnets. Some (but not all) clients get confused by this, with the retroclient app hung up on an IP connection in CLOSE_WAIT status. The result: the client is never available for backups, yet it is visible to subnet or multicast.
    * We switched to a pfSense firewall/router. I just discovered that multicast forwarding is badly broken. (Workaround: manually installed pimd.)
    * Similarly, UDP broadcast is often blocked by firewalls. Make sure the packets are getting through!

    Having fixed and/or worked around ALL of the above, and rebooted everything... I can now reliably use either multicast or subnet broadcast to connect with clients.
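Ping alone doesn't prove retroclient is reachable -- a client can answer ICMP while the service is wedged in CLOSE_WAIT or listening on the wrong interface. A quick complement to the tools above is a direct TCP probe of port 497; a minimal sketch (the client address shown is hypothetical):

```python
import socket

RETRO_PORT = 497  # Retrospect client port (both TCP and UDP)

def tcp_port_open(host, port=RETRO_PORT, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Hypothetical client address -- substitute one of your own:
# print(tcp_port_open("192.168.1.50"))
```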
  8. MrPete

    How to restore files on top of what's there?

    Such fun. I basically ended up accomplishing the purpose manually: 1) restore all data to a separate drive, 2) overwrite all files on the drive of interest. The files with mangled data can't be detected except by comparing actual content. The metadata is unchanged, because the changes were made lower than OS level.

    As for computer history... I'm a bit younger than y'all 😄
    * I played with IBM 026 card punches while in elementary school at Stanford (my dad was a grad student). (Amazing class! One friend's dad had the first desktop calculator, the HP 9100A. Another brought her dad for Show & Tell: Arthur Schawlow, co-inventor of the laser. His hour-long demo changed my life...) https://news.stanford.edu/news/1999/photos/schawlow200.jpg
    * Dad then became a research scientist for GE. I had access at home to the GE 635 mainframe (the same computer used for the Dartmouth Time Sharing System)... MD-20 drum storage and all. We had a teletype, then a TI Silent 700. In our house! Whoo! I was probably one of the first kid-hackers... all for the good. I even got a letter of commendation from the R&D Center head, and a gift of a week-long professional simulation course -- a whole week out of high school. (https://upload.wikimedia.org/wikipedia/commons/2/28/Silent-700.jpg)
    * In college I helped build our student computing center, based on the DECSystem-20 (never forget the JFFO instruction... and the almost-apocryphal HCF)
    * Ultimately, I spent years as a SiValley consultant, including early HDD test equipment etc. Nope, never worked on IBM drives. My first professional HDD work was on the Shugart ST-506. My home computer in 1981 had somebody's 14" hard drive. Sounded like a jet engine... I wasn't allowed to turn it on if our baby daughter needed a nap!
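Since mangled files can only be caught by comparing actual content, the "restore to a separate drive, then compare" step can be sketched like this -- a minimal illustration using SHA-256 digests (the tree-walking helper and its names are my own, not a Retrospect feature):

```python
import hashlib
from pathlib import Path

def file_digest(path, chunk=1 << 20):
    """SHA-256 of a file's contents, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_mangled(restored_root, suspect_root):
    """Yield relative paths present in both trees whose bytes differ,
    even when size and timestamps match perfectly."""
    restored_root, suspect_root = Path(restored_root), Path(suspect_root)
    for restored in restored_root.rglob("*"):
        if restored.is_file():
            rel = restored.relative_to(restored_root)
            suspect = suspect_root / rel
            if suspect.is_file() and file_digest(restored) != file_digest(suspect):
                yield rel
```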
  9. MrPete

    How to restore files on top of what's there?

    Remember, it's not that the SSD fails. It's simply not an archival storage medium. And I'm not so sure we can say HDDs are *that* much better today. My work on early drives (yes, I've been doing HDDs since the 1970's) was simpler, because the stored bits were monstrously big compared to today. (Anyone here remember the technique of literally using a pencil eraser tip to shove an HDD into spinning? They were truly not sealed!!!)

    Meanwhile, today we no longer store bits in the HDD magnetic wiggles. We don't even store encoded bits. Today, it's more like a Douglas Adams sci-fi joke! Adams talked about the "improbability drive"... well, how about a "probability curve"?! To compress the data, we calculate an exponential function that represents the bits in a sector, and store the parameters of the function. If one bit is wrong, you just lost an entire sector. That's PRML ("Partial Response, Maximum Likelihood")... such fun 😄
  10. MrPete

    How to restore files on top of what's there?

    Sigh. Mark, thanks for confirming what I sensed. This really is a bug in Retrospect. I'm about to report it.

    Let's say you choose "Replace Corresponding Files" from the dropdown. My expectation: it will replace the corresponding files. The documentation is clear about this. Unfortunately, it doesn't work. Retrospect ASSUMES that the data content matches if the metadata matches. Boo. I guarantee that the "already copied" files aren't the only ones... and in fact, if I prepare a *backup* of the bad data, it's going to copy way more than a few hundred files. Time to report this bug.

    ========

    A few quick comments on Mark's other suggestions:

    Preserving Data: In my situation, I've already replicated the drive. Nothing will be lost.

    SpinRite: I know exactly how/why the drive was partially zero'd. (It was my mistake.) No need for SpinRite this time. (YES, I highly recommend SpinRite... look for Pete H here: https://web.archive.org/web/20040825043909/https://www.grc.com/sr/testimonials.htm)

    Bit Rot and Drive Replacement: In reality, bit rot can happen on ANY drive. In no way does bit rot imply a drive that needs to be replaced! Just for example: consumer SSDs retain data "perfectly" for approximately one year (if you've not overwritten a sector) before bit rot begins to show. It's worse for enterprise SSDs (because they are focused on performance over data retention). This is not exactly advertised by the industry; dig into the latest research and you'll discover the reality... NOTE: some manufacturers MAY be compensating for this and taking care of it in firmware, but I have yet to see such workarounds documented. And YES, I have been harmed by this. Consider: what is the one sector on a drive or partition that is NEVER overwritten?...

    How to avoid this: regularly rewrite every sector of an SSD. I do it about 3-4 times a year. I could point you at the (very!) technical research work that underlies what I wrote above. I agree that it's more than a little surprising.
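The "rewrite every sector" idea can be illustrated at the file level -- a sketch only, and my own illustration rather than the poster's actual procedure: a real whole-drive refresh works below the filesystem with a dedicated tool, and you want a verified backup in hand before touching anything.

```python
import os

def rewrite_in_place(path, chunk=1 << 20):
    """Read a file and write the identical bytes back over themselves,
    prompting the SSD to re-record each block with fresh charge.
    Illustrative: a real refresh operates on the raw block device."""
    with open(path, "r+b") as f:
        offset = 0
        while True:
            f.seek(offset)
            data = f.read(chunk)
            if not data:
                break
            f.seek(offset)
            f.write(data)  # same bytes, so content is unchanged
            offset += len(data)
        f.flush()
        os.fsync(f.fileno())  # push the rewrite through the OS cache
```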
  11. You might find this post from last year helpful...
  12. I am surprised this is not an obvious functionality.

    Scenario:
    - Existing drive, good backups
    - For !@#$% reasons, a chunk of an NTFS partition was literally zero'd out at a low level
    - Result: files are all in place but I guarantee their contents have changed. Consider it bit rot on a massive scale

    What I need:
    - Restore essentially all backed-up files, ignoring the fact that the metadata matches perfectly
    - In other words, I want to do a restore-with-overwrite
    - Important note: I do NOT want to simply restore the volume if at all possible, because there are many new/changed files not in the backup

    I can't find a way to do this. What am I doing wrong? Mostly, it auto-deselects all the files that appear to already be there (matching in size and date stamps). Any ideas much appreciated. I'm about ready to do the obvious/pain-in-the-neck workaround...
    - Find all new files and copy elsewhere, and/or
    - Delete all files that aren't new

    THANKS! Pete
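The "find all new files" half of that workaround is scriptable: anything modified after the last backup is, by definition, not in it. A rough sketch under that assumption (the path and cutoff in the comment are hypothetical):

```python
import time
from pathlib import Path

def files_modified_since(root, cutoff_epoch):
    """List files under root whose modification time is newer than the
    cutoff -- a rough way to find the new/changed files that must be
    copied aside before a restore-with-overwrite clobbers them."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime > cutoff_epoch]

# Hypothetical: everything touched since a backup taken 7 days ago.
# new_files = files_modified_since(r"E:\Data", time.time() - 7 * 86400)
```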
  13. Very interesting...
    * The AA*.rdb files are from mid-May 2018 and today
    * The AB*.rdb files are from June 2018 all the way through until today
    * Somehow they were both in the same folder and Retrospect was not upset... but it sure was upset now!
    AA rebuild finished; AB rebuild under way...
  14. I solved it, and learned a few things along the way. I finally examined to see what was so special about 262 files... and discovered something I probably should have noticed before: the folder contains TWO sets of *.RDB files! AA*.rdb and AB*.rdb -- AA*.rdb is 262 files. I am guessing it is what had been built during the failing groom, **even though** most of the files have create and modify dates that are quite old! So:
    * I split out the AA*.rdb files into a separate folder and am rebuilding a catalog on those, just to see what's there
    * Then I'll rebuild the AB*.rdb files
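Splitting one .rdb series out of a mixed folder is a one-liner per set. A minimal sketch of the step described above (the folder paths in the comment are hypothetical; as always, work on a copy of the backup set, not the only one):

```python
import shutil
from pathlib import Path

def split_rdb_set(folder, prefix, dest):
    """Move one Retrospect .rdb series (e.g. every file named AA*.rdb)
    into its own folder, so each set can be cataloged separately.
    Returns the number of files moved."""
    folder, dest = Path(folder), Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    moved = sorted(folder.glob(f"{prefix}*.rdb"))
    for f in moved:
        shutil.move(str(f), dest / f.name)
    return len(moved)

# Hypothetical paths: peel the AA series out of a mixed backup-set folder.
# split_rdb_set(r"D:\Backups\Set1", "AA", r"D:\Backups\Set1-AA")
```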