DavidHertzberg

Members
  • Content count

    1,378
  • Joined

  • Last visited

  • Days Won

    49

DavidHertzberg last won the day on March 25

DavidHertzberg had the most liked content!

Community Reputation

79 Excellent

About DavidHertzberg

  • Rank
    Occasional Forum Poster

Profile Information

  • Gender
    Male
  • Location
    New York, NY
  • Interests
    Retired applications programmer, with a few Macs at home.

  1. (Disclaimer: Anything I may say about the intentions of Retrospect "Inc." in this or any other post is merely the result of "reading the tea leaves", the "tea leaves" being documentation and public announcements supplemented by an occasional morsel from Retrospect Sales. I have never been paid a cent by Retrospect "Inc." or its predecessors, and I pay for my upgrades. Any judgements expressed are—obviously—mine alone. The same is true of Retrospect's history, especially with references to here.)

     roobieroo, If another administrator doesn't immediately come up with an answer, I strongly suggest you file a Support Case—here's how to do that. The first paragraph below the quote in this April 2021 post in another thread suggests following that up with an e-mail to Werner Walter, Retrospect "Inc."'s Director of Sales Worldwide, describing your problem and giving him the Support Case number. The last paragraph in that same post explains the reasoning behind my e-mail suggestion.

     The first sentence in that paragraph says that, because StorCentric's previous products—before it bought Retrospect Inc.—have been NASes including Drobo, StorCentric has required "Retrospect developers—with help from other StorCentric developers—to produce a version of the 'backup server' program that runs on a beefed-up Drobo [NAS] device (and probably on other Linux-based NASes)." The latest I hear is that that "backup server" version may miss the Retrospect 18.0 release, which is expected by the end of June 2021.

     However, IMHO StorCentric top management won't be pleased to hear that the "backup server" for Retrospect Mac 16 (unless you've recently upgraded to Retrospect Mac 17—whose cumulative Release Notes list fixes for bugs #8597 and #8600 that might fix your problem) doesn't back up "Applications/installers on the NAS that generate this error message or if a user copied items to the NAS that would normally require full disk access such as their Library folder."
Politics.😎
  2. DavidHertzberg

    Cloud Backup Strategy

    dvenardos, My re-reading of your OP and reading of your second post made me realize that you started this thread for the sole purpose of showing your boss: "Look, I told you Retrospect couldn't do the off-site backup the way we want to do it—and here's proof that Retrospect users (not only their Tech Support) admit it." That makes me angry enough that I'm going to spend a few more minutes casting further doubt on the way you want to do off-site backup.😡

    First, here's a 2018 thread from this very Forum that discusses blm14's desire to use an Amazon VTL. Starting with the post below blm14's OP, Nigel Smith and I pretty definitively shot that down as a bad idea. The only thing I have to add is that Retrospect as of 2016 supports 28 Virtual Tape Libraries with either iSCSI or Fibre Channel interfaces. You insist on using the only two VTLs that interface over Ethernet; I find it difficult to believe that in your new colocation facility you can't connect a VTL except over Ethernet. Besides, as the second paragraph in my first post in that linked-to thread says, "what blm14 would be getting for that cost is 'low-latency access to data through transparent local caching'." You don't need local caching; you're already doing that with the Ola Hallengren SQL Server Maintenance Solution. That's probably why Retrospect doesn't support Ethernet VTLs; one can do local caching using Retrospect scripts. IMHO, for you VTLs are just another gimmick for giving non-cloud-destination backup applications cloud destinations.

    Second, you admit that the cloud solution you want isn't going to have a monthly cost appreciably less than sending tapes to Iron Mountain. Pardon me for boring everyone else on these Forums with a repeat of how I do off-site backup in my itty-bitty home installation: every Saturday morning I do a Recycle backup of all my drives to a rotated-in portable storage device.
    Then, early every other morning of the week, I do an incremental backup to that same portable storage device. Every Friday mid-morning I carry the current-week portable storage device off-site to my bank branch two short blocks away and put it in my US$90/year safety deposit box, after removing the portable storage device I put there the previous Friday. Since my bank branch isn't open 24/7/365 for immediate retrieval, I don't immediately re-use the portable storage device I retrieved—instead I store it just inside my apartment door for a week before re-using it. Thus my 3 (only 2 needed given 24/7/365 retrieval) weekly-rotated backup sets are designated Red, White, and Blue.

    And here's tape news: I now actually use pairs of portable storage devices; the device for my new computers is a portable HDD, but the device for my late ex-wife's 2001 G4 Mac is a DAT tape cartridge (because two years ago Retrospect Inc. eliminated the ability to back up Macs with really old versions of macOS, so I run an old version of Retrospect on that old machine to back up using the DAT tape drive I used for years). Last week I temporarily took home everything else stored in my bank safety deposit box, so I can tell you immediately (after measuring a folded document that was also stored in my box) that my US$90/year safety deposit box is just wide enough to hold an LTO cartridge—and long enough and high enough to hold a dozen stacked.

    Do you really mean to tell us that, if you sent a pogue to your colocation facility every three months carrying an LTO-8 tape drive and several LTO-8 cartridges, the other people at that facility would not permit him/her to plug in the tape drive and sit there changing cartridges until your every-three-month "archiving" job is finished—after which he/she would carry the newly-used cartridges to a bank safety deposit box?
    🙄 I decided years ago that if Kim Jong Un H-bombed New York City my little retirement business as well as my personal affairs would be ended, so I don't store my portable storage backup devices at Iron Mountain. If your organization needs to do off-site backup that can survive a nuclear attack, may The Auditor bless it. 🤣

    P.S.: Mellowing slightly, here's how to do what you want with Retrospect: if each of your SQL Server backups won't exceed 25TB, you should back them up to the cloud using four 3-months-rotated destination backup sets. You'd create a separate schedule for each destination backup set by clicking the Schedule button, scheduling it at 3-month intervals per pages 232–234 of the Retrospect Windows 16 UG. For each of these 4 schedules you'd designate the destination backup set per the To: pop-up shown at the bottom right of the second screenshot on page 233; of course you'd have to have pre-defined all 4 destination backup sets per page 177. You'd also specify Recycle Backup in the Action: pop-up to the left of the To: pop-up, so that each backup set would be reused once a year. So long as you use cloud destination backup sets this isn't going to decrease your cost—but it would avoid the danger Retrospect Tech Support warned you about. You could greatly reduce costs by instead using 4 tape destination backup sets, per the third paragraph of this post, which wouldn't limit the size of each destination backup set to 25TB. AFAICT your RClone solution would run into the same 100TB total limitation at the remote site, unless the remote site isn't subject to the same limitation as cloud providers.
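    The 4-set rotation sketched in that P.S. is easy to express in code. Here's a hypothetical Python helper illustrating the logic—Retrospect itself does this via four schedules and Recycle Backup, and the set names and function are mine, not anything Retrospect provides:

```python
from datetime import date

# Hypothetical names for the 4 pre-defined destination backup sets.
BACKUP_SETS = ["Set A", "Set B", "Set C", "Set D"]

def quarterly_backup_set(run_date: date) -> str:
    """Pick the destination backup set for a quarterly Recycle run.

    Each set is targeted once per year, so every backup set is
    reused (Recycled) at 12-month intervals, as described above.
    """
    quarter = (run_date.month - 1) // 3  # 0 = Jan-Mar ... 3 = Oct-Dec
    return BACKUP_SETS[quarter]

print(quarterly_backup_set(date(2021, 2, 1)))   # a February run -> "Set A"
print(quarterly_backup_set(date(2021, 11, 1)))  # a November run -> "Set D"
```

    The same modular idea covers my Red/White/Blue weekly rotation: index by ISO week number modulo 3 instead of by quarter.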
  3. DavidHertzberg

    Cloud Backup Strategy

    dvenardos, I assume (you didn't say) that you intend to use or are using Retrospect Windows, and that it will be the latest version—rather than some ancient version you inherited from your predecessor. However the page references I may give you are to the Retrospect Windows 16 and Mac 16 User's Guides, because an employee I call the StorCentric Slasher has robotically deleted screenshots and explanatory paragraphs from the version 17 UGs. Don't worry, the features I may talk about in this thread haven't changed in years, and Retrospect Inc.'s Product Management refused to update anything besides the "What's New" chapters of the UGs from 2015–2019.

    I'll further assume that your objective in "archiving"—in your sense of the term rather than Retrospect's—this data offsite is either [1] to guard against a disaster—such as flooding—happening to your server room or [2] to enable not-so-rapid recovery of old transactions you must legally retain without tying up disk space in your server room. I agree with Retrospect Tech Support that both these reasons argue against your using Cloud backup sets.

    As to objective [1], IIRC most cloud providers effectively limit a customer's downloads to about 20TB per day. So it would take 1.5 days to do disaster recovery from even one of your proposed 30TB Cloud backup sets.
    I don't think your upper management would be happy about that kind of delay.🙄 Amazon Infrequent Access will cost your organization US$1250/month for storing 100TB, although you may be able to cut that price to US$400/month using the Archive Access Tier or a cheaper supplier such as Backblaze B2. IMHO you'd be better off doing your "archiving" to LTO tapes, at an initial cost of around US$3700 for a tape drive plus 8 LTO-8 tapes (assuming you can't do further compression). LTO tape cartridges measure 102.0 × 105.4 × 21.5 mm; if your organization can't find an off-site location to securely store a dozen of those for physical access within 3 days, it has other problems.🤣

    As to objective [2], if I understand you correctly, your Retrospect Cloud backup sets would only contain one file for each 3 months of backups of a SQL Instance. So you'd have to say "the transactions we want must have been backed up from this SQL Instance sometime in that 3-month period", download and restore the entire SQL Instance with Retrospect, and then use SQL Server to pick out the desired transactions in the SQL Instance. That doesn't sound like a speedy process. 🤣 However if you were backing up at the transaction level—which I understand is possible with some non-free SQL Server backup software—then Retrospect backup sets could be used in a more precise manner.

    P.S.: Based on a hurried first reading of your OP, I thought there'd be a question about Retrospect doing backups at 3-month intervals. So—before reading the OP carefully—I'd already looked it up on pages 232–233 of the Retrospect Windows 16 UG and pages 133–134 of the Retrospect Mac 16 UG (ver. 16 because—as I expected—the StorCentric Slasher has deleted the associated screenshots in the ver. 17 UGs). I also did an experiment on my Mac 16 "backup server"; the dialog label line starting with "every" does indeed change to "month(s)" when the "repeat" line above it has "monthly" in the pop-up.
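    For anyone who wants to check the figures above, here's the arithmetic as a small Python sketch. The 20TB/day cap and the ~US$0.0125/GB-month rate are this post's working assumptions, not provider quotes:

```python
def restore_days(backup_tb: float, cap_tb_per_day: float = 20.0) -> float:
    """Days to download a backup set under a provider's daily download cap."""
    return backup_tb / cap_tb_per_day

def monthly_storage_cost_usd(stored_tb: float, usd_per_gb_month: float) -> float:
    """Monthly storage cost, treating 1 TB as 1000 GB."""
    return stored_tb * 1000 * usd_per_gb_month

# One 30TB Cloud backup set under a 20TB/day cap:
print(restore_days(30.0))                       # 1.5 (days)
# 100TB at an Infrequent-Access-style ~US$0.0125/GB-month:
print(monthly_storage_cost_usd(100.0, 0.0125))  # 1250.0 (US$/month)
```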
  4. SHAB, I did get an answer from the Worldwide Head of Retrospect Sales the following day—16 April 2021. IMHO the first thing you should do is to file a Support Case for a feature request; here's how to do that. I can't do it myself because [1] I use Retrospect Mac—not Retrospect Windows—and [2] I don't use Docker in my simple home installation. My personal experience is that Retrospect Support will ask the person who files a Support Case to be a beta tester for the requested feature or bug fix. The second thing you should do is to e-mail Werner Walter the number of your Support Case—which you should write down when you submit it.

     I'm sorry to subject you to the deficiencies of Retrospect "Inc."'s Support Case software. They've rented the software for years from another company (which also rented it to the Taiwanese hardware company ATEN, maker of my 5-year-old KVM switch), and—as I've noted in the linked-to post—it's somewhat primitive. It's designed so that—for Retrospect customers rather than employees (such as Werner)—only the customer who submits a Support Case can read it.

     Be aware that what's going on behind the scenes—see my disclaimer in an earlier post—focuses on the management of StorCentric (which "merged" Retrospect Inc. into itself on 25 June 2019) requiring Retrospect developers—with help from other StorCentric developers—to produce a version of the "backup server" program that runs on a beefed-up Drobo device (and probably on other Linux-based NASes).
     I dare to say this much on these Forums only because Mihir Shah, the CEO of StorCentric, publicly announced in 2019 that he wants to do it. Since NASes don't have their own monitors, keyboards, and mice, this means that non-Management Administration Console programs must be developed that control the "backup server" from another machine on the same LAN/WAN. Retrospect Mac has had such an Administration Console since 2009. However—for reasons explained in this section of the old version of the Wikipedia article—Retrospect Inc. had to leave the Retrospect Windows "backup server" with a multi-threaded implementation of the same built-in GUI it's had since the early 1990s.

     So Retrospect "Inc." has now brought in a GUI development expert, but from what I hear there are a lot of meetings going on—presumably about what the new GUI should look like, etc. That's why the release of Retrospect 18.0 has been delayed far beyond the early-March date customary for x.0 releases of new Retrospect versions. I've been told that the release of Retrospect 18.0 is expected before the end of June 2021.
  5. SHAB, Yesterday I phoned the Worldwide Head of Retrospect Sales, and left a message asking whether the soon-forthcoming Retrospect 18.x would handle Docker containers. He phoned back while I was out to dinner (they've got several people out sick and he was in meetings), and left a message that is a bit muddy on my answering machine. When I listened to it again tonight, I understood him to be saying that Retrospect "Inc." has no plans to handle Docker in Retrospect 18.x. However his message indicates that he believes Docker is a Linux distribution, and I shudder to think about how he re-phrased my question when talking to the engineers. 🙄

     I'm merely an ancient home user of Retrospect Mac (although one of my "clients" in 2002–2004 was a Windows 95 machine forced on me for Work From Home by my bosses' boss). The only time I've ever encountered a VM was briefly as a remote user in 1969 of what later became IBM's mainframe VM/370, but I read enough Ars Technica posts to be somewhat aware of what Docker is and its current importance. I'll write him an e-mail tomorrow, containing links to that Wikipedia article as well as to this thread—and stressing that Bacula will handle Docker containers. Salespeople worry about competitors' capabilities, so that should perk up his ears. 🤣

     P.S.: E-mail sent 21:37 on 15 April 2021. In the first paragraph it also links to the Ars Technica front-page article saying Docker now runs natively on the Apple Silicon M1 chip, as requested by many developers. In the second paragraph it also links to a YouTube video on Bacula Enterprise principles, a Web page showing Bacula's "backup server" only runs on Linux, FreeBSD, or Solaris, and the Web page on Bacula and Docker you linked to below.
I then said "Bacula—rather than just Synology’s Hyperbackup or OWC's maybe-back-from-the-dead BRU—looks like the competition StorCentric will run into for the Retrospect 18.x 'backup server' running on Linux [publicly predicted by StorCentric management]." How's that for motivating Product Management? 😎
  6. SHAB, Unfortunately this post in a January 2020 Forums thread says … However that was for Retrospect Mac 16.6. You might consider upgrading your "backup server" to the latest 17.5.2 release, if you haven't already done so. For that release, the cumulative Release Notes for Retrospect Windows include …
  7. DavidHertzberg

    Disaster Recovery doesn't support high DPI screens

    IMHO cgtyoder—because he's definitely got a license for Retrospect Windows 17—should create a Support Case at https://www.retrospect.com/en/rscustomers/sign_in?locale=en . If he isn't already signed up, he should click that link to do so. He should be aware that, if his problem statement runs much over 2000 characters (16 lines of a 9-inch-wide inner window), it will spill over into an Additional Note. He should also be aware that Retrospect "Inc."'s highly-advanced Support Case system doesn't allow linking or underlining; I use before-and-after underline characters, and I paste in links with a space afterward to facilitate copying into a browser link window. Having done that, cgtyoder should put the Support Case number into a new post in this thread. x509 can then create another Support Case, with a Problem Statement mention of cgtyoder's Support Case number. That will get around the wonderful feature that limits access of a Support Case to the administrator who submitted it. Be aware that these problems may turn out to be a result of limitations in Microsoft's WinPE underpinnings for Disaster Recovery.
  8. backy, After some more belated thought and one little experiment, I'd like to revise my recommendation in the last paragraph of this up-thread post. If you want to use a Storage Group defined on a Retrospect Windows "backup server" as a Destination, you may be able to get away with it—but I'd advise against it.

     My belated thought was that you could define a Grooming policy for the Storage Group. My experiment showed I can do this even on a Retrospect Mac 16 "backup server". Presumably the Grooming policy is applied to each component Backup Set as it is automatically created—when a new machine-volume is added as a Source for a script whose destination is the Storage Group—but I can't confirm this because of the so-far-incomplete Storage Group GUI in Retrospect Mac. Also presumably, in Retrospect Windows you could modify the Grooming policy and the initial Member size for a particular component Backup Set, but again I can't confirm this. The ability to do those modifications depends on being able to use the Retrospect Windows GUI to directly access a component Backup Set; there's currently no such capability in Retrospect Mac's GUI.

     The combination of these two capabilities—if they exist in Retrospect Windows—would allow you to tailor the maximum initial Member size for a particular component Backup Set. This—done carefully—would enable you to ensure that the sum of all components' initial Member sizes never actually exceeds the size of the Storage Group's defined Destination disk. Therefore, if you ran Transfer scripts frequently enough, you could make sure that all files from components had been Transferred to tape before they were groomed out of existence. So you wouldn't have to run any Recycle scripts having the Storage Group as a Destination; you could rely on the components' Grooming policies. If you can add additional Members to a particular component Backup Set, that would provide an additional safety factor.
I can't do this either, again because the so-far-incomplete Retrospect Mac Storage Group GUI won't let me directly access a Storage Group's component Media Sets. Of course your Transfer scripts wouldn't be copying files simultaneously backed up by your Proactive scripts (because you couldn't make them use one Execution Unit)—pending enhancement per my Support Case #54601 (case# in P.P.S.). And it'd take substantial effort for you to explain this strategy to another employee of your company. Undoubtedly my recommendation in that up-thread post that you not use a Storage Group is still the wise choice.
  9. What Lennart_T says may "always" be true nowadays—especially for LTO "tape stations"—but it wasn't true in the past. IIRC my first DAT drive, from DAT Technologies, did not have hardware compression—which I could have used, because at one point I was backing up 4 machines in my and my then-wife's home installation. I was creating at least 2 DAT tapes from my 7-hour Saturday Recycle runs, but I couldn't use software compression because my "backup server" machine was slow. I had hopes when I got the HP StorageWorks DAT72 drive, but it turned out Retrospect Mac 6.1 didn't support its hardware compression feature.

     backy, make sure for your Transfer scripts that you don't click the More Choices button shown in the dialog on page 213 of the Retrospect Windows 16 User's Guide. That leads to the options shown on pages 360–361, but you want those options to default to Match source volumes to Catalog File and Don't add duplicates to Backup Set. That will make sure newly-backed-up-to-disk files are copied to tape once—and only once—so long as their contents don't change, allowing emergency retrieval despite later grooming of your disk Backup Sets.
  10. backy, Consider using the Data Compression (in software) option (page 357 of the Retrospect Windows 16 User's Guide) on your Transfer scripts. That'll save tape space. OTOH the option may slow down your Transfer scripts if you don't have a powerful "backup server" machine; the ancient HP DAT 72 tape drive that I use for backing up my (now-deceased) ex-wife's old Digital Audio G4 Mac has a hardware compression capability, but ancient Retrospect Mac 6.1 doesn't support it.

      I learned about Storage Groups to fully answer other administrators' questions, starting with this March 2019 post in a thread whose OP asked about running multiple Backup jobs to the same Backup Set. I was curious enough to run a couple of experiments on my own home installation, which is how I learned about how Storage Groups really work but also about the limitations of their current Retrospect Mac GUI. If you liked my "amazing" post that much, you could click the "Like" heart icon at its bottom right. The head of Retrospect Tech Support runs a contest every few days; I enjoy competing for "most liked content". Lennart_T's second post in this thread is also pretty helpful, so maybe you should "Like" that post too; competition is good.😁

      P.S.: If you're going to give the Backup/Proactive script and the Transfer script for a particular Source the same Execution Unit, I wouldn't use the New Backup Set Backup Action. I haven't used it, but it sounds like a potential complication.
  11. backy, I was going to make the same suggestion as Lennart_T yesterday afternoon, in an additional paragraph in this preceding post—but I had to leave for a dental cleaning appointment. The screenshot at the top of page 176 in the Retrospect Windows 16 User's Guide (I'm referring to that because the Retrospect 17 User's Guides have been subject to the attentions of the StorCentric Slasher—e.g. in the last paragraph of that linked-to post) shows where to specify the Execution Unit for a Backup script. The screenshot on page 210 shows the same thing for a Transfer Backup Sets script.

      However you can't set the Execution Unit in a Proactive script that uses a Storage Group as a destination. That's because—as briefly explained in the first three sentences of the last paragraph of this post in another thread—a Storage Group is a magnificent hack (IMHO) for enabling interleaved backups of different machine-drive Sources using a single Proactive script, rather than forcing the administrator to create a separate Proactive script for each machine Source; the enabling is done by using the multi-threading capability (expressed as Execution Units) of the Retrospect "backup server" Engine.

      There are two tradeoffs, however. The first is that, when the Knowledge Base article uses the term "volume", it means a volume on a particular Source machine. If your 12 Source machines have only one volume each, they would just fit within the limit of 15 Execution Units your "backup server" could—given around 20GB RAM—run simultaneously. But the Proactive script will create a separate Backup Set component of the Storage Group for each machine-volume combination; I've tested this on Retrospect Mac, because the KB article seemed unclear. The second tradeoff is that all the initial Members of a Storage Group's component Backup Sets must fit on a single Destination drive.
      At least—using Retrospect Windows—the KB article says you can designate an individual one of those component Backup Sets as the Source for a Transfer script. (As the KB article also says, you can't do that designation using Retrospect Mac—IMHO because the StorCentric acquisition in June 2019 prevented the engineers from fully completing the Retrospect Mac GUI for Storage Groups. But I've tested using a Rule—the Retrospect Mac name for a Selector—to restrict a Transfer to a component.) Unless you can add additional Members to an individual Backup Set component of a Storage Group (I couldn't test this, because I have to work within the inadequate limits of the Retrospect Mac GUI), you'll have to—after successfully running all your Transfer Backup Sets script(s)—run a Backup script with the Recycle Media Action—specifying the No Files Selector—in order to re-initialize the component Backup Sets of your Storage Group before any initial Member of a component Backup Set exceeds its space on the Storage Group's designated initial Member drive.

      My personal suggestion is that you abandon the idea of using a Storage Group as a Proactive script Destination, and instead create individual scripts with individual Backup Sets as Destinations for at least each of your "Remote" Sources. It'll be more work to set up, but it will give you fewer long-run problems.
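      To make the first tradeoff concrete, here's the arithmetic as a trivial Python sketch. The 15-Execution-Unit-at-~20GB-RAM figure is the one quoted above, and both function names are mine:

```python
def storage_group_components(machines: int, volumes_per_machine: int) -> int:
    """A Proactive script creates one component Backup Set per
    machine-volume combination in the Storage Group."""
    return machines * volumes_per_machine

def fits_execution_units(machines: int, volumes_per_machine: int,
                         max_units: int = 15) -> bool:
    """Can every machine-volume combination back up simultaneously,
    one Execution Unit each?"""
    return storage_group_components(machines, volumes_per_machine) <= max_units

# 12 single-volume Sources just fit within 15 Execution Units;
# the same 12 machines with two volumes each would not.
print(fits_execution_units(12, 1))  # True
print(fits_execution_units(12, 2))  # False
```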
  12. Lennart_T, Unfortunately Retrospect won't wait for the Transfer to finish before running the Backup. And that has the unfortunate consequence discussed beginning with the second substantial paragraph of this OP in a January 2017 thread. Since that post is phrased in terms of the Retrospect Mac terminology, here's another post that backy can use for translation to Retrospect Windows terminology. My Support Case giving a product suggestion for overcoming the consequence was ignored. Therefore I'd suggest that backy use your "New Backup Set" suggestion, even though that would create a complication.
  13. DavidHertzberg

    Replacing External RAID

    francisbrand, I think on the Forums we're now supposed to say "Get a Drobo", since Retrospect Inc. was acquired in June 2019 by StorCentric—which is also the parent company of Drobo (genuflects in the direction of San Jose CA 🤣 ). However at the moment you can't get a Drobo—because of some kind of supply-chain hangup, which—I have it on excellent recent insider authority—is a reason the release of Retrospect 18.0 has been postponed until 2Q 2021.

    So here's the most recent thread in the Retrospect Forums discussing backing up to a Synology NAS—model number not specified. Its OP says backing up to it is working fine; the thread concerns a problem with getting a -2265 error in connection with Grooming. The last post in the thread, by the OP on 9 February, says "I guess at this point I need to upgrade to [Retrospect Mac] v17.5.x and hope that fixes this issue unless the group has any other suggestions." Note that the cumulative Retrospect Windows Release Notes say, for the 17.5.0 Engine, "Grooming: Fixed issue where grooming fails under certain scenarios (#8700)"—which probably also applies to Retrospect Mac, since the Engine is the same under the GUI hood for both variants.

    Here's the Knowledge Base article for NAS backup, re-titled in 2020 to "How to Set Up Drobo for Retrospect Backup". However the article existed in much its current form—but discussing the NAS in brand-independent terms—before the acquisition; it's merely had some Drobo-specific information added within it, and the YouTube video "Retrospect for Mac: Setting up a NAS as a Backup Destination" linked to within it has merely had the audio edited to add the term "Drobo". (That illustrates the new "get a Drobo" mindset of Retrospect "Inc.".) If your "backup server" is booting Mojave or Catalina or Big Sur, pay attention to the "Full Disk Access on macOS Mojave and Catalina" section of the KB article.
    Since it has probably been some years since you set up a NAS as a destination, you may want to review a 2020 thread about that, starting with this post by me. The main take-away from that thread (you should, per my P.S.s, ignore anything I said about the uniqueness of the "marker" file) is that your backups need to be—directly or hierarchically—stored inside a folder named "Retrospect". In the third paragraph of this post further down that same thread, I quote another Forums expert as saying "The Retrospect folder can exist as a top level folder on any HDD connect [sic] directly to the Backup Server either internally or externally. On an SMB share (e.g. Samba on a NAS) the Retrospect folder must reside in a share folder ....". That means you can't rely merely on the NAS share folder being named "Retrospect", as one administrator did. And use SMB 2 or 3; SMB 1 isn't supported anymore.
  14. A long-existing facility of Retrospect that I'd never used turned out to be key to implementing Phase 1 of my project of converting my late ex-wife's 1990–2005 literary and artistic files for future use by any of her friends or relatives. Phase 1 is the literary files; Phase 2—the artistic files—may require some programming. Retrospect solved the second of two Phase 1 problems, but IMHO other administrators should also know about the first problem.

      These files are on a HDD in a Digital Audio G4 that my ex-wife had given me in 2005, after I copied the HDD contents onto a PowerBook she'd bought. The literary files were written 1990–2005 using Mac MS Word 5.1a under OS 9.1, so they're in .doc format (no file extension on Classic Mac OS)—which I'm not sure will be readable by a future word processing app. When she complained in 2015 that she couldn't open the old files in Mac Word 2011, I dragged her G4 out of the back of my bedroom closet, took a thumb drive containing several hundred files over to her apartment, and demonstrated that she could—with significant difficulty—convert them to .docx format. However I was occupied with the ultimately-fatal illness of my longtime guitar teacher and friend, so I didn't then volunteer to do the several days of conversion work for her. She died 13 January, but her executor can't yet get me into her apartment to see if she later did the time-consuming conversion herself—which I doubt she did, except for a few selected files.

      So four weeks ago I decided to do the conversion myself, using LibreOffice Writer on my 2016 MacBook Pro. I copied the files in her old "Microsoft Word" HDD folder onto a "Microsoft Word etc." folder on a thumb drive, copied the files from that folder onto a "Converted ..." folder on the same thumb drive, and then plugged the thumb drive into my MBP so I could convert them using LibreOffice Writer.
I was able to convert 800 of the 900 files in either 20 seconds per file if LibreOffice recognized the file as .doc, or 30 seconds per file if it didn't. Note that the thumb drive, which I pulled out of its Best Buy 2015 wrapper, is formatted for Windows—which turned out to be significant, as explained two paragraphs down. The first problem was that, for another ~40 filenames, LibreOffice wouldn't do a name-preserving Save As. In 2015 I had encountered a filename for which Word 2004 replaced everything after "#2." (the filename's first three characters) with "docx". I then thought this was a Microsoft stupidity, but 3 weeks ago I realized that the stupidity is a consequence of a Windows-friendly refinement in the way Apple handled the transition from Classic Mac OS to OS X. Classic Mac OS prohibited only the character ':' in filenames, but Windows NTFS prohibits basically all the characters listed in this Wikipedia article section (note that '.' is specially treated). Apple's refinement—to avoid making users go through an editing process while upgrading—was to allow those characters in existing filenames, but to require that they be eliminated in every Save As to a Windows-formatted drive. In that case Apple's Save As code—used both in Microsoft Word 2004 onward and in LibreOffice—assumes the rightmost '.' in any filename is the beginning of a file extension, which wouldn't be true for a filename still in its Classic Mac OS format—a format that is still OK on OS-X-formatted drives. That meant I had to pre-replace filename characters banned by Windows NTFS, of which '.' and '/' and '?' and straight-double-quote were the most common in my ex-wife's files. I did that by copying-and-pasting each file whose filename contained such characters from the G4 HDD to the G4 desktop, renaming the desktop copy with '_' or '-' or " quest" replacing its illegal characters, copying the desktop copy to its proper hierarchical place in the "Microsoft Word etc." 
folder on the thumb drive, and then deleting the desktop copy. LibreOffice on my MBP can convert a file with such a pre-replaced filename to .docx format with a name-preserving Save As. That's just as well, because the friend/relative may use Windows. The second problem was that ~100 of the files converted in LibreOffice Writer as simply rows of hash symbols—rather than showing the contents they displayed when I opened them in Word 2004 on the HDD under OS X 10.3 Panther. I decided OS X 10.3 on the G4 had somehow messed up those files when I copied them from the HDD to the "Microsoft Word etc." folder on the thumb drive, and that this was probably because I'd used the Finder's ability to copy nested folders. At this point I remembered that Retrospect has long had a facility named Duplicate, which exists in the version 6.1 I had on an OS X 10.3 drive on the G4. Duplicate (still named that in Retrospect Windows, but renamed Copy in Retrospect Mac 8—distinct from Copy Backup or Copy Media Set) copies entire volumes or defined-to-Retrospect subvolumes (renamed Favorite Folders in Retrospect Mac 8) between a Source and a Destination. It goes beyond the Finder in having an option to compare the copied files and show the results in a log, and another option to delete successfully-copied files from the Source. What fixed my rows of hash symbols is that Duplicate's directory traversal is independent of the Finder's. But Duplicate also logged two files it couldn't copy; I'd not spotted the character '/' in their filenames. LibreOffice on the MBP successfully converted most of the files I re-copied from the HDD to the thumb drive's "Converted ..." folder using Duplicate on the G4. Then I Finder-copied, one by one, the dozen files LibreOffice still couldn't convert. One file ultimately converted as a garbage line; it was already garbage on the G4 HDD. IMHO the fact that Finder-copy didn't mess up a file when it was copied individually means that there's a bug in hierarchical Finder-copy, at least in OS X 10.3.
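For anyone facing the same pre-replacement chore with more files than they want to rename by hand, here is a minimal sketch in Python of the renaming rules described above. This is a hypothetical helper of my own—not part of Retrospect, Word, or LibreOffice—and the substitutions ('_', '-', and " quest") are just the ones I happened to use:

```python
# Hypothetical helper: make a Classic Mac OS filename safe for a
# Windows-formatted (NTFS/FAT) drive, so Save As keeps the name.
def windows_safe(name: str) -> str:
    """Replace filename characters that Windows forbids, plus '.',
    which Apple's Save As code reads as an extension separator."""
    replacements = {
        '.': '_',        # legal on NTFS, but rightmost '.' is taken as a file extension
        '/': '-',        # legal on Classic Mac OS, illegal on Windows
        '?': ' quest',   # e.g. "Why?" becomes "Why quest"
        '"': "'",        # straight double quote
        ':': '-', '*': '_', '<': '_', '>': '_', '|': '_', '\\': '_',
    }
    return ''.join(replacements.get(ch, ch) for ch in name)

print(windows_safe('Why?'))          # -> Why quest
print(windows_safe('Draft #2. v3'))  # -> Draft #2_ v3
```

You'd still have to apply the renames on the G4 itself (or via Duplicate's log of failures), but a table like this at least keeps the substitutions consistent across several hundred files.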
  15. 😧😧 I originally tried to test this out early this morning, but ran into a problem that may be peculiar to my installation rather than to Retrospect Mac 16.6. I first Removed and re-Added the "backup server" Source definition of my MacBook Pro "client" (afterward changing its Options to allow Wake-on-LAN), made the same Options change on my two daily No Media Action Backup scripts (the "sacrificial" script and the "real" script), and re-checkmarked the MBP on all my scripts that use it. I then tried putting my MBP—booting macOS 10.13 High Sierra—to sleep via the Apple menu; but, except for one time, it wouldn't stay asleep for more than a few seconds. However tonight I experimented further, and I can get my MBP "client" to stay asleep—initiating that via the Sleep item on the Apple menu—if in System Preferences->Energy Saver->Power Adapter I change the Turn display off after slider from Never to 3 hours. Then, provided Wake for network access is check-marked on the pane below the slider, putting the MBP to sleep while running my "sacrificial" script (it uses the No Files Rule—note that Rule is the Retrospect Mac term for Selector—but it scans for a while because I've now got a thumb drive plugged into my MBP) results in its waking up in a few seconds. I have no idea what the Windows "client" equivalent of the System Preferences->Energy Saver->Power Adapter settings would be; Nigel Smith will know. And yes, Nigel Smith, I knew about require password on wake; I wondered if some of your cleaning people might moonlight for Chinese Intelligence. 😧 P.S.: Forget what I said in the third paragraph of this post. I stayed awake long enough to try this on my scheduled 3:00 a.m. "sacrificial" and 3:05 a.m. "real" Backup scripts, and things didn't work out as I'd hoped. 
I woke up at 6:45 a.m., and remembered that since Spring 2015 I've been using an external keyboard and a mouse connected to a bus-powered KVM switch, so that I can switch both of them back and forth between my MacBook Pro and an old Digital Audio G4. Bus-powered means the KVM switch shuts down unless either the MBP or the G4 is awake, so the spacebar on the external keyboard doesn't wake up the MBP. Pushing the spacebar on the MBP's built-in keyboard does wake it up, but that doesn't get an already-running Backup script out of its frozen state after I've slept the MBP while the script is running. In short: Wake-on-LAN still doesn't work for my scheduled scripts.
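For administrators who prefer the Terminal (or who script their "client" setup), the macOS Energy Saver settings I changed above have `pmset` equivalents. This is a sketch of my settings, not a recommendation—the `-c` flag means "while on the power adapter", and `displaysleep` is in minutes:

```shell
# Command-line equivalents of the Energy Saver settings above.
sudo pmset -c displaysleep 180   # "Turn display off after" = 3 hours
sudo pmset -c womp 1             # "Wake for network access" (Wake-on-LAN)
pmset -g custom                  # print the current per-power-source settings to verify
```

That only reproduces the sleep/wake settings, of course; it doesn't fix the frozen-script behavior described above.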