
Upgrade v.11 > v.13 / Incorporating Cloud Backup



Hi, we received the notice today about Cloud Backup being a feature of Retrospect v.13. We're currently using v.11.5.2 on a Mac Mini server. It backs up our server and a few client machines.

 

Curious to know a couple things:

 

A) How smooth is an upgrade from v.11 to v.13? We would hate to lose any of the configuration or setup info, as we've had issues before keeping Retrospect running smoothly.

 

B) Retrospect is painfully slow on our server when opening and trying to check things (we have a 2 GHz Core i7, 16 GB RAM, OS X Server 10.9.5). Just opening the application takes about 3-5 minutes before I can access all of the areas, and if it's backing up, it can often crash. Would v.13 make any improvements here for our machine?

 

C) Concerning Cloud Backup, we'd love to augment our existing backup with maybe a weekly data transfer to cloud storage as a backup of our onsite backup (we'd want our local HD backup to still be primary and work as-is).

 

Is there anything we can look at in our current Retrospect setup to get a rough idea of the amount of storage and get/put requests we would need, so we can see how expensive the cloud storage would be? We have been backing up all devices incrementally to a media set comprised of multiple hard drives. The total space used for ALL backups (which now goes back 4 years) is about 5 TB, spread across 4 hard drives. But we don't necessarily need to go all the way back to the beginning. We might be fine transferring either just current data forward, or maybe everything back to a year or so ago, if that's possible and would be more efficient.

 

D) Do we really need a specific data storage service (e.g., Amazon or Google) for it to work simply enough? Or can we use a hosted dedicated server we use for other purposes (typical LAMP/WHM/cPanel server)?

 

Thanks for any insight here. Offline redundancy is intriguing if simple and affordable enough.


B) It shouldn't take minutes to open the application, so something must be wrong with your setup.

What else is running on the server?

What is the typical CPU load?

How much free RAM?

How much free space on the boot volume?

Is there lots of disk activity, so that the disk(s) become a bottleneck?

 

By the way, the application you launch is just the console (the GUI). It is the Retrospect engine that does all the work. 

Try using the Console on another Mac.


One question I have relating to cloud backup in Ver. 13 concerns "seeding" followed by incremental backups.

 

"Seeding for Cloud Storage" in Chapter 1 of the new Mac User's Guide states: "Seeding allows customers to back up their data to a local hard drive and then send that hard drive to the cloud storage provider. The cloud storage provider then imports that data into the customer's storage location. This process enables customers to send a large amount of data to the provider faster than it would take to upload it. After the data is imported, the customer can change the set's location in Retrospect to the cloud location and begin incremental backups."

 

If the user backs up his/her data to a local hard drive, that would presumably be done after creating a Disk type Media Set for the Recycle Media Set backup—complete with a catalog in the usual default location. After the data is imported into the customer's storage location in the cloud, can the user now "change the set's location in Retrospect to the cloud location" by changing the type of that same Media Set from Disk to Cloud—and then entering all the necessary cloud access information in the change dialog?

 

If that is not possible, can the user create a new Cloud type Media Set, and then copy the catalog of the Disk type Media Set as the catalog of the Cloud type Media Set?  I am not familiar with many of the more esoteric capabilities of Retrospect; how would the user do this?  By following the "Adding a Media Set’s Catalog" procedure in Chapter 8 of the Mac User's Guide, but changing the Media Set name to that of the Cloud type Media Set?

 

If that too is not possible, would the user have to Recatalog the Cloud type Media Set after it has been imported?  That sounds as if it would require the Retrospect server to effectively download some subset of the information for every file stored in the Cloud type Media Set.

 

I now see that there is a Tutorial video entitled "Backup Set transfer to the Cloud for Windows". However, I have searched the Mac User's Guide, and there doesn't seem to be any such facility for Retrospect for Macintosh. Version 13.5, perhaps?


David,

 

If you plan to use a media set with the cloud, you must use a Cloud Media Set, which has options for local storage and cloud storage: https://www.youtube.com/watch?v=a6yWtxDpYks (see the 1:12 mark in the video). When you write locally, you configure the media set options for local storage. Once you move it to the cloud, you reset the path and change it to cloud-based storage.

 

Disk media sets cannot be directly used in the cloud. If you have a disk media set, you can use the Copy Media Set script to copy the entire contents into a Cloud media set (local or cloud storage).

 

This video shows the process of using the copy scripts in Retrospect. The source would be the disk media set and the destination would be a cloud media set.


Thank you for replying, Robin (but it took a phone call to Retrospect Inc. Sales to get you to do so).  :)

 

My error was in not looking far enough down the list of Tutorials to see the Cloud Backup ones under Retrospect for Mac. However, the Tutorial you link to is not the one that fully answers my question. I see now that one Retrospect Tutorial briefly explains how to "change the set's location in Retrospect to the cloud location" after "seeding". Another Tutorial briefly recapitulates that explanation, but goes on to explain how to restore files from a Cloud Media Set that has been copied to a hard drive by the cloud provider—with the drive then shipped to the user (similar to CP Home's "restore to door" service) to be installed as a local drive. The key concept in both Tutorials is the apparently new Member Type dialog—invoked for a Cloud Media Set in the first Tutorial, and via Media Sets -> media set -> Members -> edit-pencil in the second—whose top dropdown has been enhanced in Version 13 to allow switching between a true cloud member and a local member for the same Cloud Media Set. I think this is brilliant.  :wub:

 

However, I have one rather large quibble.  In the words of Dr. Strangelove, "Why didn't you tell anyone?"!  Specifically, Retrospect Inc. could IMHO have added that Member Type dialog key concept to "Cloud Backup" in Chapter 1: "What's New" in the Version 13 Mac User's Guide in about three sentences.  If they didn't want to bother to do that, they could have added a sentence "Written words cannot explain the beauty of how we do this!", followed by a link to the relevant Tutorials. ;)

 

It's bad enough when a competing cloud backup product (its initials are CP) does its documentation as a set of web pages with links only to the next step down the garden path, so that a potential user has to use Google etc. as the table of contents for the documentation.  It's even worse when Retrospect Inc. starts to do its documentation only in videos—without bothering to provide links from the User's Guide to the videos.  Some of us older potential users can read text faster than we can watch videos. :rolleyes:

 

 

P.S.: Revised next-to-last sentence in second paragraph, to clarify the key concept.

 

P.P.S.: Further revised next-to-last sentence in second paragraph, to clarify the key concept.


Hi,

Wondering if anyone has any insight into my original questions, especially C & D regarding cloud backup? I've tried to follow the additional question added, but it appears to be geared more towards MOVING a backup to the cloud. We only want to copy our backups to the cloud as a safety net for our local HD backups.

 

Mayoff states: "If you have a disk media set, you can use the Copy Media Set script to copy the entire contents into a Cloud media set (local or cloud storage)." This seems like what we want to do, and the video makes it look pretty straightforward, but it also mentions that it will only copy NEW or CHANGED media to the cloud account. Still wondering whether we need to upload the entire disk media set or not.

 

Thanks!


On 3/2/2016 at 9:43 PM, jethro said:

Hi, we received the notice today about Cloud Backup being a feature of Retrospect v.13. We're currently using v.11.5.2 on a Mac Mini server. It backs up our server and a few client machines.

 

Curious to know a couple things:

 

....

C) Concerning Cloud Backup, we'd love to augment our existing backup with maybe a weekly data transfer to cloud storage as a backup of our onsite backup (we'd want our local HD backup to still be primary and work as-is).

 

Is there anything we can look at in our current Retrospect setup to get a rough idea of the amount of storage and get/put requests we would need, so we can see how expensive the cloud storage would be? We have been backing up all devices incrementally to a media set comprised of multiple hard drives. The total space used for ALL backups (which now goes back 4 years) is about 5 TB, spread across 4 hard drives. But we don't necessarily need to go all the way back to the beginning. We might be fine transferring either just current data forward, or maybe everything back to a year or so ago, if that's possible and would be more efficient.

 

D) Do we really need a specific data storage service (e.g., Amazon or Google) for it to work simply enough? Or can we use a hosted dedicated server we use for other purposes (typical LAMP/WHM/cPanel server)?

....

 

 

On 3/7/2016 at 7:21 PM, jethro said:

Hi,

Wondering if anyone has any insight into my original questions, especially C & D regarding cloud backup? I've tried to follow the additional question added, but it appears to be geared more towards MOVING a backup to the cloud. We only want to copy our backups to the cloud as a safety net for our local HD backups.

 

Mayoff states: "If you have a disk media set, you can use the Copy Media Set script to copy the entire contents into a Cloud media set (local or cloud storage)." This seems like what we want to do, and the video makes it look pretty straightforward, but it also mentions that it will only copy NEW or CHANGED media to the cloud account. Still wondering whether we need to upload the entire disk media set or not.

 

....

 

 

C)  I think you would want to do the following things:

 

1) Create a new Cloud Media Set, specifying "Groom to Retrospect defined policy" with Months to keep = 12. However, in the Member Type dialog, set up a new local Media Set member on a separate shippable disk drive.

 

2) Run a Copy Media Set script to copy your regular Media Set to the new Cloud Media Set local disk member, with the option Match Source Media Set to destination Media Set unchecked and the option Copy Backups checked. The resultant size—after grooming—should give you a fairly good estimate of the amount of storage you would need in the cloud; get/put requests I don't know about. If the size of the new Cloud Media Set local disk member is the same as your regular Media Set, you will have to Groom the new Media Set afterwards—or else use a Rule in the Copy Media Set to do the equivalent of grooming.

 

3) Ship the drive containing the new Media Set member to your cloud provider. Meanwhile, make the arrangements to set up your cloud account, and keep the account parameters handy.

 

4) For the new Media Set, use the edit-pencil to switch the new Media Set Member Type to Cloud, and type in the account parameters you have kept handy.

 

5) Have your cloud provider copy the contents of the disk drive you shipped into your cloud account.

 

6) Set up a new Copy Backup script to copy from your regular Media Set to the new Cloud Media Set, this time with the option Match Source Media Set to destination Media Set checked. Check No Verification. Schedule the new Copy Backup weekly, at whatever time you want.

 

D) You'd have to set up Basho Riak S2 on the dedicated server, because that software is compatible with the Amazon S3 API. The software is free, but how much would it cost in staff time to set it up? Here's the Retrospect Inc. white paper on how to do it.
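If you went the dedicated-server route, a quick way to confirm that whatever S3-compatible storage you stand up actually answers S3 API calls would be something like the sketch below. This is only a rough illustration—I haven't tried it against Riak S2—and the endpoint URL, bucket name, and credentials are all placeholders, not anything Retrospect- or Riak-specific.

```python
# Sketch: sanity-check an S3-compatible endpoint (e.g., a Riak S2 install)
# before pointing backup software at it. All names below are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-dedicated-server.com",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",                      # placeholder credentials
    aws_secret_access_key="YOUR_SECRET_KEY",
)

bucket = "retrospect-offsite-test"  # placeholder bucket name

try:
    s3.create_bucket(Bucket=bucket)                               # create a scratch bucket
    s3.put_object(Bucket=bucket, Key="probe.txt", Body=b"hello")  # write a test object
    obj = s3.get_object(Bucket=bucket, Key="probe.txt")           # read it back
    print("S3 round-trip OK:", obj["Body"].read())
except ClientError as err:
    print("Endpoint did not behave like S3:", err)
```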

 

Disclaimer: I've never done any of the above stuff; all I did was to read the Mac User's Guide and look at a couple of Tutorials.  ;) 

 

P.S.: Added caution about Groom to my answer step C2.  Added caution about No Verification to my answer step C6.

 

P.P.S.: In my answer to D, corrected name of Basho software and added link to White Paper about it.  Also made my answer step C2 more precise.

 

P.P.P.S: Enhanced step C1 to give an alternative "Groom to keep this number of backups" specification.

 

P.P.P.P.S: Deleted the "Groom to keep this number of backups" alternative specification step in step C1, because Mayoff says it wouldn't delete much of anything.

 

P.P.P.P.P.S: Changed my answer step C6 to say Copy Backup instead of Copy Media Set, per Tutorial that Mayoff embedded.  Also changed my answer step C2 to say checkmark Copy Backups.  See my post below for further explanation.

P.P.P.P.P.P.S: Changed my answer to step C2 to say that Copy Media Set can use a Rule as an alternative to grooming per this post.


OK, thanks for the rundown. I guess we'd have to read up a bit on Grooming, as we haven't done that with current backups. Then I suppose we'd need a fairly large drive for the initial copy/test.

 

Guess I was hoping there might be a more definitive way to 'guesstimate' both current and average ongoing storage requirements, so we can see if the costs are even feasible for us. Don't want to go through all the trouble of setting something up only to find out it is cost-prohibitive.

 

Would I be correct in assuming that if we wanted a 'complete' backup, going all the way back to our first backup, we'd look at the current size of our media set, which is over 5 TB? Is it a 1:1 relationship here?

 

Thanks!


jethro,

 

I made this post to Mayoff's thread on Mac Ver. 13 Grooming, specifically linking to your requirements for cloud backup and my suggested procedure.  I (1) asked if it is true that  Retrospect will not do Grooming on a Copy Media Set run, and (2)—assuming that is true—requested that Retrospect Inc. implement an age subset of the "Groom to Retrospect defined policy" option in Copy Media Set with a Cloud Media Set as destination for Retrospect Mac Ver. 13.5.

 

You shouldn't have to buy a 6 TB disk drive for your "seeding" when you won't need most of that capacity after grooming, and you shouldn't have to make an extra multi-hour Groom run when all you want to do is pare your copied Cloud Media Set local disk member down to the past year of backups.

P.S.: Ignore this post; Copy Media Set can use a Rule as an alternative to grooming, per this post.


jethro,

 

Based on Mayoff's reply to the post I linked to in my post immediately above this, I posted my interpretation of his reply—which IMHO is just a "this is the way Retrospect grooming works" recital.  My interpretation is that  you could initially define the local disk member of your Cloud Media Set as smaller than 5TB, but that your Copy Media Set script would then invoke a repeating cycle of progressive copying followed by progressive grooming.

 

If that approach worked, it wouldn't require you to obtain a shippable disk as large as 6TB.  However, as I said in the post, that approach might end up taking longer than the copy-and-then-groom approach I suggested in step C2 in my third post in this thread, and that approach might even cause Retrospect to crash.

 

Note also that I have enhanced step C1 in my third post to give an alternative "Groom to keep this number of backups" specification.  That alternative assumes you know approximately how many backups you have done per week to your regular Media Set.


jethro,

 

Here is the (first?) reply of Robin Mayoff, head of support at Retrospect Inc. Note that he doesn't specifically reply to my copying-grooming cycling hypothesis.

 

In any case, based on what he says, don't specify  the "Groom to keep this number of backups" option I suggested in step C1 of my third post in this thread; specify "Groom to Retrospect defined policy" with  Months to keep = 12 instead.

 

P.S.: Changed "Not" to "Note" at beginning of second sentence in first paragraph.  That makes a big difference in the meaning.  :rolleyes: 


Hi David,

Thank you for your diligent assistance and advocacy on our behalf! Much appreciated. I've read through everything briefly, but will have to go back when I get a chance to try to thoroughly understand all that's stated. It appears, however, that we would still need to do a 'test' run to a new blank media set, with some grooming options enabled, to determine the INITIAL amount of storage we'd need to afford.

 

I'll try to see (unless you or someone knows off the top of your head) if there are reports or other ways to determine an 'average' backup size, either per backup or per time period (e.g., how much space on average per month). This would help not only determine how far back we could/should reasonably go initially, but also how much additional space we might need on average for future backups.

 

If it's helpful, we run our master backup on our server and a couple of clients 4 nights per week; then there are a couple of mobile users who are backed up when on the network (up to daily).

 

We still like the idea of redundant, automated cloud backup. But it's still unclear how much this would cost, both initially and ongoing.

 

Thanks!


I think that since you have not been grooming your media sets, you might be able to get a crude estimate of space needs by looking at the dates of the .rdb files within the media set. New ones would have been created whenever more backups were done, so by selecting a cut-off (1 year ago, 2 years ago, ...) you might be able to see what your ongoing needs are. A couple of caveats about this: if you've been doing incremental backups, then the initial one likely captured a bunch of stuff that, if not changed, would never have been backed up again, so it would be expected to be much larger than subsequent backups. Looking at the later incremental backups may give you an idea of how much data changes on an ongoing basis, to help judge storage or data transfer needs.

 

 

This wouldn't work very well if you've been grooming, as older versions (maybe from that original backup) would have been purged by grooming, so the old .rdb files would be much smaller than they were originally.
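If you want to put rough numbers on that idea, something along these lines would total the .rdb files by the year of their modification date. This is only a sketch: the path is a placeholder for wherever your media set members are mounted, and (per the caveat above) it assumes grooming hasn't rewritten the modification dates.

```python
# Sketch: rough per-year sizing of a Retrospect disk media set from its .rdb files.
# The path below is a placeholder; point it at the folder(s) holding your members.
from pathlib import Path
from datetime import datetime
from collections import defaultdict

member_path = Path("/Volumes/BackupDrive/Retrospect")  # placeholder location

totals = defaultdict(int)  # year of last modification -> total bytes
for rdb in member_path.rglob("*.rdb"):
    year = datetime.fromtimestamp(rdb.stat().st_mtime).year
    totals[year] += rdb.stat().st_size

for year in sorted(totals):
    print(f"{year}: {totals[year] / 1e9:,.1f} GB")
```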


That's a good tip.

 

Regarding your last paragraph, I'd like to add that when using grooming, the modified date on the .rdb files is set to the date of the (last) groom, so they no longer look like they are part of the initial backup.


Ahh, that's a good (and rather obvious) idea. So is it the raw .rdb files that would be copied 'as-is' to the cloud storage? Or would there be some intermediary or modified file(s) that would get sent?

 

I did check through our current HD (member 4 of 4) and got some decent stats. It looks like for the year 2015-2016 there were over 18,000 .rdb files, totaling about 685 GB. So from that I come up with a monthly average of roughly 1,545 files at 57 GB.

 

For a small (4-person) graphic design office, backing up incrementally 4 days a week, do those sound like reasonable figures? Just want to make sure we're not WAY off, which would indicate some sort of issue.

 

We'd have to determine how much space we'd want to start with, and then we'd have an idea of how much it might grow on a monthly basis.
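As a back-of-the-envelope projection from those figures, something like the sketch below shows how the stored total (and a rough monthly bill) would grow over a year. The seed size and monthly growth are just the numbers above; the per-GB rate is a placeholder to be replaced with a provider's actual published price, and request charges and grooming aren't modeled.

```python
# Sketch: project cumulative cloud storage from a seeded starting size plus
# average monthly growth. The pricing figure is a placeholder, not a real rate.
seed_gb = 685              # e.g., roughly one year of backups, per the stats above
monthly_growth_gb = 57     # average monthly growth estimated above
price_per_gb_month = 0.01  # PLACEHOLDER $/GB-month; substitute your provider's price

total_gb = seed_gb
for month in range(1, 13):
    total_gb += monthly_growth_gb
    cost = total_gb * price_per_gb_month
    print(f"Month {month:2d}: ~{total_gb:,.0f} GB stored, ~${cost:,.2f}/month at the placeholder rate")
```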

 

It appears the main cloud storage providers offer different tiers for their storage pricing, depending on frequency of usage. Anyone know what type of 'tier' we would need for weekly copies of files, which would rarely (if ever) need to be accessed?

 

Thanks again for the help here! Hopefully it will be helpful to others as well.


AFAIK it's the raw .rdb files that would be copied 'as-is' to the cloud storage, because that's what's on the disk you'd be "seeding" to your provider.  

 

Off the top of my head, if you did that and then Groomed your cloud member of your Cloud Media Set every 3 months, you'd probably stay under 1TB.  

 

One thing that worries me: if—as Lennart says—the modified date on the .rdb files is set to the date of the (last) groom, would that mess up a subsequent "Groom to Retrospect defined policy" with Months to keep = 12? Would the subsequent Groom still be able to delete the oldest .rdb files? Lennart? Mayoff and his merry engineers?


No, what you see in the Finder is basically different from what Retrospect sees.

 

Say that one .rdb file contains about ten 60 MB files. Retrospect should groom out one of them, because the original was deleted a long time ago. During the groom, the .rdb becomes 60 MB smaller and gets a new modified date in the Finder. That does not affect Retrospect's ability to perform more grooming later.

 

Say that another .rdb file contains some files that were deleted a long time ago (or that were replaced with newer versions a long time ago). Since the entire contents of this .rdb file are now obsolete, there is no need for Retrospect to keep it after the groom, so it will be completely removed.


On 3/16/2016 at 4:19 PM, Lennart Thelander said:

No, what you see in the Finder is basically different from what Retrospect sees.

 

Say that one .rdb file contains about ten 60 MB files. Retrospect should groom out one of them, because the original was deleted a long time ago. During the groom, the .rdb becomes 60 MB smaller and gets a new modified date in the Finder. That does not affect Retrospect's ability to perform more grooming later.

 

Say that another .rdb file contains some files that were deleted a long time ago (or that were replaced with newer versions a long time ago). Since the entire contents of this .rdb file are now obsolete, there is no need for Retrospect to keep it after the groom, so it will be completely removed.

 

 

I'm sure what you say in your second paragraph is true for what is now called "storage-optimized grooming".  However, according to the appropriate section in "Chapter 1 • What's New" in the  Retrospect Mac User's Guide for Version 13, it doesn't appear to be true for the new "performance-optimized grooming".  "Performance-optimized grooming makes certain decisions about what data to remove, only removing outdated backup information that it can quickly delete. On a technical level, this faster grooming mode only deletes whole RDB files instead of rewriting them. For this reason, cloud backup sets/media sets only support performance-optimized grooming [my emphasis]. This requirement ensures customers can efficiently groom out old backup information without rewriting their cloud data set."

 

This would probably be OK for jethro's new cut-down Media Set ("backup set" is an old term that is still used in Retrospect for Windows) once it is "seeded" and has a true Cloud member. Given what jethro says about his installation being a graphic design office, I doubt it would make much difference whether a 12-month-old .rdb file gets deleted or split.

 

However I wonder whether the same would be OK for jethro's new cut-down Media Set while it still is a local member on his shippable hard drive.  I'm not sure  whether the prohibition on "storage-optimized grooming" applies to that local member as well.  Even more important, I'm not sure whether "performance-optimized grooming" changes the modification dates of .rdb files that are left after grooming.  If it does, it would likely mean that .rdb files that are left after a Groom on that local member specifying "Groom to Retrospect defined policy" with Months to keep = 12  could not be later groomed out—say 3 months later—in the true cloud member without deleting all of them or none of them.

 

This is an important-enough question that I'm going to try to cross-post this entire post in the "Retrospect 13 and Grooming" thread.

P.S.: Ignore this post; Copy Media Set can use a Rule as an alternative to grooming per this post.


Changed Copy Media Set to Copy Backup in my step C6 in this post, because Mayoff says that is what to use for staging to tape in the video embedded in his preceding post.  

 

The difference is that the Retrospect Mac v.13 User's Guide says, on page 161, "Copy Backup scripts are different from Copy Media Sets scripts in a number of ways:  • They copy only active backups; Copy Media Sets scripts copy all backups. • They provide different methods for selecting which backups get copied, such as the most recent backup for each source contained in the source Media Set; Copy Media Sets scripts always copy all backups."  [my emphasis]

 

The question is: What is an "active backup"?  That term is used only that one time in the entire Mac UG; the same is true for the Windows UG.  Presumably it means the last backup for a particular Source.  But then, why does Copy Backup offer "Copy all backups" as a choice in the drop-down? 

 

Also added Copy Backups as an option to be checkmarked in my step C2.  This seems to make sense, because "Deselecting this option will only copy the files contained in the source Media Set, and the destination Media Set will lack the necessary file/folder listings and metadata to perform complete point-in-time restores."

 

I'm a little out of my depth here, because of lack of documentation and discussion (I've searched) on these matters.  Paging Lennart or Mayoff or bdunagan. 


 

The question is: What is an "active backup"?

 

Your active backups depend on your current grooming settings. If you have grooming disabled then your active backups will be the most recent backup of each source. If you have grooming enabled, we will keep the number of snapshots (backups) in the catalog necessary to carry out your grooming policy. For example, if you have "Groom to keep this number of backups" set to 3, we will keep 3 copies of your recent backups in your catalog, and consider them active.

 

 

But then, why does Copy Backup offer "Copy all backups" as a choice in the drop-down?

 

While I admit this is a bit confusing, and I actually had to test the behavior to be certain, the user's guide is indeed correct. Copy backup scripts will only copy active backups, meaning that with this drop-down option selected, we will copy all of your active backups.  I'll preemptively agree with you that this is misleading, and have already logged a bug to correct the wording here.


But what about jethro? From what he has said, he has not been doing any grooming on his regular on-site Media Set—which presumably means he has grooming disabled on that Media Set. On the other hand, he has said (third paragraph) that he runs 4 incremental backups a week plus some proactive (?) backups of laptops. Does this mean that, once he "seeds" his Cloud Media Set to a real cloud, he should schedule his Copy Backup script to run after each backup to his regular Media Set?


  • 1 month later...

I imagine all of you who are U.S. (Canadian?) Retrospect users received a personalized 21 April e-mail from Retrospect Inc. entitled "... a special offer on DreamHost Cloud Storage from Retrospect".  Unfortunately the list of Certified Cloud Storage Providers  still says of DreamHost "• Seeding and Large Scale Recovery – Not available."

 

Given the number of views of this very thread, I assume many people are interested in doing what jethro wants to do—seed a cut-down version of a local backup to cloud storage for rapid recovery if the local backup is destroyed.  Why, then, is Retrospect Inc. touting a provider that does not provide these facilities?  DreamHost's motto should be "Cheaper because our service is untouched by human hands (including its own sales—try figuring out how to phone us)."

 

IMHO Retrospect Inc., in the process of deciding whether to spend six months of engineering time developing a cloud backup facility, should have done a little Jobs-style marketing of its own.  That means going out and finding out what your customers want that you can uniquely provide, not just looking at what competitors are doing that may look like it's "eating your lunch".

 

P.S.: This afternoon [25 April], around 1:30 p.m. PDT (DreamHost's time), I used DreamHost's website to ask their Sales Department whether they now offer "seeding" and/or "large scale recovery".  As of an hour after their 6 p.m. PDT closing time, I have not yet received an e-mail reply.  Over the weekend I also discovered, using a Google search that gave me a third-party website, that DreamHost has a phone number at (714) 706-4182.  I left a message shortly before 9 a.m. PDT today (their website says their messaging opens at 6 a.m. PDT) asking the same question; I have not yet received a call-back message. 

 

P.P.S.: This evening [28 April] marks more than 3 working days since I asked DreamHost, both via their Sales Department website and via telephone to their secret number, whether they now offer "seeding" and/or "large scale recovery". IMHO it's safe to assume that they still don't offer those optional services. That is further reinforced by the lack of results from my 24 April e-mail to Mayoff, in response to his formulaic reply to my response to Retrospect Inc.'s 21 April mass e-mail, in which I suggested that he phone DreamHost to find out whether he should update its information in the list of Certified Cloud Storage Providers. I think this whole episode is sad.


  • 2 weeks later...


The other day I got curious as to why there is no equivalent of this thread over in the Windows Products-Retrospect section of the forums.  I found out there is one.  It currently consists of a single post by rforgaard, and it was started almost 3 weeks after jethro started this thread.  His/her post consists of "step-by-step instructions for using cloud storage for Retrospect Desktop v.11 for Windows using Amazon S3 services.  These instructions should be identical for other versions of Retrospect v.11 for Windows.  If you are running the Mac v.13 (or later) version of Retrospect, or if you are using a cloud storage provider that is not Amazon S3, some of the instructions below might still be useful to you."

 

I must assume these instructions are correct, since I don't personally use Amazon S3 services—or other cloud storage provider's services—for Retrospect (I can't because of my limited Internet upload speed, as I have mentioned up-thread).  I would also need a translator such as Lennart Thelander—not from Swedish to English but from Retrospect for Windows to Retrospect for Mac (I would love to know the inside history of why Retrospect Inc. chose some years ago to modernize the terminology of Retrospect Mac but not Retrospect Windows).

 

What particularly interests me is the section at the bottom of the post beginning "Create a transfer (incremental backup) script in Retrospect to copy your disk-based backup set to your cloud-based backup set (e.g., to your Amazon S3 account)".  That seems to be equivalent to what I wrote in step C6 of this post and in following posts, but I'm too lazy to compare Windows vs. Mac videos or User's Guides and translate the terminology in order to verify it.

 

One question that occurs to me is why there have been so few views of that thread—and no further posts—compared to this one. Possibly it's because rforgaard, being a total newbie on these forums, put his/her thread in the Device and Hardware Compatibility-Windows forum instead of in the Professional forum.

 

The other question that occurs to me is why that thread contains none of the discussion of grooming that has lately occupied much of this thread.  Is that because rforgaard seems not to be concerned with "seeding" a large existing local Retrospect backup to the cloud?  Or do all Retrospect Windows users work at installations where there is so much money available that they don't have to worry about grooming their cloud backups, or is that true only of rforgaard?
