jethro

Members
  • Posts: 59
  • Joined
  • Last visited

jethro's Achievements

Newbie (1/14) · 0 Reputation

  1. Thanks for the additional replies here. I haven't had a chance to try testing this yet, and may not for a couple of days. But I'll check in if I have success (or not). I think I was just going on the statement made on the 'How to Set Up Remote Backup' page regarding the two different scenarios that allow Remote Backup, one being ProactiveAI scripts and the other being 'On-demand Backup and Restore'. So that led me to believe that my client wouldn't have to be included in any kind of ProactiveAI script if I wanted to just do a manual 'on-demand' backup. But maybe I'm not understanding that correctly (and maybe it wouldn't work on v15.6 anyway)...
  2. Hi, thanks for the quick & thorough responses here. Much appreciated! It looks like the bosses' laptops, which are on the ProactiveAI script, are backed up locally when they're at home using Time Machine and/or Carbon Copy Cloner. So it's probably not critical to get them working through the VPN tunnel when remote, especially with the extra complexity and strain it might put on the network (our new office has cable internet, so slow upload speeds). For my computer, which is NOT on a ProactiveAI script, I may see if I can get a manually initiated Remote Backup going late at night every now & then. Our network admin added the port forwards to our firewall, so it 'might' be ready to go (a quick reachability check is sketched below). And it would only be incremental backups, and not of the entire computer – just certain directories. I think the only issue may be that I uninstalled my client software and reinstalled it so I could set it up with public/private key security (it had been password-based). I would now have to get the Retrospect server to recognize & update the connection to the client, correct? I may look into upgrading to v17 when back in the office. Thanks again!
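     A quick way to verify the port forward from outside before scheduling anything is a plain TCP connection test. This is a minimal sketch, assuming Retrospect's standard port 497 and a hypothetical public hostname; adjust both for your actual setup.

     ```python
     # Reachability check for the Retrospect port from a remote client.
     # HOST is hypothetical; 497 is Retrospect's registered port (TCP).
     import socket

     HOST = "office.example.com"  # hypothetical public address of the firewall
     PORT = 497

     try:
         with socket.create_connection((HOST, PORT), timeout=5):
             print(f"TCP {PORT} on {HOST} is reachable")
     except OSError as exc:
         print(f"Could not reach {HOST}:{PORT}: {exc}")
     ```

     A successful connection only proves the forward is in place; the client still has to be re-added on the server after the reinstall.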
  3. Hi. We've been using Retrospect for many, many years. We currently have v15.6 running on a Mac Mini server, and have 3-4 client machines (running Mac Mojave or Big Sur) we back up in addition to the server itself. Two of the clients are laptops that are on ProactiveAI scripts; the rest are traditional. The office has a WatchGuard firewall/security appliance operating as the router and firewall. So we, like many, have been working remotely more lately. I just have a couple of questions regarding continuing backups when client machines aren't in the office on the local network. 1) Is backing up when client machines are connected via VPN 'possible'? I know there are probably a LOT more details needed to fully get into this, but I just want to know if it 'should' be possible before delving into the specifics of 'how'. 2) Is backing up over a VPN connection different than what Retrospect describes as 'Remote Backup' (as seen here)? 2b) If Remote Backup is different than VPN-connected backup, and potentially 'easier' & more reliable, do those instructions apply to v15.6 as well? I have the v15 manual, and it lists Remote Backup as a new feature, but then doesn't describe how to set it up the way that article does. Just want to know if the process is basically the same. 3) We do have a license for Retrospect v17 that we could upgrade the server to if there are significant improvements to the ability to back up clients that are off the local network. We haven't installed it since v15.6 has been decently stable otherwise, and the upgrade process can sometimes come with risks or downsides. Would upgrading to v17 offer a fair amount of benefit to us here? Thanks for any tips or direction! We just wanted to start with the basics before spending a lot of time getting into specifics about what needs to be done.
  4. Hi, just a quick follow-up. I'll have to jump back on this after the break at the beginning of the year. But we do have a Mac Mini server for which we purchased additional RAM (16GB - high quality). It is running Mac OS X Server 10.9.5, and we just purchased an upgrade to Retrospect 13.5 Server Edition with 10 client licenses. Everything is completely legit. We only have the typical server software running along with Retrospect, nothing else. The CPU is really not being taxed, surprisingly. I watched it for a bit. And concerning RAM, Retrospect & RetrospectEngine are at 500MB each, and RetrospectInstantScan is at 300MB. Apart from that, it's only system resources. And we're not hitting swap, so we're really OK here too. Not even at 30% of RAM resources. Concerning long startup times, it always hangs on a message like 'Syncing catalogs' or something. Takes a very long time. Lastly, we are now on to drive 4 of 5, which is a 2TB drive. BUT, I took the overall size of our media set from the Media Sets section in Retrospect, which stated that it was about 5.6TB over 5 members. IN REALITY, it's going to be well over 6TB, probably about 7TB, when completed. Wondering why Retrospect's calculation didn't match reality (one possible factor is sketched below)?!? We're going to have to get another destination drive to finish this thing out. And our weekly offsite backups SHOULD be only what's new from week to week, right? Only the first initial backup will be huge. I'll check in here after the New Year. Thanks for the help!
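     One possible (partial) factor behind the 5.6TB-vs-~7TB mismatch is a binary/decimal unit difference: Retrospect may report tebibytes while drive vendors use decimal terabytes. A back-of-envelope check, using only the figures quoted above:

     ```python
     # If Retrospect's 5.6 "TB" is really 5.6 TiB (2^40 bytes each),
     # that equals about 6.16 decimal TB (10^12 bytes each).
     reported_tib = 5.6
     decimal_tb = reported_tib * 2**40 / 10**12
     print(f"{reported_tib} TiB = {decimal_tb:.2f} decimal TB")  # ~6.16 TB
     ```

     That only covers part of the gap; filesystem overhead and slack space in the .rdb files could plausibly account for the rest.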
  5. Thanks for the responses. It's a lot to read through right now, so I may have to go back through it when I get a bit more time. But I did check Activity Monitor, and surprisingly, Retrospect wasn't maxing out the CPU (Core i7, 2GHz); it mostly ranged from 25-65%. We see Retrospect freeze up and max the CPU when just doing normal tasks or even opening the program (it takes 5-6 minutes after starting just to be usable). So we're scared to pause or try to stop the backup, as I suspect it would completely freeze Retrospect. And RAM isn't an issue; we have 16GB in our server (Retrospect is using under 1GB). At this point, we're on to the 3rd HD of 5, which is 1TB, and it's running just as slow. We're at about 400GB after 24hrs (rough throughput worked out below). So it looks like it wasn't due to a faulty drive (drive #2 was the one we thought we'd have to have repaired). As I'm heading out for the Holidays tomorrow, and have a ton to get done before then, we'll have to just let this process finish out (hopefully by the end of the week) and look into doing it differently in 2017, when we are going to start a completely new set to alleviate some of the issues with our 5-year-old 6TB set. When we get our new Media Set going at the beginning of the year, we are going to do a weekly Copy Media Set to an offsite HD (not the same one we're running now). So I'll have to look more into exactly what "Match Source Media Set to destination Media Set" and "Don't add duplicate files to the Media Set" do, to determine if they can be safely left off without losing the flexibility that's important to us. We'll just need the new Copy Media Set script to be able to complete in less than a work day, as the drive will be brought in just for that, then taken back home. Thanks for the help!
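     For reference, the figures quoted above imply this effective throughput (pure arithmetic, no assumptions beyond the post's numbers):

     ```python
     # ~400 GB copied in 24 hours
     copied_bytes = 400e9
     elapsed_s = 24 * 3600
     rate = copied_bytes / elapsed_s
     print(f"{rate / 1e6:.1f} MB/s ({rate * 60 / 1e6:.0f} MB/min)")  # ~4.6 MB/s (~278 MB/min)
     ```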
  6. Hi, we're running a 'Copy Media Set' script to copy our entire current media set onto an external 6TB drive to store offsite as a backup-backup. But it seems to be taking unusually long, and I wanted to see what to expect. Our Media Set is comprised of 5 hard drives taking up almost 6TB of total space, and the catalog file is 123GB. We have Retrospect 13 on a Mac Mini Server. All hard drives are good-quality 7200rpm drives connected with a FireWire 800 dock/interface. While we're running the Copy Media Set script, both the source & destination HDs are daisy-chained with identical FW800 docks. In the Copy Media Set, we did NOT set encryption or software compression, as we thought this would tax the processor too much. :: The first drive in the set was 1TB. This took about 17hrs to complete. :: But our second drive in the set, which is 2TB, is at 76hrs & counting. It still has 500GB to go! - Is this unusually long for just copying files over, or can it take this long depending on the hardware? It is still churning away and hasn't frozen up, which Retrospect can often do for us. So I don't want to touch it after this long, unless there's something I can adjust between this hard drive and the next. In Retrospect, I see the status repeatedly rotating between 'Copying' & 'Updating Catalog File'. It doesn't seem to copy many files at a time from what I can tell. When 'Performance' shows anything, most of the time it is: 102.4KB/m. If this is 102 kilobytes per minute, we're in trouble (see the quick arithmetic below)! - Any insight here on how to: A) proceed, and B) modify anything so it's not so painful in the future, would be welcome.
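     A quick sanity check on that 'Performance' reading, using only the numbers above, suggests the counter is more plausibly MB per minute than KB per minute:

     ```python
     # At a literal 102.4 KB/min, the remaining 500 GB alone would take years.
     remaining_bytes = 500e9
     bytes_per_min = 102.4e3
     days = remaining_bytes / bytes_per_min / 60 / 24
     print(f"{days:,.0f} days")  # ~3,391 days -- clearly not what's happening

     # Observed instead: ~1.5 TB in 76 h
     observed = 1.5e12 / (76 * 3600)
     print(f"{observed / 1e6:.1f} MB/s ({observed * 60 / 1e6:.0f} MB/min)")  # ~5.5 MB/s (~329 MB/min)
     ```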
  7. Hi, we have a Media Set that is comprised of 5 hard drives that span about 5 years of backups. Unfortunately, one of the Media Member HDs appears to have gone bad (it spins but won't mount). It is a 2TB drive that is member #2 of 5, covering data from 2013-14. 1) If we can send the drive out and recover some/most/all of the data, it will come back on a new drive. How do we 'merge' this drive back into our existing Media Set so it takes the place of the former HD? Is there a difference if all of the data cannot be recovered? 2) If we cannot recover the data (or determine that it's too expensive), what would happen if we tried to run a 'Copy Media Set' function to back up this Media Set when it came to asking for member #2? We ultimately want to create a duplicate backup of this Media Set in case this happens again, but are unsure of how to handle it when only partial data would be available. Thanks! We're moving to having more redundancy so this won't happen again. The frustrating thing is that the drive was sitting unused in a safe, and it hadn't been used that much for a relatively robust HD (WD Black). It wasn't dropped or anything as far as I'm aware...
  8. Thanks for the ongoing help. It's becoming a bit clearer what we need to do. To answer David about why we never followed through with cloud (and are considering an alternative now): it honestly was getting too complicated to figure out how to properly groom, seed, and estimate both current and future storage needs, JUST so we could figure out how much it was going to cost. We didn't want to dive in only to find out in the end that the monthly costs were just not reasonable, or that it was too time-consuming (a rough cost comparison is sketched below). I think we could get one big 6TB drive now ($200-300 total) and just copy our entire media set over, then keep it offsite in a safe environment. We would then only need to update that once a week with what's changed. That, plus maybe another 3-6TB drive when it fills up, would probably last us a few years at our current data rates. And I would hope that a robust enterprise drive, only being used once a week, would be reliable enough (it IS just the backup-backup anyway). ----- A) But to get back to the original questions if possible: can I get pros/cons of using 2 separate media sets that rotate every other week vs. our current single media set plus a 'Copy Backup' set once a week? B) I'm still a hair fuzzy on whether to use a 'Copy Media Set' vs. a 'Copy Backup' script. I believe we would want the main onsite backup and the offsite 'backup-backup' to be identical, so one could be swapped for the other in case of emergency. C) Our current set is almost 6TB and goes back 5+ years. The catalog file for it is 120GB. At what point do we REALLY need to consider doing a 'New Media' backup to 'start over'? We do not use compression, and we really need to have as many iterations of a file, going as far back as possible, available to restore at any time. Thanks!
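     For what it's worth, here's the rough math behind the drive-vs-cloud comparison above. The per-GB price is an assumed cold-storage ballpark, not a quote from any provider:

     ```python
     # One-time 6 TB drive: ~$200-300. Recurring cloud storage at an
     # ASSUMED cold-tier ballpark of $0.005/GB-month for ~6,000 GB:
     set_gb = 6000
     price_per_gb_month = 0.005  # assumption, not a provider quote
     monthly = set_gb * price_per_gb_month
     print(f"~${monthly:.0f}/month, ~${monthly * 12:.0f}/year")  # ~$30/mo, ~$360/yr
     ```

     On those assumptions, the drive pays for itself within the first year, before counting egress or per-request fees.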
  9. OK, thanks. The drive was about $130, so not a complete waste. I was just thinking that it wouldn't be hard to migrate it over. I guess I can try to do a direct clone of the current media set drive to the new drive, after partitioning & labeling the new drive the same as the old, and see if Retrospect sees it correctly. Not sure if it would recognize the suddenly larger partition, though.
  10. Hi, thanks for the reply and example. I can see how this might be a very complete way of backing up, not only for recovering old/overwritten files, but for doing a complete restore as well. In our years of using Retrospect, we have only ever used it to recover files that were overwritten, corrupted, deleted, or lost in some way. But we are aware that catastrophes can happen, and know our setup might be a bit cumbersome or time-consuming to restore from (our file storage drive is a 4-member RAID 5, so 'fairly' safe, though). We'll consider your example, but have to weigh the costs (additional media needed, and additional time to manage the system - adding/removing drives at the right time, etc.). - When you say you use a 'Copy Backups' script to copy C to A or B, is there a reason to choose that over a 'Copy Media Set' script? We would want to be sure to have all past backups of files in case they are overwritten or corrupted at a point in time. - When you do a 'New Media' backup for A or B, does that just continue the current incremental backups onto new media? Or does it do a 'fresh start', backing up all of the current files even if versions of them exist on the previous Media Set? We haven't done a new set due to how large the first backup would be (all current files), and concerns about how to search for a specific version of a file which may exist across either of the Media Sets. More insight here might be helpful. - Any other replies on our initial questions?
  11. Thanks for the reply. I just want to be sure that this process would only copy over the data from the current media MEMBER, not the entire media SET. Is this correct? The current media member (an external HD) has about 1TB of backup data on it. I just wanted to transfer that to the new larger HD and have that one take over as the current member in the media set. Is there no way to have Retrospect treat the new larger HD as the current media member (which is disk 5 of 5)? Thanks!
  12. Hi, we have a media set whose latest/current member is a 2TB HD. It is less than half full so far. Sorry if this is an obvious answer, but I have a question about effectively transferring this member to a larger drive. - How do we 'migrate' this current media set member to a larger HD we'll be purchasing (we're getting a 3-6TB RAID 1 external), without 'capping' the current drive and adding a new member? We'd just like to repurpose the existing 2TB drive for something else while not throwing the current backups off at all. I would assume I could just copy the contents of the current drive to the new one (a hypothetical sketch of that approach is below), but I just don't know if that throws Retrospect off for any reason. Thanks!
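     For the 'just copy the contents' idea above, here is a purely hypothetical sketch. It assumes a disk media set member lives in a Retrospect folder at the root of the member volume, and that the replacement volume will be renamed to match the old one; both paths below are made up, and the layout should be verified against the actual member before trusting this:

     ```python
     # Hypothetical copy of a disk media set member to a larger volume.
     import shutil

     OLD = "/Volumes/Backup5/Retrospect"      # hypothetical current 2TB member
     NEW = "/Volumes/Backup5-new/Retrospect"  # hypothetical larger replacement

     shutil.copytree(OLD, NEW)  # copies the member's folder tree (.rdb files) as-is
     # Afterward: rename the new volume to the old volume's exact name,
     # detach the old drive, and have Retrospect verify the media set
     # before relying on the copy.
     ```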
  13. Hi, we are currently backing up 4-5 machines to a single hard-disk media set. Backups run M-Thurs each week at night (we turn the drive off on weekends to save some wear & tear). The media set is almost 6TB now (across 5 hard drives), and spans 5+ years of incremental backups. But we would now like to have an offsite copy just to be safe (one of the previous drives from our current set went bad while in storage, losing a year's worth of older backups). We're thinking we could realistically take a drive/media set home once a week. So I'm just trying to understand the best or most efficient way to set this up. 1) Should we create an entirely new media set and set the current scripts to back up to each set in turn (e.g., A & B), one for one week, then rotate to the next? This would mean we'd lose at worst a week's worth of backups if something happened. 1a) If so, would this new media set just start backing up from the current state and then be incremental going forward, thus not being in sync with our 5-6 year-old media set? Or is there a way to 'sync' the sets first, so they are at least covering the same backup data & dates? 2) Or would it be more efficient to copy the existing media set to a new media set once a week? I imagine we'd bring the other media set drive(s) in on a Friday and hope it could complete all copying from the week while we were here; then it would go back offsite. 2a) What exactly is this type of operation called, and where would I find specifics about setting it up? 2b) I'm guessing we'd need enough space to copy the entire media set (6TB) initially, then it would just be incremental, correct? Could this destination set span multiple drives like our current media set? Thanks a lot. Let me know if there are better ways to accomplish what we're after that I'm not aware of.
  14. Ahh, that's a good (and rather obvious) idea. So is it the raw .rdb files that would be copied 'as-is' to the cloud storage? Or would there be some intermediary or modified file(s) that would get sent? I did check through our current HD (member 4 of 4), and got some decent stats. It looks like for the year 2015-2016, there were over 18,000 .rdb files, totaling about 685GB of space. From that I come up with a monthly average of roughly 1,545 files at 57GB (arithmetic below). For a small (4-person) graphic design office, backing up incrementally 4 days a week, do these sound like reasonable figures? Just want to make sure we're not WAY off, indicating some sort of issue. We'd have to determine how much space we'd want to start with, and then we'd have an idea of how much it might grow on a monthly basis. It appears the main cloud storage providers offer different tiers for their storage pricing, depending on frequency of usage. Does anyone know what type of 'tier' we would need for weekly copies of files, which would rarely (if ever) need to be accessed? Thanks again for the help here! Hopefully it will be helpful to others as well.
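     The monthly averages quoted above check out against the yearly totals:

     ```python
     # 'Over 18,000' .rdb files and ~685 GB for the year; 18,540 files
     # reproduces the ~1,545/month figure exactly.
     files_per_year = 18540
     gb_per_year = 685
     print(f"~{files_per_year / 12:.0f} files/month")  # ~1545
     print(f"~{gb_per_year / 12:.0f} GB/month")        # ~57
     ```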
  15. Hi David, thank you for your diligent assistance and advocacy here on our part! Much appreciated. I've read through everything briefly, but will have to go back when I get a chance to thoroughly understand all that's stated. It appears, however, that we would still do a 'test' run to a new blank media set, with some grooming options enabled, to determine the INITIAL amount of storage we'd need to afford. I'll try to see (unless you or someone else knows off the top of your head) if there are reports or other ways to determine an 'average' backup size, either per backup or per time period (e.g., how much space on average per month). This would help determine not only how far back we could/should reasonably go initially, but also how much additional space we might need on average for future backups. If it's helpful, we run our master backup on our server and a couple of clients 4 nights per week; then there are a couple of mobile users who are backed up when on the network (up to daily). We still like the idea of redundant, automated cloud backup. But it's still unclear how much this would cost, both initially and ongoing. Thanks!