
derek500

Everything posted by derek500

  1. derek500

    SCSI through thunderbolt?

    Hi Bill, thanks. That's the approach I'm probably going to take. Our backups have always been intended as 'disaster recovery' only, with offsite redundancy, so I shouldn't need to keep the aging hardware around. As I've been evaluating new hardware, it's still tough to find anything as cost-effective as tape for the long term. We want to add a cloud-based solution and archival ability going forward, so we may end up with a multi-tiered backup plan, but I don't want to complicate things too much. My biggest problem with most of the cloud solutions is cost! I may just look at weekly replication of our Retrospect disk storage, but even that is pricey. Physical tapes handled by a local secure storage vendor are still extremely cheap! Of course, in a true DR situation, recreating the existing hardware is getting harder and harder. I'm open to ideas, though.
  2. I was going to update some of our systems to Mavericks... then stopped. What about Retrospect? OS X 10.9 is out, and free, and we expect wide adoption as quickly as it can be downloaded. The client? The Console? The Engine? We are using Multi Server 10.5. Thanks!
  3. derek500

    Mavericks support?

    Thanks! I missed that part of the update announcement.
  4. Yes, we do that as well. It's still a good-sized backup set. I'm just trying to figure out why my copy script is now trying to copy the entire disk set, whereas previously I'm pretty certain it only copied the missing portion. -Derek
  5. Hi folks, I'm struggling with an offsite strategy that _used_ to work (I think) as expected, but lately has been giving me a bit of a headache. I have 3 important backup scripts - scheduled nightly server and desktop scripts (with different rule sets) and an always-on proactive laptop backup script. I have a large RAID volume attached directly to the backup server and a tape library for offsite rotations (we send them offsite biweekly).

     The strategy that I thought was working was that the proactive script would back up to the RAID, and the nightly scripts would back up directly to tape (saving the occasionally slow laptops from wasting tape). Once a week I would also back up the desktop and server scripts to the hard drive, so I would have some onsite 'reach' if I needed to restore an older file. Daily I would run a copy script transferring what's new on the RAID to tape, so I would also have my laptops in my offsite media.

     The problem I'm having is that lately things don't seem to be working as I originally thought. When I start a new tape set, my nightly scripts run and back up my servers and desktops to tape. This works as expected. Laptops back up neatly to the disk set. But when I copy the disk set to the tape set, it seems like WAY more than necessary is being copied. The disk set is 3.5 TB, and it seems like the entire 3.5 TB is being copied to tape, when I would expect only about 2 TB to be copied (just the laptops and some older files). So my offsite media has jumped from 5-6 tapes to 9-10 tapes recently. My backup sets haven't really changed size much.

     I'm thinking the change came along with 10.2, but I'm not entirely sure (hoping I didn't make a mistake in a script somewhere). Has anyone else noticed a similar issue recently? Any suggestions on whether I'm doing something unusual? Thanks -Derek
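The expected behavior above - the copy script transferring only what the tape set is missing - can be sketched as a set difference. This is a hypothetical illustration of the idea, not Retrospect's actual matching logic, and the backup names are made up:

```python
# Hypothetical sketch of an incremental media-set copy:
# only backups absent from the destination should transfer.

def files_to_copy(disk_set, tape_set):
    """Return the backups on disk that the tape set does not yet hold."""
    return disk_set - tape_set

# Example state: nightly scripts already wrote the server/desktop
# backups straight to tape; only the laptops should need copying.
disk_set = {"laptop-a", "laptop-b", "server-1", "desktop-1"}
tape_set = {"server-1", "desktop-1"}

print(sorted(files_to_copy(disk_set, tape_set)))  # ['laptop-a', 'laptop-b']
```

If the entire 3.5 TB disk set is being copied, it is as if the matching step above no longer recognizes what the tape set already holds.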
  6. Hi Lennart, We started out doing that, but I found the disk-to-tape transfer was taking too long, and the hard drive really seems to struggle after a few months of this. The nightly (incremental?) backups to tape happen quickly (6-8 hours to finish). Doing the same nightly backups to disk starts out fast, but as the disk set grows and grooms, they take longer and longer, sometimes running well into the next day (12-14 hours to finish). So I found this approach to be a reasonable 'blend' of speed and longevity - at least, I did for a while.

     The disk RAID is a Lacie 4BIG connected through eSATA, and it otherwise performs very well (if I test it with a simple file copy, etc.). So I'm not sure how much of our problem is the hard drive and how much is the catalog file becoming unwieldy. After upgrading to 10.2 I did a full catalog rebuild of the disk set and a full verify. Both went very well, completing in about 10-12 hours each. Thanks -Derek
  7. derek500

    confused about scheduling

    Just a quick update - alphabetizing my script names definitely made a positive difference. I'm not confident about all of it yet, but many scripts have started running in the order I expect. Thanks a lot for the suggestion, Lennart!
  8. Hi, I'm having trouble setting up scheduled events the way I want. Instead of running events in the order I scheduled them, it seems to use some arbitrary priority that doesn't pay any attention to the order past-due events were scheduled in. (I believe it's actually based on 'oldest previous script occurrence' instead of 'scheduled first', which becomes seemingly random after a few waiting events start to stack up.)

     For example - nightly I run 3 scheduled backup scripts, each one taking more time than I've separated the schedules by. The first one always starts first, but the 2nd and 3rd ones seemingly randomly choose which will start next. Not a big deal because it's nightly. However, I have a few weekly scripts like groom and transfer, and these really need to happen in the right order. Unfortunately they are usually stacked up behind some of my nightly schedules, and when these things run out of order it can really goof things up.

     For example, I have nightly scripts that do a full backup from clients to a new tape set, then a disk groom and a transfer from disk to tape. The disk-to-tape transfer will duplicate some of the data from the nightly backups, so as long as the nightly backups complete before the transfer, the transfer isn't so large. Unfortunately, the schedules sit waiting for each other, and then the groom and transfer scripts run before some of the nightly tape scripts, because of the whacked-out order of events.

     Is there any way to make events happen in the order in which I scheduled them, instead of the 'last occurrence' priority? I want MY schedule to dictate the priorities. Thanks -Derek
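The two orderings described above can be contrasted in a small sketch. The script names, timestamps, and the 'oldest previous occurrence' rule are my reading of the post, not Retrospect's documented scheduling algorithm:

```python
from datetime import datetime

# Hypothetical waiting events: 'scheduled' is when each was queued to run,
# 'last_ran' is that script's previous occurrence.
events = [
    {"name": "Groom",    "scheduled": datetime(2013, 1, 7, 1, 0), "last_ran": datetime(2013, 1, 1, 3, 0)},
    {"name": "Transfer", "scheduled": datetime(2013, 1, 7, 3, 0), "last_ran": datetime(2012, 12, 31, 4, 0)},
    {"name": "Nightly",  "scheduled": datetime(2013, 1, 7, 0, 0), "last_ran": datetime(2013, 1, 6, 0, 0)},
]

# Order the poster wants: by scheduled start time.
by_schedule = [e["name"] for e in sorted(events, key=lambda e: e["scheduled"])]
# Order the poster believes he is getting: oldest previous occurrence first.
by_last_ran = [e["name"] for e in sorted(events, key=lambda e: e["last_ran"])]

print(by_schedule)  # ['Nightly', 'Groom', 'Transfer']
print(by_last_ran)  # ['Transfer', 'Groom', 'Nightly']
```

Under the second ordering the transfer jumps ahead of the nightly backup whenever its last occurrence is the oldest, which matches the out-of-order behavior described.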
  9. Your breakthrough makes sense. Disk media sets have only been available since Retrospect 8. Removable media sets are what 6.1 and earlier used. I kept my old G5 PowerMac running 6.1 on OS X 10.4.8 so that I can back up my last remaining OS 9 client with a duplicate script. I then back up THAT backup volume to RS10. So my G5 Mac has the 6.1 Server and 6.2 Client installed.
  10. derek500

    confused about scheduling

     Thanks Lennart, I get it about scheduling things 2-4-6 hours apart. But sometimes I have a script that will run for 5 hours, and other times (to a new media set, for example) for 2 days. It's when something runs for days, and things get stacked up and then complete out of order, that I have a problem. And when I'm starting a new media set, it's important to me to have the backup scripts finish first, grooming next, and the transfer last - but that's also the time when things have gotten stacked up and out of order. I'll try renaming them with an alphanumeric naming scheme to see if that helps. Thanks for the suggestion. -Derek
  11. I have a similar machine (although a Mac Pro instead of an XServe) backing up 50 clients and do not see SPODs on the local or remote console. The console is a bit sluggish (understatement), but is not hanging - once it's completed the initial sync it works reasonably well. How much RAM does it have? (and did 10.2 resolve the issue for you?)
  12. With a Retrospect 9.0.2 (107) server running a proactive backup script to disk: sometimes Windows 7 clients will back up and then show the status "Building Snapshot" until the end user restarts their machine or pulls it off the network, at which point Retrospect will say in the log that the backup completed successfully, with a -519 error at the end. The problem is that "Building Snapshot" may never end.

      The Windows clients that do this are Windows 7 with client 7.7.114 installed. Stopping the client (turning the Retrospect client off, rebooting the system, removing it from the network, etc.), or stopping the backup from the server console, will free up the proactive script on the server. The laptop user has no idea there is anything amiss; it's only because I looked at the server's Activities page that I know there is an issue. The next backup of that client usually proceeds and ends normally.

      Has anyone else run into this and/or solved it? Any tips on what I should look for? Thanks -Derek
  13. I guess I spoke too soon. After having no problems for several months, I now find myself with another Windows client stuck on Building Snapshot - for 48 hours. I know that the client has shut down and left the building but the server is still stuck here. I have contacted support about this issue in the past and the solution is to stop the engine on the server, which usually means I need to rebuild the catalog. This is where things get frustrating. Is there any way to stop one job on the engine? (10.1.0.221 on Mac OS X 10.6) BTW this thread was always intended (at least by me) to be about never-ending 'Building Snapshot' - not ones that finish in 4 hours. Thanks -Derek
  14. Recently I noticed in the logs that a few of my clients groomed some data during a proactive backup. I've never seen anything like it before. Is this a "new feature", or is something unusual going on? I do groom the media set during a weekly script. This is a proactive backup script in Multi Server 10.1.0.221 backing up a Windows client to a multi-member disk media set, with 'groom to keep this number of backups: 10' selected under Options.

      The only thing I can think of is that at the time the groom took place, the 2-member disk set was nearly full. The groom that happened this morning requested I add a 3rd member after it completed (but during the client backup). It happened during two recent backups. I'm attaching the log from the most recent one in a txt file.

      The other thing I have noticed since the update is that some of my proactive clients now go through copy and verify multiple times during the backup session, even though they only have one volume selected to back up. Other clients go through the entire backup with only one copy and verify. Is this also because the backup set was low on space? Thanks for any thoughts. -Derek Proactive grooming.txt
  15. Thanks. I had no idea it would groom off-schedule. It's a good plan, though - and it must have been successful the previous time, as it completed that backup without asking for a new member.
  16. Even though a groom is not scheduled? This is the first occasion where I've had a multi-disk set and it's members were nearly full. In the past I was always using one very large volume that never approached full.
  17. I am using this scenario in one case - I have 3 500GB external hard drives connected through daisy chained FW800. (Lacie Quadras if that matters to you) Grooming is working, I'm not sure exactly how it's choosing to manage the free space, but it took much longer than I expected to request the 3rd member. I'm using 10.1.0.221.
  18. Hi David, Thanks for the news and update. I'd just like to mention that my issue with Building Snapshots never ending is when using Windows Clients and a Mac server. I'm not sure if that should get posted under the Windows or Mac subforum. Also, I haven't had this happen yet on the latest release of Mac server (10.1.0(221)) so I'm not sure if that issue was addressed or not. Thanks -Derek Cunningham
  19. Hi David, Somehow I missed your December posts on this thread. Thank you (and the team) for your attention to these flaws! Our frustration level was getting very high, but the most recent update seems to have smoothed out most of our biggest complaints. The loss of log info definitely still needs to be addressed (maybe a dedicated log cache for each process that gets checked on engine launch and posted if not empty?), but we are definitely pleased to see some reliability updates coming along. Thanks -Derek
  20. Right off the bat I'm thinking that it's the difference between base-1000 and base-1024 size reporting. I thought Apple started reporting volume sizes in base-1000 units recently; maybe Retrospect is still reporting base-1024 sizes. Sorry if my naming is crude - here's an article that talks about the difference: http://www.howtogeek.com/123268/windows-hard-drive-wrong-capacity/
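The gap between the two conventions is easy to quantify. A minimal sketch (the 3 TB figure is just an example size, not from the thread):

```python
# A drive marketed as "3 TB" is counted in decimal units (10^9 bytes per GB).
# A tool counting in binary units (2^30 bytes per GiB) reports the same
# capacity as a smaller number, even though no bytes are missing.
size_bytes = 3 * 10**12  # 3 TB as the vendor counts it

decimal_gb = size_bytes / 1000**3
binary_gib = size_bytes / 1024**3

print(f"{decimal_gb:.0f} GB (decimal)")   # 3000 GB (decimal)
print(f"{binary_gib:.0f} GiB (binary)")   # 2794 GiB (binary)
```

A roughly 7% discrepancy at the terabyte scale, which is the kind of mismatch that shows up when two tools report the same volume differently.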
  21. Just a small thing - on email notifications for a Proactive script it would be very helpful to add the client name to the notification. Right now I get a whole bunch of these: Script: Laptop Proactive Date: 2/1/2013 Script "Laptop Proactive" completed with 60 errors but never a client name unless it was terminated unexpectedly (laptop leaving the network etc) Is it possible for me to modify the notifications somewhere? (Retro Multi Server 10 Mac on OS X 10.6 etc etc) Thanks -Derek
  22. We just installed Retrospect 10 (clean install, not an 'upgrade'). I noticed that the server is now backing up 'Using Instant Scan' according to the log, and its idle/loading/preparing time is about 40 seconds, much less than it used to be. I upgraded one of our Windows 7 clients last week to the new 8.0.0 (165) client, and its idle/loading/preparing time hasn't changed a bit - if anything it increased by a minute or so. It was always around 15 minutes, and it's still about the same, similar enough to the rest of our clients that I don't see any difference.

      Does Instant Scan work across platforms? I upgraded the Retrospect client from the Mac Retro 10 server; should I do a full reinstall of the Windows client instead? Should my Windows client have dropped to a very short idle/loading/preparing time? What is anyone else seeing? Thanks -Derek

      edit - I have been thinking about this and wondering if the length of time is related to the System State backups taking that long, whereas Instant Scan may well be doing its job just fine... Any way to tell the difference, or to start the System State backups in advance?
  23. I had to stop the engine to get the backup stopped, trying to stop the backup through the console was not working. I did view the log in the activities pane before I stopped the engine but no -519 error (which is what I'm used to seeing as well). After restarting the engine the log of that backup was not captured in the overall Retrospect log (a huge flaw IMO) so I can't confirm how it ended.
  24. I don't think it's a client issue - here's a new but important update. I had a client backing up on Friday, and it was stuck in the 'Building Snapshot' phase. I left it alone knowing that when he left the server would move on by itself, but it did not! I come in Monday morning to find it still trying to build a snapshot, and the client is long gone! (I know he took his laptop home Friday night). So NONE of my weekend backup/grooming scripts ran. Argh. I actually had some other issues with Retro, and recently rebuilt my whole system, re-adding all clients and recreating all scripts about two weeks ago, so needless to say I'm getting pretty frustrated.
  25. I am looking for a utility to defragment our main backup volume. We use a D2D2T strategy. We have a Lacie 4BIG attached by eSATA to a 2007 Mac Pro (dual 2.66 Xeon, 6 GB RAM). It is a 6 TB volume in a hardware RAID 5 config, with ~3 TB in use by Retrospect and ~3 TB available (Retrospect is the only thing using this volume).

      D2D/regular client backups seem fine and run at normal network speeds. At first the D2T part ran well - fast speeds and completed steadily - but lately transferring the 3 TB to tape over SCSI has slowed from 1-2 days to 3-4 days. The volume has been in use for 6 months with weekly grooming to keep the most recent 10 backups. I have tested smaller test volumes/backups from FireWire drives to tape with Retrospect, and the transfer speed seems normal, so I think it's time to try defragmenting the backup volume. I did have to rebuild the main catalog file for the disk volume a few weeks ago, and that didn't have any positive or negative effect on my D2T transfer (I was hoping it would bring a positive change!). I start a new catalog for each tape set every two weeks, and it's just the initial transfer that is dragging things out.

      What OS X defrag tools have you used, and what are their major or minor pros/cons? I assume "optimizing" isn't going to have any kind of positive effect on this volume? Do any of them have a free trial that actually works that I can check out? (I'm not necessarily looking for a free utility; I just don't want to be forced to purchase something to try it, when it may or may not do what I need.) Some Unix command line I've forgotten about? How frequently do you find you need to defragment your backup volume? Should I stop the Retrospect engine while defragmenting?

      Please mention your experiences with the tool as it relates to this purpose, beyond just the name or that it worked great on your MacBook Pro, etc. How long has your backup volume been in use, how frequently do you groom, etc.? Also please mention if you have been using/grooming the same backup disk set without defragmenting since v8 came out and don't have any issues transferring... Thanks -Derek