derek500

Members
  • Content count: 99
  • Days Won: 3

Everything posted by derek500

  1. I recently migrated from Retrospect Server Mac to Retrospect Server Windows. I am running Retrospect MultiServer on a Windows Server machine that is normally logged out. I dug around and found the steps to allow Retrospect to run as a service and work while no user is logged into the computer, but I have one issue - when a backup job is running and I log into the system and open the Retrospect console, Retrospect quits and re-launches, ending any running backup jobs. How can I avoid this (besides waiting until Retrospect is idle)? I set up a 'service' user for Retrospect to run as, and when I open Retrospect it re-launches as the currently logged in user. I tried logging onto the machine as that service user but it still quits and re-launches (as the same user). Is there a way to open Retrospect without making it quit first? On the Mac version I considered the 'Engine' and 'Console' two completely separate items and you could open a Console from any computer without stopping the Engine. On the Windows version this doesn't seem to be the case. Otherwise my shift from Mac to Windows for Retrospect has been smooth. Thanks!
  2. Thanks to both of you for your replies. The only solution I see at this time is to leave the console open and never log off but I don't like the idea of leaving disconnected RDP sessions. We will consider our options for now. We are running 12.5 - is 12.6 any different?
  3. iOS app update?

    I just installed the iOS app on an iPad. It's working fine, but I got a warning message from Apple that this app may slow down my device because it's an old app (paraphrasing; I can't recall the exact message - maybe that it was designed for an older iOS?). Just curious if you guys were aware that the app was being flagged by Apple. I heard through some other Apple news that they are trying to rein in the App Store and starting to clean out non-updated or problematic apps. Thanks -Derek
  4. iOS app update?

    Wow, that's a pleasant surprise! Thanks for posting here. I'll check it out...
  5. iOS app update?

    Thanks for the reply. If that's really the case they should ask for it to be pulled from the app store or else bring it up to date. I'll be wary of it.
  6. I'm always a little intimidated when Rules get complicated, so here's a multiple-part question. 1) Is there a way to 'test' a rule? (I'm using Retrospect 13.0.1) 2) I have several exclusion needs and it's getting complicated, so I thought I would make some rules to go together in a Nested rule (see https://www.retrospect.com/en/documentation/user_guide/mac/management#working-with-rules and go to the end of that section - second-to-last paragraph). What I want to do is exclude a few different types of things. I created a rule excluding some of these things, and another rule excluding some others of these things. Now I want them to work together, so I'm thinking that I should create a new rule something like this:

    Rule: Exclude Rule 1
      Include files based on
      Exclude files based on Any of the following:
        Any folder named 'my antivirus cache'
        folder named 'some other stuff I don't want'
        etc.

    Rule: Exclude Rule 2
      Include files based on
      Exclude files based on All of the following:
        client name is XYZ
        File name ends in .xyz
        etc.

    Rule: My Ultimate Rule
      Include files based on Any of the following:
        Saved Rule includes All Files
      Exclude files based on Any of the following:
        Saved Rule includes Exclude Rule 1
        Saved Rule includes Exclude Rule 2
        Saved Rule includes Exclude Rule 3

    I was also studying this post in the Windows forum: http://forums.retrospect.com/index.php?/topic/151365-how-to-nesting-selectors/ and think it's kind of the same arrangement, but wanted to make sure. Thanks, -Derek
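Logically, a nested arrangement like the one above is just boolean composition: a file is backed up if some include rule matches and no exclude rule does, where 'Any of the following' is OR and 'All of the following' is AND. A minimal Python sketch of that composition (the file fields and rule predicates are hypothetical stand-ins for illustration, not Retrospect's actual matching engine):

```python
# Hypothetical stand-ins for the saved rules in the post; each rule
# is a predicate over a file record. Not Retrospect's real engine.
all_files = lambda f: True

# 'Any of the following' -> OR of the conditions:
exclude_rule_1 = lambda f: ("my antivirus cache" in f["path"]
                            or "some other stuff" in f["path"])

# 'All of the following' -> AND of the conditions:
exclude_rule_2 = lambda f: f["client"] == "XYZ" and f["name"].endswith(".xyz")

def ultimate_rule(f, includes, excludes):
    """Back up a file if any include rule matches and no exclude rule does."""
    return any(r(f) for r in includes) and not any(r(f) for r in excludes)

def keep(f):
    return ultimate_rule(f, [all_files], [exclude_rule_1, exclude_rule_2])

print(keep({"path": "/docs/a.txt", "client": "ABC", "name": "a.txt"}))        # True
print(keep({"path": "/my antivirus cache/x", "client": "ABC", "name": "x"}))  # False
print(keep({"path": "/data/f.xyz", "client": "XYZ", "name": "f.xyz"}))        # False
print(keep({"path": "/data/f.xyz", "client": "ABC", "name": "f.xyz"}))        # True: .xyz only excluded on client XYZ
```

Note the last case: because Exclude Rule 2 requires ALL of its conditions, a .xyz file on a different client still gets backed up, which is exactly the "exclude .xyz only on client XYZ" behavior asked about.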
  7. I'm not seeing the same behavior. When I drill into a folder whose contents I've excluded by rule, they are still checked; everything in the preview is checked. When I review the backup after it's complete, the rule did work. So in the end I do have my rules working, but I still can't figure out how to examine their effect while I'm developing them.
  8. My understanding of Proactive Backup with multiple destinations is that it backs up each client to the destination holding that client's oldest backup. So from client to client it will show a different destination, depending on where the oldest backup lives.

    In the Media Sets tab of the proactive script, what's checked tells Retrospect which media you want to back up to. If checked media isn't physically available, Retrospect will wait for you to reattach it or add another member to the set, because that's what you've told it you want. So if you have backup destinations that aren't connected, uncheck them in the Media Sets tab, and when you reconnect a destination, make sure it's checked again.

    Let's say you have 3 media sets and want to take one offsite: just uncheck it but leave the other two checked in Media Sets, and the Proactive script will automatically rotate each client between the two available destinations, deciding client by client where the oldest backup is and which media set needs to be next for that client. If you only want it to use one of the two, you'll need to uncheck the other one.

    I only have one destination available at a time, and with my Proactive script running, backups start pretty much as soon as a machine hits the network, without any interaction from the console - unless the script is busy backing up a different client. FWIW, since I don't have multiple destinations available at one time, it's possible I've misstated how things work - this is just my understanding from being familiar with Retrospect for many years.
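The "oldest backup wins" selection described above can be sketched in a few lines of Python. The media-set structure and field names here are hypothetical, purely to illustrate the logic, not Retrospect's internals:

```python
from datetime import datetime

def pick_destination(client, media_sets):
    """Pick the checked media set holding this client's oldest backup.

    `media_sets` is a list of dicts with hypothetical fields:
    'name', 'checked' (enabled in the Media Sets tab), and
    'last_backup', mapping client name -> datetime of that client's
    newest backup in the set (None if never backed up there).
    """
    available = [m for m in media_sets if m["checked"]]
    if not available:
        return None  # nothing enabled: the script would have to wait

    def age_key(m):
        last = m["last_backup"].get(client)
        # A set with no backup of this client counts as "oldest".
        return last or datetime.min

    return min(available, key=age_key)["name"]

sets = [
    {"name": "Set A", "checked": True,
     "last_backup": {"pc1": datetime(2016, 5, 1)}},
    {"name": "Set B", "checked": True,
     "last_backup": {"pc1": datetime(2016, 5, 3)}},
    {"name": "Offsite", "checked": False,  # unchecked: taken offsite, never considered
     "last_backup": {"pc1": datetime(2016, 1, 1)}},
]
print(pick_destination("pc1", sets))  # Set A: the oldest backup among checked sets
```

Note that the unchecked offsite set is skipped entirely even though it holds the oldest backup, matching the rotation behavior described in the post.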
  9. Hi, Stumped again - the rule I created seems to work, but I'm not able to preview it using 'copy' or 'backup'. When I browse what is to be backed up, the rule I have selected doesn't seem to apply - all files and folders on the client are checked off in the preview. But in the real backup script, the rule is definitely applied. I can tell this when I try to restore from a backup to a new media set - the files I wanted to exclude are, in fact, not available to restore, which is great. But I would like to know how to test my rules instead of winging it to see what works... Any advice? Thanks, -Derek
  10. twickland, Thanks, doing an immediate copy and Preview seems to be a great way to test rules. For the moment it seems like I'm not making any of my rules correctly, but at least I have a method to test with now! -Derek
  11. twickland, thanks. So, if I want to include all files on all clients but exclude filetype .xyz ONLY on client XYZ (but still back up everything else on that client and my other clients), I need to put all of these rules in the 'Include' section of my "ultimate rule"?

    Rule: My Ultimate Rule
      Include files based on Any of the following:
        Saved Rule includes All Files
        Saved Rule includes Exclude Rule 1
        Saved Rule includes Exclude Rule 2
        Saved Rule includes Exclude Rule 3
      Exclude files based on Any of the following:

    So basically what I'm doing in my nesting rule is including the RULES, more than excluding the FILES defined by the rules. Does that sound like a better description? I recall that in Retro 6 I had to do something similar, but it seemed kind of backwards: create an 'include' rule to define the files I wanted to exclude, then add that rule to the 'exclude' portion of my 'Ultimate Rule'. But it sounds like things are a little different in 8+. Or should I be building them the way I used to build them in the old days?
  12. Hi David, Thank you for the thorough reply. I read jethro's thread and your replies there. The reason I posted here in the grooming thread is that I'm not worried about seeding; I know the initial upload may take some time. I'm more curious about how Retrospect grooms, trying to figure out what our storage buckets will end up like over time. Given the attempts to be conservative with gets and puts, and the very low cost associated with them, I'm not too worried about that either. I just want a better understanding of how the grooming works. Is it going to rewrite the rdb file with a smaller one (one delete, one put, as I see it), or just keep track of the groomed info locally until it can delete that rdb entirely?

    I'm also trying to figure out if we can have a local copy of the data and keep it in sync with the cloud copy. I'm thinking the backup strategy goes something like this:
      - Backup script A backs up clients to media set Local Disk A.
      - Transfer script A transfers Local Disk A to cloud-based media set 'Cloud B'.
      - Repeat several times.
      - Grooming script grooms Local Disk A and Cloud B.
      - Backup script A and transfer script A resume their normal operations, and both media sets are the same size (assuming the same grooming parameters)?

    I'm also trying to figure out a similar scenario with slightly different parameters, where locally we have a 6-month or 12-month grooming policy, but the cloud media has 'Groom to 3' or something like it. I think this is what you were describing earlier where you said

    Also, if you go to 'more reply options', which opens the full reply editor, there is a checkbox on the right for "enable emoticons". Uncheck that and it works normally! Being a tech forum, I think these should be turned off by default! Also, there is no way to set your standard 'Post Options' defaults anywhere I can find in the user settings here.
  13. I've found Retrospect 13's local 'storage optimized' grooming process to be a massive improvement over 12.5. It's faster, and it works! So far I'm very pleased with the R13 upgrade.

    Question 1) I'm trying to figure out exactly how cloud-based media gets groomed, though. With Performance Optimized grooming, the description says it deletes whole RDB files. Does this mean: a) in order to reduce the size of the media set (bucket) during the groom, a new RDB file is uploaded containing all but the groomed information, then the old RDB file is deleted, resulting in a smaller bucket at the end (as opposed to local storage, where the RDB file gets trimmed of the groomed data)? Or b) does an RDB file just sit there untouched as more and more internal pieces of it are marked as 'useless', until everything in the RDB file is useless and it gets deleted? I'm just trying to wrap my head around how much storage we will be using over time. Should it be the same as what's on our local drive?

    Question 2) I'd like to store everything locally for fast large restores. If we use a D2D2C model with local backup first, then transfer to cloud backup later, do we need to groom both media sets separately, or will grooming the local set push that reduction along to the cloud media set? Or should we maintain two separate backup scripts to back up locally and to the cloud set? How does this scenario work?

    Thanks, -Derek (edited to disable emoticons. turns into a smiley)
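The two models Question 1 poses can be contrasted in a toy sketch. Here RDB files are modeled as sets of segment ids; this illustrates only the two hypotheses in the question, not Retrospect's confirmed or actual grooming behavior:

```python
# Toy model: each "RDB file" is a set of segment ids; `useless` is the
# set of segments the groom has marked for removal. Hypothetical only.

def groom_performance_optimized(rdb_files, useless):
    """Hypothesis b): files are left untouched; a whole RDB file is
    deleted only once every segment in it has been marked useless."""
    return [f for f in rdb_files if not f <= useless]

def groom_storage_optimized(rdb_files, useless):
    """Hypothesis a) / the local model: each file is rewritten without
    the groomed segments (one put plus one delete per file, in cloud
    terms), so space is reclaimed immediately."""
    trimmed = [f - useless for f in rdb_files]
    return [f for f in trimmed if f]  # drop files that became empty

files = [{1, 2}, {3, 4}]
useless = {2, 3, 4}
print(groom_performance_optimized(files, useless))  # [{1, 2}]: file {3, 4} fully useless, deleted
print(groom_storage_optimized(files, useless))      # [{1}]: both files trimmed, empty one dropped
```

The storage consequence of the two models differs exactly as the question suspects: under hypothesis b) the bucket keeps paying for segment 2 until its whole file dies, while under hypothesis a) the bucket shrinks to live data on every groom.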
  14. Hello all, I just upgraded 12.0 to 12.0.2 yesterday, and I noticed that the logging is very different - instead of showing all of the execution errors the log truncates them with "and 19 others" etc. I review my logs daily and sometimes take action when I see certain files listed there. How can I view the entire log? Also, there is no summary line with "33 execution errors" etc. at the end of a client backup. Can I get that back somehow? I don't see any new options in the console for 'view full log' etc? Thanks -Derek
  15. I just realized in my last post that starting with "Yes" is misleading. Yes, I use cmd-L to view the logs, it's the only way to view them all in one window that I know of. No, viewing the full logs with cmd-L does NOT display the full logs. I recently upgraded to 12.5 and it's still like this. I understand your point about backups running clean, but many of the windows clients I back up spit off a bunch of common warnings. It's great that Retrospect differentiates between "warnings" and "errors" now, so I do what I can to keep them 'clean' from errors but still there's something about backing up a Windows client with Mac server that spews a good number of warnings. It's still a big issue to me that I have no way to view the 'full' logs. It makes no sense that there's no 'advanced view' or 'expanded view' or some other option available to view all of the log.
  16. I might be misunderstanding your issue, but... in my scheduled backup scripts I can go to the 'Summary' tab and rearrange the sources, and that's the order used for backups when the script runs. Can you do that? I find that rearranging them in that Summary tab can be 'finicky' at times but it does control the script order.
  17. Yes, I always review the logs with Cmd-L. A lot of the windows clients produce a list of warnings for system files etc. on the Mac server. I think it's a big issue, but nobody else seems to notice.
  18. So, does nobody else read their logs the way I did? This doesn't affect anybody else? Is there any way to see the full details of the log instead of "and xx others"? Thanks -Derek
  19. I just wanted to say thanks, and offer a few words of advice. I have been having a lot of issues with Retrospect since moving from v6 to 8/9 and newer. I back up about 75 machines with a mix of Mac and Windows, client and server OSs on both platforms, about 3TB overall. At the same time we "upgraded" the server hardware from a G5 PowerMac to a 2007 Xeon Mac Pro, still using an ATTO UL5D SCSI card to AIT-5 tapes on both systems. When I decided to use the Mac Pro we had (it was a spare unit), I "generously" increased the RAM to 6GB. No Retrospect system I'd ever run needed this much RAM, and watching the sometimes deceptive Activity Monitor, it never seemed to use all of the RAM at once, so I figured we were in good shape there. Yet performance was challenging, to say the least. Retrospect has been slow, buggy, crashy, and somewhat unreliable for years since I upgraded to v9 (I waited out v8, I think) and the big Mac Pro.

    Recently I had been about to give up completely, but decided that for a few hundred bucks I'd try throwing 16GB of RAM at the system to see what happened - sort of a last-ditch effort. I also installed the v12 update at the same time (mistake, I know). HOLY ****. Retrospect is fast, responsive, error free... I can't say for sure if it was the RAM or the v12 update, but I do wish I had upgraded the RAM a long time ago to find out. I recently had to restore some files from our biggest file server (millions of files), and instead of 45 minutes to run the search it took about 3. The restore of 80 files took under a minute. I also switched to block level backups; Windows client backups now take on the order of 4-5 minutes instead of 30-40 minutes. This I had been attributing to the new block level backups, and maybe the 'new faster snapshot creation' of v12, but this is amazing. Our backups fly, restores and searches snap, opening the management console is no longer a drag...
So thanks, Retrospect team, for helping me through so many error messages over the years, and continuing to improve your product. To anyone who isn't using v12 yet, don't wait. If you think you might not have enough RAM, by all means GET MORE RAM. LOTS MORE. If you aren't using the "block level backups" option, USE IT! Thanks for listening to me rant and praise! I am really glad things are working so much better now. I do wish somebody had suggested a RAM upgrade for us a long time ago. -Derek
  20. Indeed, this is what I've been telling myself for years. But what a difference, I wish I had upgraded sooner. I know there were a lot of 'performance improvements' in v12, but that much? I find that the system keeps 'active' 9-11GB of the 16GB I made available to it, so clearly it's happy to have the extra RAM (the system only runs Retrospect). When the catalog files are 4-8GB or more, it seems obvious to me now that the system will work much better having at least the same amount of RAM to hold it, especially if there is any grooming happening. Of course I can't speak to exactly how Retrospect has been designed to use RAM vs disk when it comes to working with catalog files, but I'd love to hear from a Retrospect engineer on the topic. I'm sure there is a good mix of disk/RAM usage since catalog files can often exceed the size of system memory. I've been using Retrospect for over a decade and yes, a good strategy is crucial to any business (or home). The only 'flaw' in our backup strategy is that the hardware is a bit outdated and in a true disaster I would probably have to source it from ebay.
  21. Our tape library was a big investment, and I still have no problems with its performance or any desire to change our library of tapes. However, our Retrospect server is showing its age (old Mac Pro, can't upgrade to Mavericks, etc). I'm curious whether anybody has experience using a Thunderbolt PCIe expansion box to house a SCSI card, or if there is another way to continue using SCSI on a new Mac? (Preferably a Retrospect-supported way!) I realize, of course, that backups are valuable, and if we are to continue using tapes going forward we should consider investing in a more modern library, but I hate throwing away a perfectly good bit of hardware and a substantial investment in the library and tapes. Any ideas? (Get a last-gen Mac Pro?) Is SCSI supported in 10.9? We have an ATTO UL5D card and a Qualstar 4212 library, and use AIT-5 tapes. Thanks all -Derek
  22. SCSI through thunderbolt?

    Thanks ggirao, I'm sure our setup would work well on a new powerful Mini with thunderbolt storage. Almost - The issue is the 'tape' part of it. The library is the most expensive part of the package. I've seen plenty of thunderbolt to fiber adapters, but no path from thunderbolt to SCSI.
  23. SCSI through thunderbolt?

    Hi Bill, Thanks. That's the approach I'm probably going to take. Our backups have always been intended as 'disaster recovery' only, with offsite redundancy, so I shouldn't need to keep the aging hardware around. As I've been evaluating new hardware, it's still tough to find something as cost-effective as tape for the long term. We want to add some cloud-based solution and archival ability going forward, and we may end up with a multi-tiered backup plan, but I don't want to complicate things too much. My biggest problem with most of the cloud solutions is cost! I may just look at weekly replication of our Retrospect disk storage; even that is pricey. Physical tapes handled by a local secure storage vendor are still extremely cheap! Of course, in a true DR situation, recreating the existing hardware is getting harder and harder. I'm open to ideas, though.
  24. I was going to update some of our systems to Mavericks... then stopped. What about Retrospect? So, what about Retrospect? OS X 10.9 is out, and free, and we expect wide adoption, as quickly as it can be downloaded. The client? The Console? The Engine? We are using Multi Server 10.5. Thanks!
  25. Mavericks support?

    Thanks! I missed that part of the update announcement.