Everything posted by derek500

  1. Hi, thanks for the reply. Interesting to hear that. We aren't having any other issues; overall, 17.5 has been performing very well for us. This is the only issue I found. We upgraded to 17.5 from 15, which had its share of issues, so 17.5 has been a pleasant experience overall so far. It sounds like sticking with 17.0.2 is a good course of action for many until this issue is fixed.
  2. Hello all, this is an informative post; I don't have a question. I wanted to share an issue I discovered with Multi Server (17.5). We recently upgraded to this version, and when I tried to restore files to one of my clients we received an error we hadn't seen before. The console appeared to complete the restore normally and showed progress while retrieving our files, but every restored file was 0 KB in size. The Retrospect log showed the error "File (filename) appears incomplete". I was able to reproduce this error from different backup sets and to different clients.

  I panicked, then tested a few different approaches to restoring and discovered that restores would only complete IF you restore to the server's local hard drive. This error only happens when restoring directly to a client.

  I contacted Retrospect support and opened an official case. Robin Mayoff replied quickly to let me know that they were aware of this issue and that fixing it was a high priority for them. My options were to downgrade to 17.0.2 or wait for their fix to be released. I opted to wait, now knowing that this was the issue. Fortunately we don't need to do a lot of restores, so having to manually move my restored files isn't a dealbreaker.

  But if you are trying to restore directly to a client, you may be impacted by this bug in v17.5! Just putting it out there in case other people are searching for this issue like I was. Hope it's helpful! -Derek
  3. Hi David, Thanks, yes it seems like a late addition to see the light of day in v13. But if they are moving towards a web driven interface anyway, that would most likely resolve this entire issue. I'll submit it for the idea of it, and for us for now the 'remember not to close the console or log off' workaround will have to do. Thanks, -Derek
  4. I'm coming from the Mac side originally; our migration to Windows was recent. Yes, the dashboard shows me minimal information about any running activities without clicking 'Launch'. It looks a lot like the Mac version of Retrospect's "dashboard" page. However, there are several things I would like to do, like reviewing how a script was set up, rescheduling an upcoming script, or grabbing a status list (I usually print the backup status for my own log notebooks; old school, I know). I appreciate being able to see that jobs are running, but I can't interact with the system at all from the dashboard, and that's not very useful. I'd rather go straight into the console and see the 'Activities' tab. An easier workaround I could envision would be a button to "tell all unstarted/pending jobs to delay their start, but let the actively running jobs keep running", so I could launch the console sometime later to follow up on a previous error message, or do some other task, without killing the running job.
  5. Hi, Are you saying that the 'bug fix' is that the warning message is more worthwhile, but doesn't solve the underlying issue of opening the console killing any running jobs? I'm not sure what you mean by being in the 'look-forward' timeframe, unless you just mean that it is showing the 'next jobs' and there is no current activity. I'm looking forward to the web interface version! Thanks, -Derek
  6. Thanks to both of you for your replies. The only solution I see at this time is to leave the console open and never log off but I don't like the idea of leaving disconnected RDP sessions. We will consider our options for now. We are running 12.5 - is 12.6 any different?
  7. I recently migrated from Retrospect Server Mac to Retrospect Server Windows. I am running Retrospect MultiServer on a Windows Server machine that is normally logged out. I dug around and found the steps to allow Retrospect to run as a service and work while no user is logged into the computer, but I have one issue: when a backup job is running and I log into the system and open the Retrospect console, Retrospect quits and re-launches, ending any running backup jobs. How can I avoid this (besides waiting until Retrospect is idle)?

  I set up a 'service' user for Retrospect to run as, and when I open Retrospect it re-launches as the currently logged-in user. I tried logging onto the machine as that service user, but it still quits and re-launches (as the same user). Is there a way to open Retrospect without making it quit first? On the Mac version I considered the 'Engine' and 'Console' to be two completely separate components, and you could open a Console from any computer without stopping the Engine. On the Windows version this doesn't seem to be the case. Otherwise my shift from Mac to Windows for Retrospect has been smooth. Thanks!
  8. Wow, that's a pleasant surprise! Thanks for posting here. I'll check it out...
  9. Thanks for the reply. If that's really the case they should ask for it to be pulled from the app store or else bring it up to date. I'll be wary of it.
  10. I just installed the iOS app on an iPad. It's working fine, but I got a warning message from Apple that this app may slow down my device because it's an old app (paraphrasing; I can't recall the exact message, maybe that it was designed for an older version of iOS?). Just curious whether you guys were aware that the app was being flagged by Apple. I heard through some other Apple news that they are trying to rein in their App Store and starting to clean out non-updated or problematic apps. Thanks -Derek
  11. I'm not seeing the same behavior. When I drill into a folder I've excluded the contents of by rule, they are still checked. Everything in the preview is checked. When I review the backup after it's complete, the rule did work. So in the end I do have my rules working, but I still can't figure out how to examine their effect while I'm developing them.
  12. My understanding of Proactive Backup with multiple destinations is that it backs up each client to the destination holding that client's oldest backup. So from client to client it will show a different destination, depending on where the oldest backup lives.

  In the Media Sets tab of the Proactive script, the checked sets tell Retrospect which media you want to back up to. If checked media isn't physically available, Retrospect will wait for you to add another member to the media set, because that's what you've told it you want to do. So if you have backup destinations that aren't connected, uncheck them in the Media Sets tab, and re-check a destination when you reconnect it.

  Let's say you have three media sets and want to take one offsite: just uncheck it and leave the other two checked in Media Sets, and the Proactive script will automatically rotate each client between the two available destinations. If you only want it to use one of the two, you'll need to uncheck the other as well. It decides client by client where the oldest backup is, and which media set should be next for that client.

  I only have one destination available at a time, and with my Proactive script running, backups start pretty much as soon as a machine hits the network, without any interaction from the console - unless the script is busy backing up a different client. FWIW, since I don't have multiple destinations available at one time, it's possible I've misstated how things work; this is just my understanding from being familiar with Retrospect for many years.
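The destination-selection behavior described in the post above can be sketched in a few lines. This is an illustrative model only, not Retrospect code; the data structures and the `pick_destination` helper are invented for the example.

```python
from datetime import datetime

def pick_destination(client, media_sets):
    """Among checked, currently available media sets, return the one
    holding this client's oldest backup (no backup yet sorts oldest)."""
    candidates = [m for m in media_sets if m["checked"] and m["available"]]
    if not candidates:
        return None  # Retrospect would instead wait for media
    return min(candidates,
               key=lambda m: m["last_backup"].get(client, datetime.min))

media_sets = [
    {"name": "Set A", "checked": True, "available": True,
     "last_backup": {"pc1": datetime(2016, 5, 1)}},
    {"name": "Set B", "checked": True, "available": True,
     "last_backup": {"pc1": datetime(2016, 4, 20)}},
    # Taken offsite and unchecked, so it is never considered:
    {"name": "Offsite", "checked": False, "available": False,
     "last_backup": {"pc1": datetime(2016, 1, 1)}},
]
# pc1's oldest backup among the checked sets is on Set B,
# so pc1 would be backed up to Set B next.
```

The key point of the post is the unchecking: removing "Offsite" from consideration happens because you unchecked it in the Media Sets tab, not because Retrospect notices the media is missing.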
  13. Hi, Stumped again - the rule I created seems to work, but I'm not able to preview it using 'copy' or 'backup'. When I browse what is to be backed up, the rule I have selected doesn't seem to apply - all files and folders on the client are checked off in the preview. But in the real backup script, the rule is definitely applied. I can tell this when I try to restore from a backup to a new media set - the files I wanted to exclude are, in fact, not available to restore, which is great. But I would like to know how to test my rules instead of winging it to see what works... Any advice? Thanks, -Derek
  14. twickland, Thanks, doing an immediate copy and Preview seems to be a great way to test rules. For the moment it seems like I'm not making any of my rules correctly, but at least I have a method to test with now! -Derek
  15. twickland, thanks. So, if I want to include all files on all clients but exclude filetype .xyz ONLY on client XYZ (but still back up everything else on that client and my other clients), I need to put all of these rules in the 'Include' section of my "ultimate rule"?

  Rule: My Ultimate Rule
    Include files based on Any of the following:
      Saved Rule includes All Files
      Saved Rule includes Exclude Rule 1
      Saved Rule includes Exclude Rule 2
      Saved Rule includes Exclude Rule 3
    Exclude files based on Any of the following:

  So basically what I'm doing in my nesting rule is including the RULES, rather than excluding the FILES defined by the rules. Does that sound like a better description? I recall that in Retro 6 I had to do something similar, but it seemed kind of backwards: create an 'include' rule to define the files I wanted to exclude, then add that rule to the 'exclude' portion of my 'Ultimate Rule'. But it sounds like things are a little different in 8+. Or should I be building them the way I used to build them in the old days?
  16. I'm always a little intimidated when Rules get complicated, so here's a multiple-part question.

  1) Is there a way to 'test' a rule? (I'm using Retrospect 13.0.1)

  2) I have several exclude desires and it's getting complicated, so I thought I would make some rules to go together in a nested rule (see https://www.retrospect.com/en/documentation/user_guide/mac/management#working-with-rules and go to the end of that section, second-to-last paragraph). What I want to do is exclude a few different types of things. I created a rule excluding some of these things, and another rule excluding others. Now I want them to work together, so I'm thinking I should create new rules something like this:

  Rule: Exclude Rule 1
    Include files based on
    Exclude files based on Any of the following:
      Any folder named 'my antivirus cache'
      folder named 'some other stuff I don't want'
      etc.

  Rule: Exclude Rule 2
    Include files based on
    Exclude files based on All of the following:
      client name is XYZ
      File name ends in .xyz
      etc.

  Rule: My Ultimate Rule
    Include files based on Any of the following:
      Saved Rule includes All Files
    Exclude files based on Any of the following:
      Saved Rule includes Exclude Rule 1
      Saved Rule includes Exclude Rule 2
      Saved Rule includes Exclude Rule 3

  I was also studying this post in the Windows forum: http://forums.retrospect.com/index.php?/topic/151365-how-to-nesting-selectors/ and think it's kind of the same arrangement, but wanted to make sure. Thanks, -Derek
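If it helps to see the intended logic outside of Retrospect's UI, the "Ultimate Rule" above boils down to simple boolean checks. This is a minimal sketch assuming the rule means "include everything, then drop anything matched by any exclude rule"; the functions and file fields are hypothetical, not Retrospect's actual rule engine.

```python
def exclude_rule_1(f):
    # Any of: folder named 'my antivirus cache', etc.
    return "my antivirus cache" in f["path"]

def exclude_rule_2(f):
    # All of: client name is XYZ AND file name ends in .xyz
    return f["client"] == "XYZ" and f["name"].endswith(".xyz")

def ultimate_rule(f):
    include = True  # Saved Rule "All Files" matches everything
    exclude = exclude_rule_1(f) or exclude_rule_2(f)  # "Any of" the excludes
    return include and not exclude

report = {"client": "XYZ", "name": "report.xyz",
          "path": "/Users/me/report.xyz"}
notes = {"client": "ABC", "name": "notes.xyz",
         "path": "/Users/me/notes.xyz"}
# report.xyz on client XYZ is excluded; the same file type on
# client ABC is still backed up.
```

Under this reading, "Exclude Rule 2" using "All of" is what confines the .xyz exclusion to client XYZ alone.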
  17. Hi David, thank you for the thorough reply. I read jethro's thread and your replies there. The reason I posted here in the grooming thread is that I'm not worried about seeding; I know the initial upload may take some time. I'm more curious about how Retrospect grooms, trying to figure out what our storage buckets will end up like over time. Given the attempts to be conservative with gets and puts, and the very low cost associated with them, I'm not too worried about that either. I just want a better understanding of how the grooming works. Is it going to rewrite the rdb file with a smaller one (one delete, one put, as I see it), or just keep track of the groomed info locally until it can delete that rdb entirely?

  I'm also trying to figure out if we can keep a local copy of the data in sync with the cloud copy. I'm thinking the backup strategy goes something like this:

  - Backup script A backs up clients to media set Local Disk A.
  - Transfer Script A transfers Local Disk A to cloud-based media set 'Cloud B'.
  - Repeat several times.
  - Grooming script grooms Local Disk A and Cloud B.
  - Backup script A and Transfer Script A resume their normal operations, and both media sets end up the same size (assuming the same grooming parameters)?

  I'm also trying to figure out a similar scenario with slightly different parameters, where locally we have a 6-month or 12-month grooming policy, but the cloud media has 'Groom to 3' or something like it. I think this is what you were describing earlier where you said

  Also, if you go to 'more reply options', which opens the full reply editor, there is a checkbox on the right for "enable emoticons". Uncheck that and it works normally! Being a tech forum, I think these should be turned off by default! Also, there is no way to set your standard 'Post Options' defaults anywhere I can find in the user settings here.
  18. I've found Retrospect 13's local 'storage optimized' grooming process to be a massive improvement over 12.5. It's faster, and it works! So far I'm very pleased with the R13 upgrade.

  Question 1) I'm trying to figure out exactly how cloud-based media gets groomed. With Performance Optimized grooming, the description says it deletes whole RDB files. Does this mean: a) in order to reduce the size of the media set (bucket) during the groom, a new RDB file is uploaded containing all but the groomed information, then the old RDB file is deleted, resulting in a smaller bucket at the end (as opposed to local storage, where the RDB file gets trimmed of the groomed data)? Or b) does an RDB file just sit there untouched as more and more internal pieces of it are marked as 'useless', until everything in the RDB file is useless and it gets deleted? Just trying to wrap my head around how much storage we will be using over time. Should it be the same as what's on our local drive?

  Question 2) I want to store everything locally for fast, large restores. If we use a D2D2C model with local backup first, then transfer to cloud backup later, do we need to groom both media sets separately, or will grooming the local set push that reduction along to the cloud media set? Or should we maintain two separate backup scripts, one backing up locally and one to the cloud set? How does this scenario work? Thanks, -Derek

  (edited to disable emoticons. turns into a smiley)
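To make Question 1 concrete, here is a toy model contrasting the two possibilities. It is not a claim about what Retrospect actually does; the sizes and file names are invented, and the two functions just show why the answer matters for how big the cloud bucket stays.

```python
def local_trim(rdb_files, groomed):
    """Hypothesis for local 'storage optimized' grooming:
    each RDB file is trimmed in place, so all groomed bytes go away."""
    return sum(size - groomed.get(name, 0)
               for name, size in rdb_files.items())

def cloud_whole_file(rdb_files, groomed):
    """Hypothesis (b) for the cloud: an RDB file is only deleted
    once ALL of its contents have been marked useless."""
    return sum(size for name, size in rdb_files.items()
               if groomed.get(name, 0) < size)

rdbs = {"AA000000.rdb": 600, "AA000001.rdb": 600, "AA000002.rdb": 600}
groomed = {"AA000000.rdb": 600,   # fully useless: deletable whole
           "AA000001.rdb": 200}   # only partially groomed
# Trimming in place reclaims all 800 groomed MB (1000 MB remain);
# whole-file deletion reclaims only the fully-useless file
# (1200 MB remain), so the bucket stays larger than the local copy.
```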
  19. I just realized in my last post that starting with "Yes" is misleading. Yes, I use cmd-L to view the logs, it's the only way to view them all in one window that I know of. No, viewing the full logs with cmd-L does NOT display the full logs. I recently upgraded to 12.5 and it's still like this. I understand your point about backups running clean, but many of the windows clients I back up spit off a bunch of common warnings. It's great that Retrospect differentiates between "warnings" and "errors" now, so I do what I can to keep them 'clean' from errors but still there's something about backing up a Windows client with Mac server that spews a good number of warnings. It's still a big issue to me that I have no way to view the 'full' logs. It makes no sense that there's no 'advanced view' or 'expanded view' or some other option available to view all of the log.
  20. I might be misunderstanding your issue, but... in my scheduled backup scripts I can go to the 'Summary' tab and rearrange the sources, and that's the order used for backups when the script runs. Can you do that? I find that rearranging them in that Summary tab can be 'finicky' at times but it does control the script order.
  21. Yes, I always review the logs with Cmd-L. A lot of the windows clients produce a list of warnings for system files etc. on the Mac server. I think it's a big issue, but nobody else seems to notice.
  22. So, does nobody else read their logs the way I did? This doesn't affect anybody else? Is there any way to see the full details of the log instead of "and xx others"? Thanks -Derek
  23. Hello all, I just upgraded 12.0 to 12.0.2 yesterday, and I noticed that the logging is very different - instead of showing all of the execution errors the log truncates them with "and 19 others" etc. I review my logs daily and sometimes take action when I see certain files listed there. How can I view the entire log? Also, there is no summary line with "33 execution errors" etc. at the end of a client backup. Can I get that back somehow? I don't see any new options in the console for 'view full log' etc? Thanks -Derek
  24. Indeed, this is what I've been telling myself for years. But what a difference; I wish I had upgraded sooner. I know there were a lot of 'performance improvements' in v12, but that much? I find that the system keeps 9-11GB of the 16GB I made available to it 'active', so clearly it's happy to have the extra RAM (the system only runs Retrospect). When the catalog files are 4-8GB or more, it seems obvious to me now that the system will work much better with at least the same amount of RAM to hold them, especially if there is any grooming happening. Of course I can't speak to exactly how Retrospect has been designed to use RAM vs. disk when working with catalog files, but I'd love to hear from a Retrospect engineer on the topic. I'm sure there is a good mix of disk/RAM usage, since catalog files can often exceed the size of system memory. I've been using Retrospect for over a decade, and yes, a good strategy is crucial to any business (or home). The only 'flaw' in our backup strategy is that the hardware is a bit outdated, and in a true disaster I would probably have to source replacements from eBay.