Everything posted by 7e7ded5c-1194-4c60-bde8-107d31a92b2b

  1. It makes sense that Server 8 would require Client 8, but I'm wondering if Server 7.7 with Client 8 is supported.
  2. We have a business requirement to add a Windows 2012 box to our Retrospect backup architecture. Windows 2012 is only supported when Retrospect Client 8 is used, but we are still running Retrospect Server 7.7. Is this setup, with clients on versions 7.7 and 8 administered by a Server 7.7 machine, supported? We don't want to be forced to upgrade the server version, and then all the client versions if Server 8 turns out not to be compatible with Client 7.7, unless it's absolutely necessary. I can't find any documentation that confirms or rules out this type of setup.
  3. I've worked around this issue. I've got one machine per Retrospect server that's scheduled to back up after the bulk of the backups. These trailing machines aren't monitored (development machines and the like), so they act as a log pusher for the important, monitored backups: they back up last, pushing the earlier logs to file and making the other machines report OK, and I don't care whether their own logs get pushed out. This isn't a great solution going forward, since the process breaks if any backup finishes after them, but it's the best I can do without reverting Retrospect versions (which I really don't want to do). Hopefully this bug will be addressed soon.
  4. I still have logging on. It's just annoying, because it looks like a "push text buffer to file" step has been removed. When a new item is logged, the buffer is flushed to file (so the previous action gets written), then the new report is generated but not flushed. It's also still happening: I have 100% completed backups again today, but one per server is showing as not completed in my monitoring.
  5. I upgraded my fleet of Retrospect backup servers (all Windows Server 2008 R2 DC, Dell R510s) to 7.7.630 a few days ago, and since then I've noticed missing backup report log entries. This matters because, with no other way in Retrospect to do it, I parse the backup log to confirm that my clients have been backed up successfully. For the past two days, I've been missing one backup report log entry per Retrospect server. The client in question has been backed up; I can confirm that by looking at the operations log. However, the backup report log (Backup Report.log) doesn't show the latest entry for that client. Has anyone else noticed this bug, and is there a way around it? edit: I believe it may be related to output buffering. I did some work on one of the servers, which generated some output to file, which also flushed the last backup entry to file, and my check returned OK. I'm currently running another backup on another server to see if the pending entry gets flushed. edit2: yes, definitely confirmed. I back up clients A, B and C; A and B are written to file, but client C, finishing last, is not. After all backups complete, backing up client D causes client C's entry to be written, but client D's entry is now the one missing. The operations log is definitely correct throughout. Attempting to force the flush by running a known quick action, e.g. a verify of a backup set, doesn't fix it, though.
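The "parse the backup log to confirm clients were backed up" check described in the post above can be sketched as follows. This is a minimal, hypothetical sketch: the matching here is a simple substring test per client name, not the real Backup Report.log layout, so you would adapt the match to the actual line format your Retrospect version writes.

```python
def missing_clients(log_text, expected_clients):
    """Return the expected clients that have no line mentioning them
    in the report log. A 'hit' is any line containing the client
    name; tighten this to the real Backup Report.log line format."""
    lines = log_text.splitlines()
    return [client for client in expected_clients
            if not any(client in line for line in lines)]
```

With the buffering bug described above, the last-finishing client would show up in this function's output even though the operations log says it completed, which is exactly the false alarm the post is about.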
  6. Thanks for all your answers. I'll see if I can't put a few like machines together to get Retrospect to show me its superpowers.
  7. Backup sets: ~70 (the same number as clients), with 8 execution threads; backups run from midnight to ~5am. The repair-catalog process did work, and no grooms failed over the weekend, but I fear it will be a few weeks before they fail again. Is it usual in your experience to have to repair broken catalogs on a monthly basis? Also, do you know of any performance tuning documentation for Retrospect?
  8. Grooming is weekly, Saturday morning, after the midnight-to-6am backups. Both OS drives have over 200GB of free space, so the catalogs have a fair bit of room to grow. There is a great number of backups being done, though; might that cause an issue? I'm not sure of the exact data volumes, but these machines are backing up about ~70 servers, ~60 on one and ~10 on the other. Would it be prudent to load-balance these backups so that each server handles ~35? edit: to give a picture of how much these servers are processing, the backups from Monday morning (over the weekend) generated 523GB of new data on one server and 105GB on the other (going by the "Completed: XX files, YY GB" figures in the "Normal backup using <script>" entries of the Backup Report.log file), grooming out about ~3TB of data between both machines. Also, I have rebuilt all the broken files (5) on one server; I'll see in the morning whether any of the grooms fail.
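The weekend-volume figures above come from adding up the "Completed: XX files, YY GB" lines in the report log. A small sketch of that tally, assuming the phrasing quoted in the post (adjust the pattern if your log mixes MB and GB or formats numbers differently):

```python
import re

# Pattern mirrors the "Completed: XX files, YY GB" phrasing quoted above.
COMPLETED = re.compile(r"Completed:\s*([\d,]+)\s+files,\s*([\d.]+)\s*GB")

def total_new_backup_gb(log_text):
    """Sum the GB figures from every 'Completed: N files, X GB'
    line in the report log text."""
    return sum(float(m.group(2)) for m in COMPLETED.finditer(log_text))
```

Running this per server over a weekend's log would give the 523GB / 105GB style totals mentioned above without manual counting.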
  9. Hardware RAID 10 SATA drives; the newer machine has a separate RAID 10 SAS array for the OS drive. Both machines are experiencing these issues, the older one more often (more backups on it, older hardware, etc.). The first errors appeared around 2-3 months after the machines went into production.
  10. I have recreated the catalogs multiple times, and the error still returns on the next groom. The errors I'm getting are along the lines of:
      Bad Backup Set header found (0xa600b000 at 875,243,022)
      Bad Backup Set header found (0x1b14fee0 at 875,265,796)
      Backup Set format inconsistency (7 at 1032409029)
      Bad Backup Set header found (0x5f6ef45b at 875,288,570)
      Bad Backup Set header found (0x40404040 at 875,290,194)
      Bad Backup Set header found (0x33577a4e at 875,291,814)
      Bad Backup Set header found (0xffffffff at 875,293,510)
  11. I'm encountering a number of backup set grooming failures, in the form of:
      Grooming Backup Set blerg...
      Groomed zero KB from Backup Set blerg.
      Grooming Backup Set blerg failed, error -1101 ( file/directory not found)
      You must recreate the Backup Set's Catalog File. See the Retrospect User's Guide or online help for details on recreating Catalog Files.
      Can't compress Catalog File for Backup Set blerg, error -1 ( unknown)
      In the past, to correct these issues, I've done the following:
      * create a backup set called blerg_copy
      * change the nightly script to use blerg_copy, not blerg
      * after a period of retention, remove blerg
      However, these issues keep recurring. I believe they may be hardware related (incomplete backups, forced hardware restarts), but I'm unsure how best to address the grooming failures. Is my approach above sound? I have attempted a backup set transfer, but on a 500GB backup set I hit in excess of 11,000 errors within 6 hours of processing, so I cancelled the execution.
  12. OK, I'm part way through my next groom (issues with Windows updates meant this hadn't been running, because the Retrospect Launcher isn't restarting Retrospect on reboot, sad panda), and I'm getting a heap of -1101 errors: "Grooming Backup Set [xyz] failed, error -1101 ( file/directory not found)". I'm guessing this is related to the files I removed that weren't being updated. Removing .rdb files from Explorer is bad, mmkay? I'm hoping a re-catalog might fix it, but I'm thinking I might need to do a full backup of all the failed sets to get them back into a usable state, am I right?
  13. I've upped the log retention to ensure I can see events from the weekend (I should still be able to see entries from 48 hours back). As far as I know, we started with Retrospect 7.7 in January, with no previous versions.
  14. I just ran a one-off groom, and I got a similar log file. It groomed 41.2GB out of a 214.5GB backup set, so that's OK. Checking the file makeup, the set has a full run of 618MB .rdb files from day 0, and most files since then are partials, so that looks OK. I can't check the log from the main groom, because Retrospect lost that log (more than 100 events ago, or something), but it's odd that the groom doesn't seem to have much effect when it runs on the usual schedule. I might have to go down the 'more space' route after checking how much we're actually backing up. Over 5 days, we have 1.1TB more disk space in use, which doesn't make sense though. :/
  15. We have a weekly grooming script, but all backup sets are set to 99% capacity (the default). Would it be an idea to set a limit of (retention) x (max storage size) per backup set to reduce file size? I know the sum of all the maximum backup set sizes will exceed our current storage, but then at least the files should get reused, yes? I think I tried this on a test backup set, but the overnight run then asked me for new media. :/ The logs usually say 'grooming was successful', but comparing before-and-after drive usage across the collection of backup sets, barely any data has been removed, if any. I'm also going to have to check how much we are trying to back up versus disk usage and the current storage capacity of the backed-up devices, and compare that to the storage we have on the backup server. As a side note, do you know an easy way to export any of the information collected from the Retrospect clients, e.g. the data in the volumes, clients, and backup set listings? I can collect this data manually, but it would be far simpler if I could export it from Retrospect. Thanks for all your help so far.
  16. From memory, I have full 618MB .rdb files going back to the start of Retrospect time. There are some in there that are smaller, but most are huge. Is it expected that I should have a full 'set' from the original backup, then only the diffs from then on (hopefully less than 618MB per snapshot)? The actual snapshot listing only shows the last 7, but I have sessions going back to the start of backups, regardless of how much I try to groom.
  17. Also, you're saying that if a file doesn't change, it stays put; but I'm finding that the data volume of the daily backups matches the used space on my systems. That is, instead of one full backup plus increments, I get full backups every day. Unless there's a setting that makes that happen, I don't understand why it's happening.
  18. Most of the .rdb files (say 80-90%) have not been modified in the last day, and ~60% have not been modified in the last week. There are files going back to the start of Retrospect backups that aren't being removed. Isn't there a way to get Retrospect to realise that files AA00001 through AA00XXX are 'old tapes' and to reuse them? I haven't done the maths, but it's a lot of data. We've already got tens of terabytes of hard drives (RAID 10), and I've been told we can get more hardware if required, but the amount of data being backed up is so large that it isn't reasonable to purchase enough drives to store it all. As in, the storage capacity of the servers we are backing up is larger than the backup server itself. So, given that Retrospect can't work out that I want 7 x "current drive usage" as my storage limit, I have to give it the total size of the drive and hope it reuses the .rdb files? Also, about the grooming process itself: am I right that grooming removes only the contents inside the .rdb files, and doesn't actually delete an .rdb file until... what condition? If an .rdb file is kept while it still contains any file that exists on the system, then for a full system being backed up, the original series of .rdb files won't be groomed until every file has been edited or removed?
  19. I've got a weekly groom process that is *supposed* to groom all my backup sets (stored to hard drive) and remove anything older than 7 days (nightly backups). However, I'm seeing that the snapshots for the backup sets show 7 days of data, but older files still exist on the hard drive. From what I can tell, 75% of the drive's contents is taken up by .rdb files older than 7 days. I have tried manually removing files to free up space on the drive, and for the subset of backup sets I've done this for, Retrospect hasn't complained. Two questions on this part: A) Is grooming supposed to remove the snapshot from the backup listing AND remove the .rdb files from the drive? B) Is there any other way to force removal of the .rdb files from disk through the application? In addition, how does recycling work with .rdb files on a hard disk? Do I need to use the recycle method to force the 7-day-old media to be overwritten and increase free space on the drive?
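The "75% of the drive is .rdb files older than 7 days" observation above can be measured rather than eyeballed. A minimal sketch, with the caveat (relevant to the grooming discussion) that mtime is only a proxy: grooming can rewrite a live .rdb file in place, so an untouched file is not necessarily safe to delete by hand.

```python
import os
import time

def stale_rdb_bytes(root, max_age_days=7):
    """Total size of .rdb files under `root` whose modification time
    is older than max_age_days. mtime age is a proxy only; it does
    NOT prove the file is groomable or safe to remove manually."""
    cutoff = time.time() - max_age_days * 86400
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".rdb"):
                continue
            st = os.stat(os.path.join(dirpath, name))
            if st.st_mtime < cutoff:
                total += st.st_size
    return total
```

Comparing this figure to total drive usage gives the percentage quoted above, and rerunning it after a groom shows whether the groom actually released anything on disk.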