
mlenny

  1. First off, my setup:
     * Retrospect Server 7.6.123 (RDU 7.6.2.101)
     * 2GHz Xeon 5130, 4GB RAM
     * WSS2003 SP2
     * Infortrend external disk array, connected directly to the backup server (also our NAS head) via fibre to a QLogic QLA2460; RAID6, 6TB usable space, 3.1TB free
     * Dell PV124T LTO3 autoloader (qualified for Retrospect), connected via an Adaptec 39160 SCSI card

     When we migrated all our data to this external RAID in February 2008, our backup speed to LTO-3 was around 4300MB/min. When I looked at new backup sets started in August, the backup speed was averaging around 2500MB/min, and this is also the average we are getting with more recent full backups. I'm now trying to determine how we got such high performance in February and have now dropped almost 50% (see the rough MB/min-to-MB/s sketch after this list of posts). All our testing so far has happened outside of business hours, so there has been no competing I/O from users on the data being backed up.

     First, we used HDTach to test the read performance of the RAID. It scored around 200MB/s across multiple samples, so the raw performance of the hardware didn't seem to be the issue. We then re-enabled the Windows device drivers for the autoloader and ran a test backup against 1TB of files using NT Backup: approximately the same results on tape speed.

     My next thought was that file fragmentation was bad enough to affect read performance of existing data, as we have generally never defragmented our server drives. We ran a full defrag pass using PerfectDisk, consolidating free space, etc. We also ran it again prior to a test backup so we were backing up as "pristine" a set of files as possible, but there was no difference in speed. Our anti-virus software on the server only does nightly scans and no active scanning, but to eliminate any bottleneck it might be causing, we uninstalled the anti-virus software and ran the test again. No difference. Racking our brains for differences between now and then, we weren't sure whether volume shadow copies were active during the initial backups, so we disabled VSS on the drive and tried again. No difference.

     So now I'm posting to the forum to seek further advice on how to proceed, or explanations for what we're seeing. One further point: the LTO3 autoloader was brand new and used pretty much for the first time on those initial backups in February 2008. Does anyone know whether device malfunction or abnormal wear could account for the speed drop? It has been regularly cleaned (in fact we have activated its auto-clean function and left a tape in one of the magazine slots). Thanks.
  2. Files do not compare

     I have run Retrospect 7.6 for three nights now with thorough verification turned back on. Since March, we've been comparing the MD5 digests instead to circumvent the 'compare error' problem. In the last three nights we are still getting up to 15 compare errors per run, but that is way down from the up-to-250 compare errors we were getting back in March. What I can't say is whether we would have got the same reduction on 7.5 anyway, because of some other external factor that may have changed at source since we last tried thorough verification. I'm kicking myself for not trying a backup with thorough verification again prior to the 7.6 upgrade, to check whether the problem had changed in nature or disappeared altogether. I could downgrade to 7.5 again to try it, if that's useful? Will this cause problems with my config files, catalogs, etc., though? I don't know if 7.6 alters their structure in any way that complicates downgrading to an earlier version.
  3. Files do not compare

     Verifying the MD5 digests has been successful for us, so I'm not getting any errors anymore. Thorough verification will still bring the errors back, though. Is it possible to get an answer on how "bullet-proof" it is to verify just against the digests, as opposed to thorough verification against the original files? I'm trying to decide whether we need to prioritise resolving the problem with thorough verification. If you still need example files, I can try and dig some up by referencing the older logs from when we were getting the errors on thorough verification. Let me know and I'll see whether we can release them (confidentiality, and all that). Thanks.
  4. Files do not compare

     Hi, I'm following this thread too as I have the same problems with graphic files (ai, eps, psd, jpg, etc., although also with the occasional doc and other non-graphic file). I found the suggestion to turn off MD5 digests didn't work and I still got all the "didn't compare" errors (between 100 and 250 each night). However, turning MD5 digests back on and then going into the scripts and changing from Thorough Verification to Media Verification removed all errors completely on last night's backups, save the usual few I'd expect from sharing violations on active OS files on our Windows system drives. So it seems to prove the MD5 digests are a true reflection of the contents of the tapes, but a direct comparison with the original files fails. Obviously a successful comparison with the digests doesn't necessarily mean the tapes hold the correct data - it could just mean that the tapes and digests have both recorded the original data incorrectly (the verification sketch after this list spells out the difference between the two checks). If anything, you'd always expect direct comparison to be the most definitive check on the data integrity of what's been written to tape - and that's what's failing. Anyway, hope this gives some additional clues for your engineers.
  5. That's what I'll do then. However, this then throws up the heavy administrative load that I discussed previously (2nd message in the thread). Any suggestions/tips on reducing this?
  6. Using version 6.5 Multi Server. What's the official line in terms of implementing what I'm trying to do? The way I'm trying to do this is to keep running New Media backups until I have the number of sets I require to rotate (e.g. 5 sets for a script on a quarterly rotation, requiring a minimum of 1 year's retrieval at any one time - see the retention sketch after this list), and then to start doing Recycle Backups onto the existing media (starting with the first set again) once that number is reached. Is this how Dantz/EMC envisage this being accomplished? Or would you suggest I just keep doing New Media backups, but erase and reuse the media from the oldest sets as we exceed our maximum retrieval limit? Although I don't see how you could ever reach an end to the numbering sequence for the sets being created that way (potentially running into the hundreds or thousands over enough time, especially for those sets on weekly/monthly rotation). Advice would be gratefully received, as it will be easier for me to alter this backup regime now whilst it's fresh, rather than further down the line. Kind regards.
  7. Where are the OS X Retrospect client configuration files stored on the workstation? In particular, the file that stores the name of the client itself. I know you set the name for the client remotely from the Retrospect Server application, but I am assuming the name for the Retrospect client is also stored locally on the workstation in a preference file somewhere. We will be managing our workstations with a product called Radmind that can be used for rolling out updates, new configurations, etc. It works by ensuring all workstations match a "master client" at the file-system level - a bit like Norton Ghost on Windows. I will need to exclude this file (or files) so it doesn't get managed by Radmind, or else it will get replaced with a generic file, which I assume would mean the client loses its assigned name and won't be found by the Retrospect Server. Can anyone confirm where the file(s) is/are kept? Also, would the problem I foresee above actually happen, or would the server simply synchronise the client's name again when it next connects after a Radmind update? Kind regards.
  8. Okay. I've done some lateral thinking and some experimenting. I can get around the binding problem by creating all the backup sets in advance, manually naming them [001], [002], etc. That way, I can use the binding tab to force them onto the correct drive. So, in my example, I have 4 groups of 3 sets, and those groups rotate every quarter:
     Q1: Backup Set A [001], Backup Set B [001], Backup Set C [001] rotating daily
     Q2: Backup Set A [002], Backup Set B [002], Backup Set C [002] rotating daily
     Q3: Backup Set A [003], Backup Set B [003], Backup Set C [003] rotating daily
     Q4: Backup Set A [004], Backup Set B [004], Backup Set C [004] rotating daily
     I can add all 12 sets to the Backup Script and start it off backing up to the [001] group. Now the new problem...

     I was hoping that a Recycle Backup would automatically recycle the media of the next incremented backup set: e.g. if the [001] set is active, it would reset the members of [002] and start using that. If the last set was active (in my example, [004]), it would return to the [001] set again. It would then adjust the script so that all future backups go to the newly-cycled set. In other words, I'd hoped it would do everything a New Media backup does, except that it cycles through existing sets rather than starting a brand new one. This, to my mind, would be sensible behaviour. However, it appears that if I run a Recycle Media backup against a set, although it will reset and use that set for that specific backup, it doesn't then adjust the existing script to use that set for all future backups. This leaves me with two problems:
     * I can schedule the Recycle Backups within Retrospect, although it is a little clunky as I have to schedule them specifically for each of the [001] to [004] groups. So, for the 4 groups of A-C sets on quarterly rotation, I schedule 4 x 3 Recycle Backups, each offset by 13 weeks (one quarter) and recurring every 52 weeks, moving each A-C set from the [001] through to the [004] sets. This means I have to create 12 entries for this backup alone, as opposed to just 3 entries recurring every 13 weeks if Recycle Backup were a more intelligent feature (the scheduling sketch after this list enumerates what this looks like over a year).
     * I also have to manually edit the scripts after each Recycle Media event to move each of the scripts onto the newly recycled set. This will be time-consuming, as it will involve editing upwards of forty separate entries across all our scripts.

     So I suppose the short version of this post is that I can resolve my original problem with binding, but it's a clunky workaround that requires a lot of constant editing and administration. Any better ideas out there?
  9. I may be missing something, but can you only bind existing tape sets to drives? The problem I have is that we have multiple Backup Scripts, all of which are scheduled to automatically start a New Media set every week/month/quarter as appropriate to our backup policy. We have two tape drives: a fast LTO for priority backups and a slow DLT1 for low-priority backups. Inevitably, the New Media backups happen at weekends, as the amount of data being backed up can be time-consuming. Now the problem...

     I can't see a way to instruct which drive to use when a particular New Media backup happens. One would have hoped that, as the previous backup set in a series was bound to a drive, the next incremented backup set would use that same drive, but it does not appear so. Hence, we come in and find that sometimes a new set was started on the wrong kind of tapes. Can this be avoided? Scheduling the New Media backups differently won't work, as I always have blank tapes in both drives at all times anyway, as spill-over tapes for active scripts. Plus, as the New Media backups happen at weekends, it would inevitably mean someone having to travel in just to put blank tapes in the other drive - not an option for us. I don't want to have to remove the automatic New Media backup schedules and resort to creating a New Media set manually each time, as this requires someone to remember to do it on the right date, and we have lots of scripts scheduled on different rotations. At the moment, we can see when a new set is coming up by looking in the Schedule tab for a New Media Backup entry; all we have to do is make sure we have enough blank tapes in there. Any suggestions?

     Next follow-on point: this is a new-ish implementation. Eventually we will hit the maximum number of backup sets we require for a script in accordance with its backup policy (for example, 3 x monthly rotated sets). At this point, we will change the New Media backups to Recycle Media backups and start cycling the existing media. Now, when the backup sets are first started off, the first backup of each set is a New Media backup, resulting in the first set in a series that contains any data being called, for example, "Backup Set A [001]". The original backup set "Backup Set A" never gets any tapes added to it - it is created simply to allow us to add the series to Backup Scripts. We delete the original tape-less backup set once the first backup has run. This is our preference, as it confuses people having the second set in a series called "[001]": if we state "we start backing up to the first set again tonight", they would expect [001] to be the first set, not a set with no suffix at all. So here's the question... When we start recycling media, will Retrospect go back to the "Backup Set A [001]" set, or will it expect there to be a "Backup Set A" set and throw an error?

     These are my "thinkers" for today. Kind regards.
  10. The main use for Proactive Backup in our company is to back up POP3 mail stores on Mac laptops whenever they come back onto the network. However, this means the source for everyone's backup is called "Microsoft User Data" (we use Entourage on the Macs). The problem comes when I want to see who hasn't been backed up on a given day by perusing the Proactive Backup window and seeing who is still ASAP for scheduling. All I can see is a list of "Microsoft User Data" entries in Source, without any easy way to distinguish which entry relates to which client. I then end up trawling through the History tab, ticking people off when I see them in the log. Adding the client name to the entry in the Source column (or a separate Client column) would help provide a comprehensive overview.

      This leads on to a related suggestion. If someone is still listed as ASAP, it would be really useful to see the date of their last backup listed, as well as when their next backup is scheduled. This would show, at a glance, how stale that client's backup is. For our laptop backups, I wouldn't be too concerned if it was a day or two old, but could chase the user to allow a backup if it was any older. It would also help with the issue that only the user gets the "X days since last backup" warning, while the administrator at the Retrospect Server console is not made aware unless the user contacts them. Thanks.
  11. Uh oh! Now maybe this is partly my fault. Here's what happened this morning. I noticed a log entry saying the catalog is out of sync on one of the tape sets that run on the 122T LTO. I go to the server and load the 3 tapes that make up this set into the 122T. Of course, when exiting the 122T menu system, the slots do not reappear, so I go to the Environment tab and rescan the bus... however...

      At the time, a proactive backup was taking place on the 120T DLT1 on the other SCSI channel of the 39160 card. This promptly failed with communication errors, and the 122T failed during its Scan Media (error code 90) as it tried to load the 3rd tape in the loader. The 120T sorts itself out eventually without intervention. I power-cycle the 122T and it also sorts itself out. I do another Scan Media on the 122T and it reads back all the names correctly, but tapes 1 & 2 fail to get their names to stick once back in the loader: although marked as verified (blue tape symbol), they have no names against them. I manually drag each one into the 122T drive and, once ejected again, all is well, with their names appearing alongside their slots in the loader.

      So, in summary, in one morning I experienced almost every example of misbehaving I had seen before. I must note that I don't usually access the drives whilst another backup is happening, so that particular fact isn't a common factor from my past experiences with these problems. So it looks like I still have problems. Any comments/suggestions?
  12. Okay, tonight was the night (a week late, I know). I've set the transfer rate to 80 for the IDs on which both the PowerVaults sit. As they're both Ultra-2 SCSI, the absolute maximum is 80MB/s anyway (and the devices' maximum is even lower). I've just seen the first backup run and the performance seems on a par with before, so no detrimental effect on raw performance at first glance. However, the library slots still do not automatically reappear once the 122T is put back online (i.e. its menu system is exited on its front panel). I had to do a manual rescan in the Environment tab to get them back. As they have appeared automatically on occasions in the past, I don't know whether this is still symptomatic of a fault with either the communication or the drivers.

      QUESTION: Should the normal behaviour be that the slots all reappear once the PowerVault is back online? It'd be good if this could be the case: I know this happens with the PowerVault 120T we have (the less fancy DLT1 autoloader).

      Overall, I'll see how it all goes. If I get a recurrence of loader errors and misbehaving, I'll try going down to 40MB/s. Again, the real-world performance of the drives means this shouldn't have much of an impact (if any) on their performance. If that still doesn't fix it, I'd say it's time to revisit the Retrospect drivers. I'd really love those slots to come back by themselves without a manual rescan being necessary (can that be looked at in a driver update, if it isn't the case already?). Anyway, I'll keep you all posted, but it may be a while before I have anything to report. Don't you just love intermittent problems!
  13. Okay: by transfer rate, do you mean the maximum *native* data transfer rate of the autoloader (in this case, 15MB/s), or do you mean the SCSI connection (i.e. Ultra160)? As I'm not sure, would anyone care to take a glance at the 122T's tech specs? http://docs.euro.dell.com/docs/stor-sys/122t_LTO/en/122t_lto/specs.htm Can anyone tell me what I should be setting the SCSI card to? Also, does anyone know what specifically I should be changing on an Adaptec 39160 dual-channel card? It'll save me having to dig through the documentation! I'll hopefully be trying this tomorrow night and will report back with any successes. Thanks.
  14. Not had a chance yet. I will get round to it at some point, but as it's not a critical problem (more of a nuisance), I don't expect to try anything within the next couple of weeks. I'll post once I've tried. Of course, if rward3182 wants to try first, perhaps he'll post his results. The race is on to see who gets there first! Thanks.
  15. The Windows drivers are disabled for both the drives and the autoloaders. I'm not sure what you mean regarding the SCSI card transfer rate: I have no issue with the speed or performance of the SCSI connection (ASPI is a lot faster than NT Passthrough). The reason I brought it up was in case there is a known issue regarding communication with the autoloader over ASPI that may cause the problems with old tape names sticking to the slots when the tapes are changed. Also, would this have any bearing on getting the library slots to reappear in Retrospect once the drive is back online, without having to resort to a manual rescan of the bus?
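
A rough back-of-envelope comparison of the throughput figures in post 1. The 80MB/s LTO-3 native streaming rate is the published drive spec and the 200MB/s figure is the HDTach result quoted in the post; this is just arithmetic, not anything Retrospect itself reports:

    # Back-of-envelope conversion of the backup rates quoted in post 1.
    # Assumed reference figures: LTO-3 native streaming rate of 80 MB/s
    # (published drive spec) and the ~200 MB/s HDTach result from the post.
    LTO3_NATIVE_MBPS = 80.0
    RAID_READ_MBPS = 200.0

    def mb_per_min_to_mbps(rate_mb_per_min):
        """Convert Retrospect-style MB/min figures to MB/s."""
        return rate_mb_per_min / 60.0

    for label, rate in [("Feb 2008", 4300), ("Aug 2008", 2500)]:
        mbps = mb_per_min_to_mbps(rate)
        print(f"{label}: {rate} MB/min = {mbps:.1f} MB/s "
              f"({mbps / LTO3_NATIVE_MBPS:.0%} of LTO-3 native, "
              f"{mbps / RAID_READ_MBPS:.0%} of measured RAID read speed)")

On those numbers, the February backups were running at roughly 72MB/s, close to the drive's native maximum, while the August figure is about 42MB/s - only half the drive's native rate and around a fifth of what the RAID benchmarked at - which suggests the drive is no longer being kept streaming rather than a raw disk or tape bandwidth limit.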
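On the verification question in posts 2-4, a minimal sketch of the difference between the two kinds of check, using simulated in-memory file contents (this is not Retrospect's actual verification code, just an illustration of the logic): media verification compares the tape data against the digest recorded at backup time, whereas thorough verification compares the tape data against the file as it sits on disk now.

    import hashlib

    def md5_of(data: bytes) -> str:
        return hashlib.md5(data).hexdigest()

    # Simulated file contents (stand-ins for a real file such as a .psd).
    bytes_read_at_backup = b"pixel data as read during the backup pass"
    bytes_on_disk_now    = b"pixel data as read during the backup pass"  # may differ

    stored_digest = md5_of(bytes_read_at_backup)   # digest recorded in the catalog
    tape_bytes    = bytes_read_at_backup           # what was written to tape

    # Media verification: tape contents vs the stored digest.
    # Passes as long as the tape matches whatever was read at backup time,
    # even if that original read was itself wrong.
    print("digest check:", "ok" if md5_of(tape_bytes) == stored_digest else "FAIL")

    # Thorough verification: tape contents vs the file as it exists on disk now.
    # Fails if the file changed after backup, or if either read path returned
    # different bytes.
    print("direct compare:", "ok" if tape_bytes == bytes_on_disk_now else "did not compare")

This is the asymmetry described in post 4: a clean digest check only proves the tape matches what was read at backup time, while the direct compare is the only check that touches the source file a second time.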
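For the rotation question in post 6, a quick check of the "how many sets" arithmetic (my own working, not an official Dantz/EMC formula): once recycling starts, the worst case is immediately after the oldest set is reused, so N sets on a fixed rotation period guarantee N-1 completed periods of history.

    # Minimum guaranteed retention for N backup sets recycled on a fixed period.
    # Worst case is immediately after the oldest set has been recycled, when
    # only N-1 completed periods of history remain on media.
    def min_retention_weeks(num_sets, rotation_weeks):
        return (num_sets - 1) * rotation_weeks

    print(min_retention_weeks(5, 13))  # 52 -> 5 quarterly sets always cover a year
    print(min_retention_weeks(4, 13))  # 39 -> 4 sets fall one quarter short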
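And for the workaround in post 8, a sketch that simply enumerates the recycle entries the workaround requires over one year. The set names and the start date are made up for illustration, and this does not drive Retrospect in any way - it only counts what has to be entered in the scheduler by hand.

    from datetime import date, timedelta

    # Assumed names and a notional start date, purely to enumerate the entries.
    START = date(2008, 1, 7)
    GROUPS = ["001", "002", "003", "004"]   # quarterly groups
    SETS = ["A", "B", "C"]                  # daily-rotating sets within each group
    QUARTER = timedelta(weeks=13)

    # The workaround: one explicitly scheduled Recycle Backup per set per group,
    # each offset by a quarter -> 4 x 3 = 12 scheduler entries to maintain.
    events = []
    for q, group in enumerate(GROUPS):
        for s in SETS:
            events.append((START + q * QUARTER, f"Recycle Backup -> Backup Set {s} [{group}]"))

    for when, entry in sorted(events):
        print(when, entry)

    print(f"{len(events)} scheduler entries per year, versus {len(SETS)} if a "
          "Recycle Backup advanced the script to the next existing set itself")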