j.a.duke

Copy Backup to tape extremely slow

Recommended Posts

I added a shiny new LTO-8 drive to my backup plan earlier this year.  My copy backups are running fine (no errors), but the performance is currently terrible.

I can't find logs from the early runs (the log only keeps about one month of entries), but I recall the throughput being much better (4-6 GB/min vs. the current mid-double-digit MB/min).

Retrospect is 15.1.2 (I see that 15.5 arrived this morning, but I don't see anything in the release notes regarding this problem).

My config is: Mac mini (mid-2011), 2 GHz quad-core i7, 16 GB RAM, running macOS 10.12.6.  Network connection is via a Promise SANLink2 10GBASE-T adapter.

Primary backup storage is an Areca ARC-8050T3-12.

Tape drive is a Quantum LTO-8 external, connected via an OWC Helios TB3 enclosure holding an ATTO ExpressSAS H680.  This combination was chosen over an all-in-one Thunderbolt/SAS box because the vendor (BackupWorks.com) recommended the ATTO card as more reliable.

For those of you playing along at home, I have the Thunderbolt chain configured like this: Mac Mini - SANLink2 - Apple TB/TB3 adapter - ARC8050T3 - OWC Helios TB3.

Source is the (ungroomed) disk set of our e-mail server.  The rule for that set is "All Files Except for Cache".  The catalog is not set to be compressed.

Destination is a Tape set with "Fast Catalog rebuild" checked.  The set has 12,951,189 files occupying 4.5 TB.

The script has the "All Files" rule set. Options set are Media Verification, Match Source Media Set to destination Media Set, Don't add duplicates to Media Set, Match only files in same location/path.

This machine is dedicated to running Retrospect.

Is there something I should set differently that would improve the performance?  

 

Does the information in this post http://forums.retrospect.com/topic/154263-realistic-time-for-copy-media-set/?do=findComment&comment=264611 have any relevance to what I'm doing?

Thanks.

Cheers,
Jon

23 minutes ago, j.a.duke said:

I added a shiny new LTO-8 drive to my backup plan earlier this year.

What else did you change? Is the following new hardware, too?

24 minutes ago, j.a.duke said:

Tape drive is Quantum LTO-8 external connected via an OWC Helios TB3 with an ATTO ExpressSAS H680

How was the performance with your older tape drive?

 

Is the firmware in the tape drive fully updated?


All the components were new in ~March.  I hadn't had a tape drive in this setup previously (I used a VXA-320 a long time ago).

But I'm typically seeing multi-gigabyte-per-minute performance to the array, yet only 60-70 MB/min from the array to tape for my mail server copy backup.  I run three copies to tape: a small amount of data (<5 GB) from our slowly-being-retired file server at ~2 MB/min, our document management server (~15 GB) at 1-3 GB/min, and the mail server (20-25 GB) at 60-70 MB/min.

Would the number of files needing to be copied affect the performance?  Each of these has two sources (startup & data volumes), but email has ~50k files per copy while the others typically have <7k.
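As a sanity check, the mail-server rates above can be put in common units. This little script uses only figures reported in this thread, taking the midpoints (65 MB/min, 22.5 GB) as assumptions:

```python
# Back-of-the-envelope check on the reported mail server copy backup.
# All inputs come from this thread; midpoints are assumed.

MAIL_RATE_MB_MIN = 65        # middle of the reported 60-70 MB/min
MAIL_SIZE_MB = 22.5 * 1024   # middle of the reported 20-25 GB
MAIL_FILES = 50_000          # ~50k files per copy

rate_mb_s = MAIL_RATE_MB_MIN / 60
copy_minutes = MAIL_SIZE_MB / MAIL_RATE_MB_MIN
per_file_s = copy_minutes * 60 / MAIL_FILES

print(f"effective rate:   {rate_mb_s:.2f} MB/s")       # ~1.08 MB/s
print(f"expected runtime: {copy_minutes / 60:.1f} h")  # ~5.9 h
print(f"cost per file:    {per_file_s:.2f} s")         # ~0.43 s/file
```

At roughly 0.4 s per file, fixed per-file overhead (not raw transfer) would dominate the copy, which is exactly what the file-count question suggests.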

Thanks.

Cheers,
Jon

2 hours ago, j.a.duke said:

Would the number of files needing to be copied affect the performance?

Yes, many small files are slower than a few large files with the same total size. At least when the source is hard drives, not SSD.
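A toy model makes this concrete: total time is raw transfer time plus a fixed per-file cost. The 100 MB/s disk rate and 50 ms per-file overhead below are illustrative assumptions, not measurements:

```python
# Toy model: total time = raw streaming time + fixed per-file overhead.
# Throughput and overhead values are assumed for illustration only.

def transfer_time_s(total_bytes, n_files, stream_bps, per_file_s):
    """Seconds to move total_bytes split across n_files."""
    return total_bytes / stream_bps + n_files * per_file_s

TOTAL = 20 * 10**9    # ~20 GB either way
STREAM = 100 * 10**6  # assume the source disk streams at ~100 MB/s
OVERHEAD = 0.05       # assume 50 ms of seeks/metadata work per file

few_big = transfer_time_s(TOTAL, 10, STREAM, OVERHEAD)
many_small = transfer_time_s(TOTAL, 50_000, STREAM, OVERHEAD)

print(f"10 big files:      {few_big / 60:.1f} min")     # ~3.3 min
print(f"50,000 small ones: {many_small / 60:.1f} min")  # ~45.0 min
```

Same 20 GB, wildly different wall-clock time, purely because of per-file overhead.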

One way to speed things up would be to use disk-to-disk-to-tape. Something like this (with two exceptions, see below).

 

First, I would not use software compression as it would drain the CPU. The tape drive has built-in hardware compression (that should be used).

Second, the "Copy Backup" script should be set to run after the "My backup schedule" (not at the same time). 

9 hours ago, Lennart_T said:

Yes, many small files are slower than a few large files with the same total size. At least when the source is hard drives, not SSD.

One way to speed things up would be to use disk-to-disk-to-tape. Something like this (with two exceptions, see below).

<link to Robin's video>

First, I would not use software compression as it would drain the CPU. The tape drive has built-in hardware compression (that should be used).

Second, the "Copy Backup" script should be set to run after the "My backup schedule" (not at the same time). 

Thanks for the information.

I've got the D2D2T setup as Robin has noted in the video with the settings changes that you have noted.

Backup of e-mail server via Retrospect client to disk media set starts at 2000 hours, finishing within 3 hours (~20 gigs of data).

Copy Backup (from disk media set to tape media set) starts at 0300 hours and runs for ~6 1/2 hours currently. In "the early days" it would be wrapped up well before I arrived in the office.

Would it help if I ran a test to a new tape to benchmark the throughput for an initial backup?

There are really two interrelated problems here: speed of backup and efficient use of tape capacity.  

In my media set window, the first tape shows 4.4 TB used and 9.9 TB free, for a total of 14.3 TB.  The second tape member shows 167.2 GB used and 5.6 TB free, for 5.7 TB total.  Both tapes are LTO-7 media (in an LTO-8 drive).  My goal was to fit an entire year of backups onto a single cartridge for each of our critical systems and send those off-site at year-end.

Thanks.

Cheers,
Jon


Jon,

Tape needs a smooth, fast, data stream to get both advertised performance and advertised capacity -- the tape always runs at a certain speed, and if the data arrives too slowly it either leaves gaps or stops and spools back then restarts, which also inevitably leaves gaps. So I think your "two interrelated problems" are just one -- data delivery to the drive. It might be that the data transfer is just too slow, but also may be that it is too "spurty".
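The streaming argument above can be sketched numerically. The minimum streaming rate, buffer size, and reposition cost below are rough assumed figures for an LTO-class drive, and the model deliberately ignores that the buffer keeps refilling while the drive writes:

```python
# Simplified shoe-shine model: when the source delivers data slower than
# the drive's minimum streaming rate, the drive writes a bufferful, then
# stops, repositions, and waits for the buffer to refill.
# All three constants are assumed for illustration, not drive specs.

MIN_STREAM = 100.0  # MB/s, assumed lowest speed-match rate
BUFFER = 1024.0     # MB, assumed drive buffer
REPOSITION = 3.0    # s, assumed cost of one stop/rewind/restart

def effective_mb_s(source_mb_s):
    """Effective tape throughput for a given source delivery rate."""
    if source_mb_s >= MIN_STREAM:
        return source_mb_s             # drive streams continuously
    refill = BUFFER / source_mb_s      # drive idle, waiting for data
    drain = BUFFER / MIN_STREAM        # drive actually writing
    return BUFFER / (refill + drain + REPOSITION)

print(f"{effective_mb_s(100):.1f} MB/s")  # 100.0 - streaming
print(f"{effective_mb_s(50):.1f} MB/s")   # ~30.4 - worse than the input
print(f"{effective_mb_s(1.1):.1f} MB/s")  # ~1.1  - source-bound anyway
```

Note the last line: at ~1 MB/s the delivery rate itself is the bottleneck, so smoothing out "spurty" delivery alone won't help until the average rate comes up as well.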

I'd start by benchmarking the connection with small numbers of big files, using a standard files-to-tape backup. Try one or more multi-gigabyte disk images or similar; if they go through at better speeds than you're reporting above, the problem likely lies in Copy Backup and the way that process presents data to your tape drive.
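One quick way to check the source side of that equation outside Retrospect is to time a raw sequential read of a big file straight off the array; the path in the comment is a hypothetical placeholder:

```python
# Time a raw sequential read of one large file to measure how fast the
# source array can actually deliver data, independent of Retrospect.
import time

CHUNK = 8 * 1024 * 1024  # 8 MiB sequential reads

def read_throughput_mb_s(path):
    """Read `path` sequentially and return the observed MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / (1024 * 1024) / elapsed

# Hypothetical multi-gigabyte file on the Areca volume:
# print(read_throughput_mb_s("/Volumes/Areca/test.dmg"))
```

If that reports hundreds of MB/s, the array and Thunderbolt chain are fine and the slowdown is in how Copy Backup feeds the drive.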

So if you do get good speeds with the big files, I'd consider a different way of off-siting. It sounds like your day-to-day restores will be done from the disk media set and the tapes are a backstop/archive (and possibly a compliance step), with no requirement to restore directly from them. So I'd back up the disk media sets' RDB files to tape instead, though that would mean restoring any files from "archive" requires first restoring all the RDB files to the disk array and then restoring files from that "rebuilt" backup. Don't forget to back up your catalogs as well, or that "rebuilt" backup will need a Retrospect catalog Rebuild first.

The above isn't as crazy as it sounds -- for years we did something similar with RS6, backing up clients to Internet Backup sets on disk and then taking those backup files to tape, to work around tape speed issues.

Nige


j.a.duke,

What Nigel Smith says in his first paragraph is so eminently correct that another developer of client-server backup software years ago implemented a "multiplexed backup" capability that only works for tape destinations.  But that's designed to speed up Backup scripts (whatever NB's terminology is), not Copy Backup scripts.

My suggestion would be to turn off the following Options:  Match Source Media Set to destination Media Set, Don't add duplicates to Media Set, Match only files in same location/path.  The post in another thread you linked to justifies that.  Why should you care if you copy a few extra backups to an offsite backup, if the overall time to complete that copying is improved?

Another approach would be to reduce the time-after-Backup for the Copy Backup to complete, by implementing the overlapping approach described in the first paragraph of this post and the thread it links to (sorry, the Forums software no longer has the post-in-thread-numbering feature it used to have).  However be very aware of the problem stated in the second paragraph of that same post.

