
Backup to an offsite NAS device


kujain


I am trying to set up NAS storage at a server farm and use Retrospect to create an offsite copy of the local backup disk (transfer/archive/duplicate). This is mainly to avoid offsite media like tape/CD and make the process more manageable. Since we deal with about 100 GB of backup in one session, we cannot possibly have enough bandwidth to transfer that much data in a reasonable time frame. Can someone suggest possible strategies and methods for this situation?

 

Also, which of the methods - duplicate / archive / transfer - will copy a file from a local backup set to the offsite one only if it has changed? Only if this is possible does the idea of an offsite backup server become feasible. I am planning to integrate this with a service like Amazon's S3, which provides about 100 GB of space for $15/mo. and can be mapped as a network drive letter in Windows.

 

Thanks !

 

Server OS: Windows Server 2003

Retrospect Version: 7.0.326

Backup Media: Internal hard disk

Offsite Backup Media: Amazon S3 virtual network drive


  • 2 weeks later...

Hi Kujain,

For one, there is a contradiction in your statements: you want to back up to an internet virtual drive, yet you also state that you don't have enough bandwidth to transfer 100 GB of data in a "reasonable" time frame.

So the question here is: how much time is "reasonable" to you? If you feel that moving that data from the "backup" server in less than 8 hours is reasonable, but moving it from a "staging" server in less than 24 hours is also reasonable, then a possible strategy is similar to the one I use.

I have a USB drive that holds 500 GB and contains a single backup set called "offsite usb". Each morning, for 8 hours, I allow ALL backup sets on my "backup" server to TRANSFER to the "offsite usb" backup set. The "offsite usb" backup set has a 10-backups-per-source grooming policy. Thus, every night before I leave the office, I have at least 10 backups of all the servers in my datacenter.

How can this work for you? If you transfer the backup sets for, say, 8 hours during the day (which may be a reasonable amount of time for your backup server to do the transfer) to a USB 2.0 drive, you should be able to get the entire 100 GB done and then some. Then overnight, which gives it 16 hours, copy the backup set to the Amazon S3 virtual drive while the USB drive is attached to a "staging" server.

Hopefully you have enough bandwidth to push 100 GB in 16 hours. If not, you may want to transfer the backup set to a staging server (instead of the USB drive) with enough HDD or DAS space to handle about 150 GB to 200 GB, so that if the push to Amazon is not done in 16 hours, you can continuously transfer data to Amazon from the staging server without interrupting the daily "offsite backup" job on the backup server.
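As a rough sanity check on the overnight window, here is a small sketch that computes the sustained line rate needed to move a given amount of data in a given number of hours (the figures are illustrative, not measured):

```python
def required_mbit_per_s(gigabytes: float, hours: float) -> float:
    """Return the sustained line rate (Mbit/s) needed to move
    `gigabytes` of data within `hours`."""
    bits = gigabytes * 1024 ** 3 * 8      # total bits to transfer
    seconds = hours * 3600
    return bits / seconds / 1_000_000     # megabits per second

# Pushing 100 GB in the 16 overnight hours:
rate = required_mbit_per_s(100, 16)
print(f"{rate:.1f} Mbit/s")  # roughly 14.9 Mbit/s sustained
```

So the overnight push only works if the upstream link can hold about 15 Mbit/s for the whole 16 hours; anything less and the staging-server variant becomes necessary.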

 

And in case you didn't gather this from the above, a TRANSFER BACKUP SET operation is the method that copies from a local backup set to an offsite one: it compares all files in the backup sets and copies only the files needed to create full server restores. For example, I use two backup sets for one group of servers. The first is called "Servers S-T", the second "Servers W-F". Upon initial implementation, I got a full backup on Sunday and a full backup on Wednesday. However, because I use only one offsite backup set, the transfer copies only newly backed-up files between the two sets. So on Thursday (after Wednesday's full backup was done), I still had only ONE full backup's worth of data in the offsite backup set, because Retrospect recognized that all the files copied on Sunday were already in the Wednesday backup set.
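The deduplication behaviour described above can be sketched in a few lines. This is purely illustrative (it is not Retrospect's actual code): backup sets are modelled as path-to-fingerprint maps, and only files whose fingerprint is missing from the destination set get "copied":

```python
def transfer_backup_set(source: dict, destination: dict) -> list:
    """Copy only the files whose (path, fingerprint) pair is not
    already in the destination set; return the paths actually copied."""
    copied = []
    for path, fingerprint in source.items():
        if destination.get(path) != fingerprint:
            destination[path] = fingerprint   # "copy" the file over
            copied.append(path)
    return copied

# Sunday's full backup populates the offsite set:
offsite = {}
sunday = {"/etc/hosts": "a1", "/var/db": "b2"}
transfer_backup_set(sunday, offsite)

# Wednesday's full backup shares most files with Sunday's,
# so only the changed file crosses the wire:
wednesday = {"/etc/hosts": "a1", "/var/db": "c3"}
print(transfer_backup_set(wednesday, offsite))  # → ['/var/db']
```

This is why the offsite set holds only one full backup's worth of data even though two full backups fed into it.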

 

Hope that helps a bit. If not, or if you need more clarification, just let me know. Thanks.


