
Raid 10 vs multiple raid 1 members


I'm just about to add more storage to my server. I have 6x6TB drives and I originally assumed I would do a RAID 10. But then I got to wondering if it would be better to do 3 raid 1 arrays each as a member of the backup set. Then, if I understand Recycling correctly, when the backup set gets full, the first RAID1 member could be recycled and made the most recent member of the Backup Set. Combining this with Periodic Snapshot Transfers to archival media would make for a manageable method of keeping the size from getting out of hand.

Please fill it full of holes. I feel like I'm still trying to figure out how long-term backup storage management is supposed to work. I'm not using tape.



When you recycle, you recycle the entire backup set and all its members. You do not recycle just one member.

You should instead consider "grooming", which is a process where the oldest backups are groomed out, making more space in the backup set.

I assume that your "archival media" is stored off site. You could use cheap USB disks for this. Think fire, theft, flooding, lightning, hurricanes, you name it.

As for RAID, with RAID 1 you lose half the capacity. I would go for RAID 5 or RAID 6. You can set up as many logical volumes on the array as you like. You could have three of them, each larger than the RAID 1 members you propose.
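To make the capacity trade-off concrete, here is a minimal sketch comparing usable space for the 6 x 6 TB drives under the layouts being discussed (nominal capacities only; real filesystems and controllers will report slightly less):

```python
# Usable capacity of an array of identical drives under common RAID levels.
# Nominal TB figures; assumes all drives are the same size.
def usable_tb(drives, size_tb, level):
    if level == "raid1":   # mirrored pairs (RAID 10 is the same ratio): half the raw capacity
        return drives * size_tb / 2
    if level == "raid5":   # capacity of one drive goes to parity
        return (drives - 1) * size_tb
    if level == "raid6":   # capacity of two drives goes to parity
        return (drives - 2) * size_tb
    raise ValueError(f"unknown level: {level}")

for level in ("raid1", "raid5", "raid6"):
    print(level, usable_tb(6, 6, level), "TB")
# raid1 -> 18 TB, raid5 -> 30 TB, raid6 -> 24 TB
```

So RAID 5 buys 12 TB and RAID 6 buys 6 TB over the three-mirror plan, at the cost of longer rebuilds.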


With an array using 6TB drives I would recommend RAID6, not RAID5. With RAID5 and >2TB drives, the statistical chance of losing the entire array to an Unrecoverable Read Error (URE) during recovery from a single disk failure becomes increasingly high, because the rebuild must read every surviving drive in full.

Many factors influence how long a RAID array takes to rebuild, but with 6 x 6TB drives a RAID6 or RAID5 rebuild could easily be measured in days. (For example, my 4 x 4TB RAID5 array in my NAS takes at least 8 hours to rebuild under ideal conditions.)
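The URE risk above can be back-of-envelope estimated. This sketch assumes a consumer-class error rate of one unrecoverable read per 10^14 bits (check your drive's datasheet; enterprise drives are often rated 10x better) and uses a Poisson approximation:

```python
import math

def p_ure_during_rebuild(drives, capacity_tb, ure_rate=1e-14):
    """Rough probability of at least one URE while rebuilding a degraded
    RAID 5 array, which must read all surviving drives in full.
    ure_rate is errors per bit read (1e-14 is a typical consumer rating)."""
    bits_read = (drives - 1) * capacity_tb * 1e12 * 8
    return 1 - math.exp(-ure_rate * bits_read)  # Poisson approximation

print(f"6 x 6TB RAID 5: {p_ure_during_rebuild(6, 6):.0%}")   # ~91%
print(f"4 x 4TB RAID 5: {p_ure_during_rebuild(4, 4):.0%}")   # ~62%
```

These numbers are sensitive to the assumed error rate, but they illustrate why single-parity RAID gets uncomfortable with large drives; RAID 6 survives a URE during rebuild because the second parity can still reconstruct the sector.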


  • 3 years later...

I just looked into RAID options for a file server (a different use case than a backup server) and ended up with 4x4TB + hot spare in RAID 10, for 8 TB usable. My rationale was performance and rebuild time. (I also have DFS-R replicating to a second server, so 2 servers * 5 drives + 1 cold spare = 11 x 4TB drives for 8TB usable.) Again, this is a file server, not a backup strategy.

For a backup server, I'm not sure I'd go with RAID at all. I'd rotate across multiple spindles for each backup. For the same number of drives, I'd get more backups and my risk is only needing to go back 12 hours instead of 6 (assuming 4 sets and every 6 hours). Perhaps the reason to go RAID is to have bigger drives.
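The restore-point arithmetic in the rotation idea above can be sketched out (the 4-set, 6-hour schedule is the poster's example; the worst case is losing the newest disk and falling back to the previous one):

```python
def worst_case_loss_hours(interval_hours):
    """Worst-case data-loss window when rotating backups across separate
    disks: if the newest backup disk fails, you restore from the previous
    set, exposing up to two backup intervals of changes."""
    return 2 * interval_hours

print(worst_case_loss_hours(6))   # 12 hours with backups every 6 hours
```

With RAID you would normally only be exposed for one interval, which is the trade the poster describes: more independent restore points versus a slightly wider worst-case window.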

If I did go RAID on a backup server, I'd suck up the loss of disk space and stick with RAID 1.

With a large RAID/drive set the rebuild time is going to kill you, plus the risk of a URE on rebuild. Also, unless you're getting a better disk controller (HBA), the built-in Intel RAID on most motherboards isn't that good.

