Stephen J
Posted November 22, 2017

I'm just about to add more storage to my server. I have 6x6TB drives, and I originally assumed I would do a RAID 10. But then I got to wondering whether it would be better to do three RAID 1 arrays, each as a member of the backup set. Then, if I understand Recycling correctly, when the backup set gets full, the first RAID 1 member could be recycled and made the most recent member of the backup set. Combining this with Periodic Snapshot Transfers to archival media would make for a manageable method of keeping the size from getting out of hand.

Please fill it full of holes. I feel like I'm still trying to figure out how long-term backup storage management is supposed to work. I'm not using tape.

Thanks,
Stephen
Lennart_T
Posted November 22, 2017

When you recycle, you recycle the entire backup set and all its members; you do not recycle just one member. You should instead consider "grooming", a process where the oldest backups are groomed out, making more space in the backup set.

I assume that your "archival media" is stored off site. You could use cheap USB disks for this. Think fire, theft, flooding, lightning, hurricanes, you name it.

As for RAID: when using RAID 1, you lose half the capacity. I would go for RAID 5 or RAID 6. You can set up as many logical volumes on the RAID disk as you like. You could have three of them, each being larger than the RAID 1 disks you propose.
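The capacity trade-off Lennart describes can be sanity-checked with a quick sketch. The numbers below are my own arithmetic for six 6TB drives (decimal terabytes, ignoring filesystem overhead), not figures from this thread:

```python
def usable_tb(n_drives, drive_tb, level):
    """Rough usable capacity for common RAID levels.

    Simplified: assumes decimal TB and ignores filesystem/controller
    overhead, so real formatted capacity will be somewhat lower.
    """
    if level == "raid1":   # mirrored pairs: half the raw capacity
        return n_drives // 2 * drive_tb
    if level == "raid5":   # one drive's worth of capacity goes to parity
        return (n_drives - 1) * drive_tb
    if level == "raid6":   # two drives' worth of capacity go to parity
        return (n_drives - 2) * drive_tb
    raise ValueError(f"unknown level: {level}")

for level in ("raid1", "raid5", "raid6"):
    print(level, usable_tb(6, 6, level), "TB usable")
```

So with 6 x 6TB, RAID 1 mirrors give 18TB, RAID 6 gives 24TB, and RAID 5 gives 30TB, which is why RAID 5/6 looks tempting on capacity alone.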
Scillonian
Posted November 22, 2017

With an array using 6TB drives I would recommend RAID 6. I would not recommend RAID 5: with RAID 5 and >2TB drives, the statistical chance of a total loss of the array because of an Unrecoverable Read Error (URE) during recovery from a single disk failure becomes increasingly high.

Many factors influence how long a RAID array takes to rebuild, but with 6 x 6TB drives a RAID 6 or RAID 5 rebuild could easily be measured in days. (For example, my 4 x 4TB RAID 5 array in my NAS takes at least 8 hours to rebuild under ideal conditions.)
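The URE risk Scillonian mentions can be estimated back-of-envelope. The sketch below assumes the commonly quoted consumer-drive spec of one unrecoverable error per 1e14 bits read; vendor specs vary (enterprise drives are often rated 1e15), and real-world rates are debated, so treat this as an illustration of why the risk grows with drive size, not a precise prediction:

```python
# Assumed consumer-drive URE spec: 1 error per 1e14 bits read.
URE_RATE = 1e-14

def rebuild_ure_chance(surviving_drives, drive_tb):
    """Chance of hitting at least one URE while rebuilding a degraded
    array, assuming every surviving drive must be read end to end and
    errors are independent per bit (a simplification)."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - URE_RATE) ** bits_read

# Degraded RAID 5 built from 6 x 6TB: the rebuild reads 5 full drives.
print(f"RAID 5 rebuild URE chance: {rebuild_ure_chance(5, 6):.0%}")
```

Under those assumptions the rebuild reads 2.4e14 bits, so the chance of at least one URE is around 90%. RAID 6 survives a URE during a single-disk rebuild because the second parity can still reconstruct the lost sector.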
Stephen J (thread author)
Posted November 22, 2017

Thank you for the input @Lennart_T and @Scillonian. I think that is what I needed to make this work.

Stephen
klubar
Posted December 17, 2020

I just looked into RAID options for a file server (a different use case than a backup server) and ended up going with 4x4TB + hot spare in RAID 10, for a total of 8TB usable. My rationale was the performance and rebuild time. (I also have DFS-R replicating to a second server, so 11 x 4TB drives ended up as 8TB usable: 2 servers x 5 drives + 1 cold spare = 11 drives.) This is a file server, not a backup strategy.

For a backup server, I'm not sure I'd go with RAID at all. I'd rotate across multiple spindles for each backup. For the same number of drives I'd get more backups, and my risk is only needing to go back 12 hours instead of 6 (assuming 4 sets and a backup every 6 hours). Perhaps the reason to go RAID is to have bigger drives.

If I did go RAID on a backup server, I'd suck up the lost disk space and stick with RAID 1. With a large RAID/drive set the rebuild time is going to kill you, plus the risk of a URE on rebuild. Also, unless you're getting a better disk controller (HBA), the built-in Intel RAID on most motherboards isn't that good.
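The 12-hours-instead-of-6 point is simple interval arithmetic, sketched here with klubar's assumed numbers (4 rotating sets, one backup every 6 hours):

```python
# klubar's assumed rotation: 4 independent backup drives, one backup
# every 6 hours, written round-robin.
SETS = 4
INTERVAL_H = 6

def drive_for_backup(i):
    """Round-robin: backup number i lands on drive i % SETS."""
    return i % SETS

# Normally the newest backup is at most one interval old. If the drive
# holding the newest backup dies, you fall back to the previous drive,
# so the worst-case restore point moves back by one more interval.
worst_case_normal = INTERVAL_H
worst_case_after_loss = 2 * INTERVAL_H

print(f"backups 0 and 4 share drive {drive_for_backup(0)}")
print(f"worst case: {worst_case_normal} h normally, "
      f"{worst_case_after_loss} h after losing the newest drive")
```

The design choice being weighed is exactly this: RAID protects the single newest copy, while rotation accepts a slightly older restore point in exchange for more independent copies from the same drive count.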