I decided to build an archive server to store huge amounts of static data, so I went looking for the drives with the best price/GB and planned to build a RAID5/6 from them.
The best fit turned out to be the Seagate Archive HDDs with 8TB capacity and SMR. Having had a bad time with Seagate (Desktop) drives before, I was worried about them, but decided to give them a shot as they come with a 3-year warranty. I read up on SMR and on reviews of the drives; many people reported failures within a few weeks or right out of the box (DOA). I also decided to ignore all the people telling me not to use them in a RAID because they are not built for it, but here comes the but:
First thing: I will use software RAID, to be exact ZFS RAIDZ2 (the 8TB drives still have a URE rate of 1 in 10^14 bits read, so better to use two drives for parity). With hardware RAID, when a read error occurs the drive spends time trying to recover the failed sector and remap it to a spare; the hardware RAID controller does not tolerate that delay, so it kicks the drive out of the array as failed, which is bad. Many hard drives support TLER/ERC/CCTL, which basically reports the read error to the controller right away instead of spending time recovering/remapping the sector. The Seagate Archives don't have this, but with software RAID that is not important, as the default disk timeout under ZFS ranges from one minute to approximately 16 minutes depending on the implementation, which is plenty for such operations.
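If you want to check whether a drive offers ERC at all, smartctl can query the SCT Error Recovery Control settings. Here is a minimal sketch (a Python wrapper around smartctl; the device names are assumptions and need adjusting for your system), which on drives without ERC/TLER/CCTL should simply come back as unsupported or not settable:

```python
import subprocess

def scterc_report(device):
    """Query the SCT Error Recovery Control (ERC/TLER) settings of a drive via smartctl.

    Returns the raw smartctl output; drives without ERC/TLER/CCTL support
    report the feature as unsupported.
    """
    result = subprocess.run(
        ["smartctl", "-l", "scterc", device],
        capture_output=True,
        text=True,
    )
    return result.stdout

# Hypothetical device names -- adjust for your system
# (e.g. /dev/ada0../dev/ada5 on FreeBSD/FreeNAS, /dev/sdX on Linux).
for dev in (f"/dev/ada{i}" for i in range(6)):
    print(dev)
    print(scterc_report(dev))
```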
Second thing: you have to know the workload you will put on the “NAS”. A server with archive HDDs is only for archiving, not for “floating” data; in my use case it is a store-once, delete-never, read-always system, which is a perfect fit for the drives. Each drive has a workload rating of 180TB per year. Leaving the writes out of the equation (they only happen once), scrubbing the pool every 35 days means roughly 10 scrubs a year, which works out to a bit under half of the rated workload (assuming the drive is filled to the top). Keeping this in mind, I still have more than 90TB per year per drive left before the workload rating is saturated (and I don't want to come anywhere close to that). For me this is more than enough; I expect only 10-20TB at most to be read per year.
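For reference, the back-of-the-envelope math behind that, as a small sketch (the 180TB/year workload rating comes from the drive spec and the 35-day scrub interval is the FreeNAS default; the rest is plain arithmetic):

```python
# Rough workload estimate for one 8TB archive drive, assuming a full pool
# and the FreeNAS default scrub interval of 35 days.
DRIVE_CAPACITY_TB = 8
WORKLOAD_RATING_TB_PER_YEAR = 180
SCRUB_INTERVAL_DAYS = 35

scrubs_per_year = 365 / SCRUB_INTERVAL_DAYS          # ~10.4 scrubs per year
scrub_read_tb = scrubs_per_year * DRIVE_CAPACITY_TB  # ~83 TB read per drive per year
headroom_tb = WORKLOAD_RATING_TB_PER_YEAR - scrub_read_tb

print(f"scrubs per year:  {scrubs_per_year:.1f}")
print(f"scrub reads/year: {scrub_read_tb:.0f} TB per drive")
print(f"remaining budget: {headroom_tb:.0f} TB per drive per year")
```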
Another thing to keep in mind is the write speed. I have not tested a single disk's write speed, but the total write speed of my RAIDZ2 array was astonishingly good (80MB/s) when transferring the initial data. Read speed is GBit wire speed for me; I haven't tested the limits of AES-NI, as my whole array is encrypted, but it should be somewhere around 500MB/s.
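Just to put those numbers side by side (the ~500MB/s AES-NI figure is my rough assumption, not a measurement), a quick sanity check shows that the single gigabit link caps reads long before encryption becomes the bottleneck:

```python
# Illustrative bottleneck comparison for this setup.
GBIT_LINK_MBPS = 1000 / 8 * 0.94   # ~117 MB/s usable after protocol overhead
AESNI_ESTIMATE_MBPS = 500          # assumed encryption throughput, not measured
INITIAL_WRITE_MBPS = 80            # observed sequential write to the RAIDZ2 pool

print(f"network limit:   ~{GBIT_LINK_MBPS:.0f} MB/s")
print(f"encryption est.: ~{AESNI_ESTIMATE_MBPS} MB/s")
print(f"observed write:  ~{INITIAL_WRITE_MBPS} MB/s (below even the network limit)")
```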
The items used to build the archive server:
- 350W Enermax PSU
- 6x 8TB Seagate Archive v2 HDDs
- ASRock C2550D4I (Intel Atom with 4×2.4GHz, AES-NI, ECC and 2x Intel GBit onboard)
- Icy Dock backplane, 3x 5.25″ bays to 5x 3.5″ hot-swap slots (with a custom, quieter fan that still moves more CFM)
- 2x 8GB DDR3L ECC DIMMs
- Cooltek Antiphon Black Midi Tower silenced case
- 16GB USB thumb drive for FreeNAS
So far it has worked fine for the last two weeks, let's see what the future holds.
Update 2015/9/20: Still no issues 🙂
Update 2015/9/26: Scrub just finished, no errors and an avg. read speed of 87MB/s per drive, which sums up to a total of 522MB/s across the 6 drives with AES-NI enabled.
Update 2016/6/28: Still no issues, slightly decreased write speed (60-80MB/s) 🙂
Update 2016/7/1: One drive failed (SMART reports no errors, but reads and writes time out); the disk makes a clicking sound when this happens.
Update 2016/7/12: Resilvering is running at 132MB/s; the write speed of the new drive is 19.2MB/s avg. and 35.8MB/s max.
Update 2017/1/22: The mainboard died; the server just went offline, shows no video and doesn't boot anymore. It seems to be affected by this bug; warranty covered it.
Update 2017/6/28: Another drive failed, this time it just disappeared. It doesn’t show up when attached to any computer.