Archive server with FreeNAS and SMR drives

I decided to build an archive server to store huge amounts of static data. For that reason I went for the drives with the best price per GB and planned to build a RAID5/6 array from them.

It turned out those drives are the Seagate Archive HDDs with 8TB capacity and SMR (shingled magnetic recording). Having had a bad time with Seagate drives before (the Desktop series), I was worried about them, but decided to give them a shot as they come with three years of warranty. I read up on SMR and on reviews of the drives; many people reported failures within a few weeks or right from the beginning (DOA). I also decided to ignore all the people telling me not to use them in a RAID because they are not built for it, but here is why that matters less in my case:

First, I will use software RAID, to be exact ZFS RAIDZ2 (the 8TB drives still have an unrecoverable read error rate of one per 10^14 bits, so better use two drives for parity). With hardware RAID, when a read error occurs the hard drive takes its time to remap the failed sector to a spare one; the hardware RAID controller does not tolerate that delay, so it kicks the drive out of the array as failed, which is bad. Many hard drives support TLER/ERC/CCTL, which basically makes the drive report the read error to the controller right away instead of spending time trying to recover and remap the sector. The Seagate Archives don't have this, but with software RAID it hardly matters, as the default waiting time for ZFS ranges from one minute to approximately 16 minutes depending on the implementation, which is plenty for such operations.
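
As a side note, smartmontools can show whether a drive supports SCT Error Recovery Control (the mechanism behind TLER/ERC); the device name below is just an example:

    # query the current SCT ERC (TLER) setting; on drives without support,
    # smartctl reports that the command is not supported
    smartctl -l scterc /dev/da0

    # on drives that do support it, the read/write recovery limits can be
    # set in tenths of a second (7.0 seconds here)
    smartctl -l scterc,70,70 /dev/da0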

Second, you have to know the access pattern of the "NAS": a server with archive HDDs is only for archiving, not for frequently changing data. In my use case it is a store-once, delete-never, read-always system, which is a perfect fit for the drives. Each drive has a workload rating of 180TB per year. Leaving the writes aside, as they only happen once, scrubbing the pool every 35 days means roughly 10 scrubs a year; with the drives filled to the top that is about 83TB of reads per drive per year, a bit less than half the rated workload. That leaves me roughly another 95TB per year per drive before the workload rating is saturated (and I don't want to get close to it anyway). For me this is more than enough; I expect only 10-20TB at most to be read per year.
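
The back-of-the-envelope numbers behind this, as a quick shell sketch (8TB read per drive per scrub and the 35-day scrub interval assumed):

    # scrubs per year at a 35-day interval
    echo "365 / 35" | bc -l          # ~10.4

    # worst-case scrub reads per drive per year with the drives completely full
    echo "365 / 35 * 8" | bc -l      # ~83 TB, against a 180 TB/year rating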

 

Another thing to keep in mind is the write speed. I have not tested a single disc's write speed, but the total speed of my RAIDZ2 array was astonishingly fast (about 80MB/s) when transferring the initial data. Read speed is gigabit wire speed for me; I haven't tested the limits of AES-NI, as my whole array is encrypted, but it should be somewhere around 500MB/s.
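
For a rough idea of what the CPU's AES-NI can sustain, OpenSSL's built-in benchmark is a quick check; note that the FreeNAS volume encryption (GELI) typically uses AES-XTS, so this single-threaded CBC number is only a ballpark:

    # single-threaded AES throughput using the CPU's AES-NI instructions,
    # reported for several block sizes
    openssl speed -evp aes-128-cbc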

The items used to build the archive server:

  • 350W Enermax
  • 6x 8TB Seagate Archive v2 HDDs
  • ASRock C2550D4I (Intel Atom with 4×2.4GHz cores, AES-NI, ECC support and 2x Intel GbE onboard)
  • Icy Dock backplane, 3x 5.25″ bays for 5x 3.5″ hot-swap slots (with a custom fan that is quieter yet moves more CFM)
  • 2x 8GB DDR3L ECC DIMMs
  • Cooltek Antiphon Black Midi Tower silenced case
  • 16GB USB thumb drive for FreeNAS

So far it has worked fine for the last two weeks; let's see what the future holds.

Update 2015/9/20: Still no issues 🙂

Update 2015/9/26: Scrub just finished, no errors and an avg. read speed of 87MB/s per drive, which sums up to a total of 522MB/s across the 6 drives with AES-NI enabled.
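
For reference, this is where such numbers can be read off: the scrub rate shows up in the pool status, and per-disk throughput can be watched live (the pool name "tank" is just an example):

    # overall scrub progress and the rate ZFS reports for it
    zpool status tank

    # live per-disk read throughput while the scrub runs (FreeBSD/FreeNAS)
    gstat -p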

Update 2016/6/28: Still no issues, slightly decreased write speed (60-80MB/s) 🙂

Update 2016/7/1: One drive failed (SMART reports no errors, but reads and writes time out); the disk makes a clicking sound when this happens.

Update 2016/7/12: Resilvering is running at 132MB/s; the write speed of the new drive is at 19.2MB/s avg. and 35.8MB/s max.

Update 2017/1/22: The mainboard died; the server just went offline, shows no video output and doesn't boot anymore. It seems to be affected by this bug; warranty covered it.

Update 2017/6/28: Another drive failed, this time it just disappeared. It doesn’t show up when attached to any computer.

6 Comments

  • Doug Weiner

    Did you ever record real write and read speeds from the six drives in RaidZ2?

    Thanks
    Doug

    • Felix Brucker

      I didn't do server-side benchmarking, only SMB network speeds for transferring data to and from the NAS, plus the scrub read speed.

      However, I just did a basic 40GiB dd test against a file on a dataset with compression turned off, resulting in an average of 127MB/s write speed and 285MB/s read speed (bs=4K, caches dropped).

      Doing the same test with bs=512K resulted in 242MB/s write speed and 395MB/s read speed.
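
      A sketch of roughly what such a test looks like; the pool/dataset names are assumptions, and the read number is only meaningful if the ARC cache is kept out of the way (for example by exporting and re-importing the pool between the write and the read):

        # dataset with compression disabled so dd measures real disk throughput
        zfs create -o compression=off tank/bench

        # sequential write test, ~40GiB (10485760 * 4KiB)
        dd if=/dev/zero of=/mnt/tank/bench/testfile bs=4k count=10485760

        # sequential read test of the same file
        dd if=/mnt/tank/bench/testfile of=/dev/null bs=4k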

  • Tomas

    Dear Felix,
    I am glad I could read this. I plan to build a similar thing (6x8TB SMR Seagates); however, I am not very keen on using RAIDZ, since upscaling the array can only be done by adding the same number of drives, and I am looking more at 2x8TB every 2 years.
    Do you think raid6 could be a solution for me? It has a similar structure to RAIDZ2 and I expect the rebuild to take about the same time.
    Did you have any problems with your onboard SATA controller? I am also considering the same on my GB MB (since I am a student and a little short on money, too). 🙂
    Thank you

    • Felix Brucker

      Hi Tomas,
      if by raid6 you mean mdadm raid6, then yes, it should work with those drives as well, though you lose checksums, and the "scrubs" (called checkarray in the mdadm world) as well as rebuilds always run over the whole disks, regardless of how much space is actually used on that block device. Also keep in mind that the drive count minus the parity drives should ideally be a power of two for optimal performance and space usage; choosing a non-optimal number of drives won't hurt that much for an archive server, of course. That said, rebuilds take MUCH longer than with ZFS if the array is not completely full; otherwise it's the same. I'm only using the 6 onboard Intel chipset SATA connectors, which have worked without any issue so far. I haven't used the Marvell ones, but someone reported them working under FreeNAS, so they should work as well or even better on Linux.
      greetings
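
      For reference, a minimal sketch of the mdadm side of this; device names are assumptions:

        # create a 6-drive raid6 array (4 data + 2 parity)
        mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

        # the mdadm counterpart of a ZFS scrub ("checkarray" on Debian is a
        # wrapper around this): read-verify the whole array
        echo check > /sys/block/md0/md/sync_action

        # watch check/rebuild progress
        cat /proc/mdstat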

  • Matthew the frog

    Since it is coming up on a year, how has this performed? I was considering adding a Seagate SMR pool to my existing server for the write-once-read-often data. I was intending to retain my existing Pool for Read/Write activities. I think this falls within the parameters for which these drives should be used.

    Thank you for this blog post, it helps those of us considering the use of these drives for these situations.

    • Felix Brucker

      So far I have had no issues with this setup, no failed drive and no bad S.M.A.R.T. data.

      The write speed is acceptable for me; it varies from 60 to 80MB/s, so a slight decrease from the initial values. Read speed, however, still maxes out the gigabit connection.

      br
