6 TB NAS Drives: WD Red, Seagate Enterprise Capacity and HGST Ultrastar He6 Face-Off
by Ganesh T S on July 21, 2014 11:00 AM EST

Introduction and Testbed Setup
The SMB / SOHO / consumer NAS market has been experiencing rapid growth over the last few years. With declining PC sales and the increasing affordability of SSDs, hard drive vendors have scrambled to make up for the deficit and increase revenue by targeting the NAS market. The good news for them is that this growth is expected to accelerate in the near future, thanks to the increasing amounts of user-generated data coming from mobile devices.
Back in July 2012, Western Digital began the trend of hard drive manufacturers bringing out dedicated units for the burgeoning SOHO / consumer NAS market with the 3.5" Red hard drive lineup. The firmware was tuned for 24x7 operation in SOHO and consumer NAS units. 1 TB, 2 TB and 3 TB versions were made available at launch. Later, Seagate also jumped into the fray with a hard drive series carrying similar firmware features. Over the last two years, the vendors have been optimizing the firmware features as well as increasing the capacities. On the enterprise side, hard drive vendors have been supplying different models for different applications, but all of them are quite suitable for 24x7 NAS usage. For example, the WD Re and Seagate Constellation ES are tuned for durability under heavy workloads, while the WD Se and Seagate Terascale units are targeted towards applications where scalability and capacity are important.
Usually, the enterprise segment is quite conservative when it comes to capacity, but datacenter / cloud computing requirements have made capacity a primary weapon in warding off all-flash solutions. HGST, a Western Digital subsidiary, was the first vendor to bring a 6 TB hard drive to the market. The sealed, helium-filled HDDs can accommodate up to seven platters (instead of the five usually possible in air-filled units), resulting in a bump up to 6 TB in the same height as traditional 3.5" drives. Seagate adopted a six-platter design for the Enterprise Capacity v4 6 TB version. Today, Western Digital launched the first NAS-specific 6 TB drive targeting SOHO / home consumers, the WD Red 6 TB. In expanding their Red portfolio, WD provides us an opportunity to see how the 6 TB version stacks up against other offerings targeting the NAS market.
The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while evaluating three different hard drives targeting the NAS market:
- Western Digital Red 6 TB [ WDC WD60EFRX-68MYMN0 ]
- Seagate Enterprise Capacity 3.5 HDD v4 6 TB [ ST6000NM0024-1HT17Z ]
- HGST Ultrastar He6 6 TB [ HUS726060ALA640 ]
Each of these drives targets a slightly different market. The WD Red is aimed mainly at SOHO and home consumers, the Seagate Enterprise Capacity is built for ruggedness under heavy workloads, and the HGST Ultrastar aims at data center and cloud storage applications with a balance of performance and power efficiency.
Testbed Setup and Testing Methodology
Unlike our previous evaluation of 4 TB drives, we managed to obtain enough samples of the new drives to test them in a proper NAS environment. As usual, we start off with a feature set comparison of the three drives, followed by a look at raw performance when connected directly to a SATA 6 Gbps port. In the same PC, we also evaluate each drive using some aspects of our direct-attached storage (DAS) testing methodology. For evaluation in a NAS environment, we configured three drives in a RAID-5 volume and ran selected benchmarks from our standard NAS review methodology.
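Purely as a back-of-the-envelope illustration (not part of the test methodology itself), the sketch below shows the arithmetic behind a three-drive RAID-5 volume of 6 TB members: the usable capacity and the amount of data the surviving drives must supply during a rebuild. Real volumes lose a little more to file system overhead.

```python
# Rough sketch (not from the review): capacity math for the 3 x 6 TB
# RAID-5 volume used in the NAS tests. File system overhead is ignored.
DRIVE_TB = 6      # advertised capacity per member drive
MEMBERS = 3       # drives in the RAID-5 volume

usable_tb = DRIVE_TB * (MEMBERS - 1)        # one drive's worth of space holds parity
rebuild_read_tb = DRIVE_TB * (MEMBERS - 1)  # data read from survivors to rebuild one member

print(f"Usable capacity: {usable_tb} TB")                 # 12 TB
print(f"Read from survivors on rebuild: {rebuild_read_tb} TB")  # 12 TB
```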
We used two testbeds in our evaluation, one for benchmarking the raw drive and DAS performance and the other for evaluating performance when placed in a NAS unit.
AnandTech DAS Testbed Configuration

| Component | Configuration |
|---|---|
| Motherboard | Asus Z97-PRO Wi-Fi ac ATX |
| CPU | Intel Core i7-4790 |
| Memory | Corsair Vengeance Pro CMY32GX3M4A2133C11 32 GB (4x 8GB) DDR3-2133 @ 11-11-11-27 |
| OS Drive | Seagate 600 Pro 400 GB |
| Optical Drive | Asus BW-16D1HT 16x Blu-ray Write (w/ M-Disc Support) |
| Add-on Card | Asus Thunderbolt EX II |
| Chassis | Corsair Air 540 |
| PSU | Corsair AX760i 760 W |
| OS | Windows 8.1 Pro |

Thanks to Asus and Corsair for the build components.
In the above testbed, the hot swap bays of the Corsair Air 540 deserve special mention: they made cycling the drives through the benchmarking process fast and efficient. For NAS evaluation, we used the QNAP TS-EC1279U-SAS-RP. It is very similar to the unit we reviewed last year, except that it has a slightly faster CPU, more RAM, and support for both SATA and SAS drives.
The NAS setup itself was subjected to benchmarking using our standard NAS testbed.
AnandTech NAS Testbed Configuration

| Component | Configuration |
|---|---|
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPU | 2 x Intel Xeon E5-2630L |
| Coolers | 2 x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Tertiary Drive | OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD) |
| Other Drives | 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS) |
| Network Cards | 6 x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
| Network Switch | Netgear ProSafe GSM7352S-200 |
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
Comments
NonSequitor - Thursday, July 24, 2014
So something isn't adding up here. I have a set of nine Red 3TB drives in a RAID6. They are scrubbed - a full rebuild - once a month. They have been in service for a year and a half with no drive failures. Since it's RAID6, if one drive returns garbage it will be spotted by the second parity drive. Obviously they can't be returning bad data, or they would have been failed out long ago. The array was previously made of nine 1TB Greens, scrubbed monthly, and over two and a half years I had a total of two drive failures, one hard, one a SMART pre-failure.

asmian - Friday, July 25, 2014
Logically, I think this might well be due to the URE masking on these Red drives - something the Green drives weren't doing. You've been lucky that with a non-degraded RAID6 you've always had that second parity drive that has perhaps enabled silent repair by the controller when a URE occurred. I've been pondering more about this, and here's what I have just emailed to Ganesh, with whom I've been having a discussion...

--------------------
On 24/07/2014 03:45, Ganesh T S wrote:
> Ian,
>
> Irrespective of the way URE needs to be interpreted, the points you
> raise are valid within a certain scope, and I have asked WD for their
> inputs. I will ping Seagate too, since their new NAS HDD models are
> expected to launch in late August.
Thanks. This problem is a lot more complicated than it looks, I think, and than a single URE figure might suggest. But the other "feature" of these Red drives is also extremely concerning to me now. I am a programmer and often think in low-level algorithms, and everything about this URE masking seems wrong in the likely usage scenarios. Please help me with my logic train here - correct me if I'm wrong. Let's assume we have an array of these Red drives. Irrespective of the chance of a URE while the array is rebuilding or in use, let's assume one occurs. You have stated WD's info that the URE is masked and the disk simply returns dummy info for the read.
If you were in a RAID5 and you were rebuilding after you've lost a drive, that MUST mean the array is now silently corrupted, right? The drive has masked the read failure and reported no error, and the RAID controller/software has no way to detect the dummy data. Critically, having a backup doesn't help if you don't know that you NEED to repair or restore damaged files, and without a warning, due to simple space constraints a good backup will likely be over-written with one that now contains the corrupted data... so all ways round you are screwed. Having a masked URE in this situation is worse than having one reported, as you have no chance to take any remedial action.
If you are in RAID6 and you lost a drive, then you still have a parity check to confirm the stripe data with while rebuilding. But if there's a masked URE and dummy data, then how will the controllers react? I presume they ALWAYS test the stripe data with the remaining parity or parities for consistency... so at that point the controller MUST throw a major rebuild error, right? However, they cannot determine which drive gave the bad data - just that the parity is incorrect - unless it's a parity disc that errored and one parity calculates correctly while the other is wrong. If they knew WHICH disc had UREd then they could easily exclude it from the parity calculation and rebuild that dummy data on the fly with the spare parity, but the masking makes that impossible. The rebuild must fail at this point. At least you have your backups... hopefully.
Obviously the above situation will also be the same for a RAID5 array in normal usage or a degraded RAID6. Checking reads with parity, a masked URE means a failure with no way to recover. If you have an unmasked URE at least the drive controller can exclude the erroring disc for that stripe and just repair data silently using the remaining redundancy, knowing exactly where the error has come from. After all, it's logically just an EXPECTED disk event with a statistically low chance of happening, not necessarily an indication of impending disk failure unless it is happening frequently on the same disc. The only issue will be the astronomically unlikely chance of another URE occurring in the same stripe on another disc.
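(Editor's note: the following is a minimal, hypothetical sketch of the behaviour described above, using byte-wise XOR to stand in for single-parity RAID math. It is not any particular controller's implementation; it simply shows why a reported URE can be repaired from the remaining redundancy while a masked one slips through unnoticed.)

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (single-parity RAID math)."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

# A three-drive RAID-5 stripe: two data blocks and their parity.
d0 = b"block on drive 0"
d1 = b"block on drive 1"
parity = xor_blocks([d0, d1])

# Reported URE on d1: the controller knows exactly which block is bad,
# so it rebuilds it from the surviving block and the parity.
assert xor_blocks([d0, parity]) == d1   # exact recovery

# Masked URE on d1: the drive silently returns dummy data instead of an
# error, the read "succeeds", and nothing prompts a parity check.
dummy = bytes(len(d1))                  # whatever the drive substitutes
assert dummy != d1                      # data is wrong, yet no error was raised
```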
Fundamentally, a masked URE means you get bad data without any explanation of why a disc is returning it, which gives no information to the user (or to the RAID controller so it can take action or warn the user appropriately). For me, that's catastrophic. It all really depends on what the controllers do when they discover parity errors and UREs in rebuild situations and how robust their recovery algorithms are - an unmasked URE does not NEED to be a rebuild-killer for RAID6, as thank G*d you had a second redundancy disc...
Anyway, the question will be whether these new huge drives will, as you say, accumulate empirical evidence from users that array failures are happening more and more frequently. Without information from reviews like yours that warn against their use in RAID5 (or mirrors) despite the marketing as NAS products, the thought-experiment above suggests the most likely scenario is extremely widespread silent array corruption. I stand by my comment that this URE masking should be a total deal-breaker in considering them for home array usage. Better a disk that at least tells you it's errored.
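(Editor's note: a hedged back-of-the-envelope calculation, not figures from WD or the commenter, of the classic concern in this thread: the chance of hitting at least one URE while rebuilding a degraded three-drive RAID-5 of 6 TB members, assuming every surviving bit is read once and UREs are independent events at the quoted spec-sheet rate.)

```python
# Editor's sketch: probability of at least one URE during a rebuild of a
# degraded 3 x 6 TB RAID-5, assuming independent errors at the spec rate.
BITS_TO_READ = 2 * 6e12 * 8              # two surviving 6 TB drives, in bits

for ure_rate in (1e-14, 1e-15):          # 1 error per 10^14 or 10^15 bits read
    p_clean = (1 - ure_rate) ** BITS_TO_READ
    print(f"URE rate {ure_rate:.0e}: P(>=1 URE during rebuild) ~ {1 - p_clean:.0%}")
# Roughly 62% at a 10^-14 rate and about 9% at 10^-15 under these assumptions.
```

Whether that URE is then reported or masked is exactly what decides between a rebuild that fails loudly and one that completes with silent corruption.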
NonSequitor - Tuesday, July 29, 2014
That still doesn't add up - there are no unmasked UREs in my situation, as I'm using Linux software RAID. It is set to log any errors or reconstructions, and there are literally none. One of these arrays has a dozen scrubs on it now with no read errors whatsoever.

m0du1us - Friday, July 25, 2014
@asmian This is why we run 28-disk arrays minimum for SAN. If you really want RAID6 to be reliable, you need A LOT of disks, no matter the size. Larger disks decrease your odds of rebuilding the array; that just means you need more disks. Also, you should never build a RAID6 array with fewer than 5 disks. At 5 disks, you get no protection from a disk failure during rebuild. At 12 disks, you can have 2 spares and 2 failures during rebuild before losing data.

NonSequitor - Friday, July 25, 2014
Your comment makes no sense. A five-disk RAID-6 is N+2. A 28-disk RAID-6 is N+2. More disks will decrease reliability. Our storage vendor advised keeping our N+2 array sizes between 12 and 24 disks, for instance: 12 disks as a minimum before it started impacting performance, 24 disks as a maximum before failure risks started to get too high. Our production experience has borne this out. Bigger arrays are actually treated as sets of smaller arrays.
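(Editor's note: a small sketch under the same illustrative assumptions as above - 6 TB members, a 1-per-10^14-bit URE rate - showing how the data read during a rebuild, and hence the URE exposure, scales with array width. This is one reason wide N+2 sets tend to be bounded in practice, even though the redundancy level stays the same.)

```python
# Editor's sketch: rebuild read volume and expected URE count versus array
# width for an N+2 (RAID6-style) set of 6 TB members. Illustrative only.
DRIVE_TB = 6
URE_RATE = 1e-14                          # 1 error per 10^14 bits read

for members in (5, 12, 24, 28):
    surviving = members - 1
    read_tb = surviving * DRIVE_TB
    expected_ures = read_tb * 1e12 * 8 * URE_RATE
    print(f"{members:2d} disks: read {read_tb:3d} TB to rebuild one member, "
          f"~{expected_ures:.1f} UREs expected")
```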
LoneWolf15 - Friday, July 25, 2014

asmian, it totally depends on what you are building the array for, and how you are building it. Myself, I'd use them for home or perhaps small business, but in a RAID 10 or RAID 6. Then I'd have some guaranteed security. That said, if I needed constant write performance (e.g., multiple IP cameras) I'd use WD RE, and if I wanted enterprise-level performance, I'd use enterprise drives.
That, and I realize that RAID != backup strategy. I have a RAID-5 at home, but I sure as heck have a backup, too.
tuxRoller - Monday, July 21, 2014
Hi Ganesh,
Would you mind listing the max power draw of these drives? That is, how much power is required during spin-up?
WizardMerlin - Tuesday, July 22, 2014
If you're not going to show the results on the same graph, then for the love of god don't change the scale of the axis between graphs - it makes quick comparison completely impossible.

ganeshts - Tuesday, July 22, 2014
I have tried that before, and the problem is that once all the drives are on the same scale, absolute values become really difficult to read off for some of them. Been down that road and decided the issues it caused were not worth avoiding the small effort it takes for readers (including me and my colleagues) to glance left and right at the axes to see what the absolute numbers are.

Iketh - Tuesday, July 22, 2014
what? why would they become difficult to "track absolute values" ??