Performance - Raw Drives

Prior to evaluating the performance of the drives in a NAS environment, we wanted to check the best-case performance by connecting one of them directly to a SATA 6 Gbps port. Using HD Tune Pro 5.50, we ran a number of tests on the raw drive. The following screenshots present the results for the Seagate Enterprise NAS HDD. Corresponding images for similar drives that have been evaluated previously are also provided in the drop-down box for easy comparison.

[HD Tune Pro 5.50 result screenshots: Sequential Reads, Sequential Writes, Random Reads, Random Writes, Miscellaneous Reads, Miscellaneous Writes]
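
HD Tune is a Windows GUI tool, but the gist of these tests is easy to approximate from a script. The sketch below is a minimal Python approximation of the sequential and random read passes against a raw block device. The device path (/dev/sdb), block sizes, and sample counts are assumptions, it needs root and an idle, unmounted drive, and unlike HD Tune it does not bypass the OS page cache; a more faithful version would open the device with O_DIRECT and aligned buffers.

```python
# Minimal sketch of raw-drive sequential and random read tests (Linux, Python 3).
# Assumptions: the drive under test is /dev/sdb, it is idle and unmounted, and the
# script is run as root. This does not bypass the OS page cache.
import os
import random
import time

DEV = "/dev/sdb"      # hypothetical device node for the drive under test
SEQ_BLOCK = 1 << 20   # 1 MiB transfers for the sequential pass
RND_BLOCK = 4096      # 4 KiB transfers for the random-access pass
SAMPLES = 256         # number of transfers per test

def sequential_read_mbps(dev, blocks=SAMPLES):
    """Read consecutive 1 MiB chunks from the start of the device, return MB/s."""
    fd = os.open(dev, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(blocks):
            os.read(fd, SEQ_BLOCK)
        elapsed = time.perf_counter() - start
        return blocks * SEQ_BLOCK / elapsed / 1e6
    finally:
        os.close(fd)

def random_read_latency_ms(dev, samples=SAMPLES):
    """Read 4 KiB blocks at random, aligned offsets; return mean access time in ms."""
    fd = os.open(dev, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # size of the block device in bytes
        start = time.perf_counter()
        for _ in range(samples):
            offset = random.randrange(0, size - RND_BLOCK)
            offset -= offset % RND_BLOCK       # align to the transfer size
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, RND_BLOCK)
        elapsed = time.perf_counter() - start
        return elapsed / samples * 1000
    finally:
        os.close(fd)

if __name__ == "__main__":
    print("Sequential read: %.1f MB/s" % sequential_read_mbps(DEV))
    print("Random 4K read:  %.2f ms average access time" % random_read_latency_ms(DEV))
```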

Comments

  • Communism - Wednesday, December 10, 2014 - link

    Seagate 1TB per platter drives have been the fastest (per RPM) ever since their introduction.

    Compare them to WD Blacks or HGST drives with 1TB per platter, and in every single sequential benchmark the Seagates have been faster.

    The cache size differential between the competing drives has little to do with the sequential results.
  • Laststop311 - Thursday, December 11, 2014 - link

    The Seagate did have 20-30 MB/sec faster sequential transfers, but the He6 has 2-3 milliseconds lower access times. Personally, I'd rather have the 2-3 milliseconds lower access time than the 20-30 MB/sec higher sequential transfers. Not to mention the lower power use, less heat, less noise, and Hitachi's unrivaled reliability. If you are building a dense NAS setup, the lower heat per drive really helps out. I feel like you would notice the lower latency more than 160 MB/sec vs. 130 MB/sec.
  • MrSpadge - Thursday, December 11, 2014 - link

    "The cache size differential between the competing drives has little to do with the sequential results."

    I know. That's exactly why I said that in reply to Ganesh's

    "... Seagate Enterprise Capacity v4 vs. the WD Red Pro at the 4 TB capacity point. Both of them use the same number of platters, have the same rotational speed. The only difference was the cache size."
  • romrunning - Wednesday, December 10, 2014 - link

    All of the performance test charts show MB/sec generally in the hundreds. However, the "Real Life 60% Random 65% Reads" test shows only single digits in MB/s. Is this a chart labeling problem? If not, why isn't there any explanation of the huge difference?
  • DanNeely - Wednesday, December 10, 2014 - link

    HDDs are very fast for sequential reads/writes because as soon as the drive finishes reading/writing one sector, the next is already underneath the heads. They're horribly slow for random IO because most of the time is spent moving the read/write heads into place rather than reading data (a rough worked example of this is sketched after the comments). This has been the case with every HDD for decades. (Possibly all the way to the beginning, but I'm not familiar with very old designs' limitations.) The main advantage of SSDs is that, because they don't have to move drive heads around, they can be many times faster in random IO than a magnetic HDD. (They're still faster in sequential IO; read the intro to SSD articles on this site from a few years ago for details about their architecture.)
  • romrunning - Wednesday, December 10, 2014 - link

    I agree with you, but that is a serious drop-off. Shouldn't an intelligent NAS be able to have different drives look for different parts of those reads with some type of large LUT?
  • MrSpadge - Wednesday, December 10, 2014 - link

    You've just invented Raid 0 / 5 / whatever :)

    For small files the typical transfer rates of HDDs are in the low single-digit MB/s range. Even if you have 4 of them and performance scales perfectly, that's still very slow. That's why a good SSD on SATA 2 can still be 10 to 100 times faster than an HDD, depending on the actual usage case, even though their maximum transfer rates are comparable.
  • romrunning - Thursday, December 11, 2014 - link

    That's what I was thinking - the test was performed on a 3-drive RAID-5 array in the QNAP, right? So why isn't its RAID controller more intelligent?
  • Supercell99 - Thursday, December 11, 2014 - link

    Honestly, most serious enterprises do not use SATA HDDs for production servers. The queue depth is only 32, vs. 256 for SAS drives. SATA drives are fine for backups; they just can't provide the IOPS an enterprise server running multiple VMs or DBs needs. You'll still need to demand SAS for better IOPS in the HDD storage arena. vSphere VSAN will choke on a SATA-based disk system if a host dies.
  • cm2187 - Thursday, December 11, 2014 - link

    Most clouds use SATA drives.
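
As a footnote to the exchange between romrunning, DanNeely and MrSpadge above, the single-digit MB/s figures in the "Real Life 60% Random 65% Reads" chart fall straight out of simple seek-time arithmetic. The numbers below are assumed typical values for a 7200 RPM drive (not measurements from this review), and the 64 KB request size is likewise an assumption, but they show how mechanical access time, rather than interface bandwidth, dominates random workloads:

```python
# Back-of-envelope model of why small random transfers collapse to single-digit MB/s
# on a hard drive. All figures below are assumed, typical 7200 RPM values, not
# numbers measured in this review.

AVG_SEEK_MS = 8.5          # assumed average seek time
ROT_LATENCY_MS = 4.17      # half a revolution at 7200 RPM: 60000 / 7200 / 2
SEQ_RATE_MB_S = 160.0      # assumed sustained sequential transfer rate
BLOCK_KB = 64              # assumed request size for the random-access workload

access_ms = AVG_SEEK_MS + ROT_LATENCY_MS             # head movement per request
transfer_ms = BLOCK_KB / 1024 / SEQ_RATE_MB_S * 1e3  # time to actually move the data
total_ms = access_ms + transfer_ms

iops = 1000 / total_ms
throughput_mb_s = iops * BLOCK_KB / 1024

print(f"Per-request time: {total_ms:.2f} ms ({access_ms:.2f} ms of head movement)")
print(f"~{iops:.0f} IOPS -> ~{throughput_mb_s:.1f} MB/s at {BLOCK_KB} KB per request")
# With these assumptions: ~77 IOPS and ~4.8 MB/s, versus 160 MB/s sequential.
# Striping the load across a 3- or 4-drive RAID array multiplies that small number
# at best; it cannot hide the mechanical access time paid on every request.
```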
