Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads, from a 16GB span of the disk. The total data read is 1GB.
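
To make the workload shape concrete, here is a minimal Python sketch of a QD1 burst random read test with these parameters. It is only an illustration of the parameters described above, not the actual test harness: the device path is a placeholder, and it uses ordinary buffered pread calls where a real benchmark would issue direct I/O against the raw block device.

```python
# Minimal sketch (not the actual test harness): QD1 4kB random reads issued in
# 32MB bursts over a 16GB span, with idle time sized for a ~20% duty cycle.
import os
import random
import time

BLOCK_SIZE = 4 * 1024            # 4kB per read
BURST_BYTES = 32 * 1024 * 1024   # 32MB per burst
SPAN_BYTES = 16 * 1024**3        # reads confined to a 16GB span
TOTAL_BYTES = 1 * 1024**3        # 1GB read in total
DUTY_CYCLE = 0.20                # bursts account for ~20% of wall-clock time

def run_burst(fd):
    """One burst of 4kB random reads at queue depth 1; returns elapsed seconds."""
    start = time.perf_counter()
    for _ in range(BURST_BYTES // BLOCK_SIZE):
        offset = random.randrange(SPAN_BYTES // BLOCK_SIZE) * BLOCK_SIZE
        os.pread(fd, BLOCK_SIZE, offset)   # synchronous: the next read waits on this one
    return time.perf_counter() - start

def run_test(path="/dev/nvme0n1"):         # device path is a placeholder
    fd = os.open(path, os.O_RDONLY)
    busy = 0.0
    try:
        for _ in range(TOTAL_BYTES // BURST_BYTES):
            elapsed = run_burst(fd)
            busy += elapsed
            # Idle long enough that the burst was only ~20% of the interval.
            time.sleep(elapsed * (1 - DUTY_CYCLE) / DUTY_CYCLE)
    finally:
        os.close(fd)
    return TOTAL_BYTES / busy / 1e6        # average MB/s during the bursts
```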

Burst 4kB Random Read (Queue Depth 1)

The burst random read performance of the Samsung 970 EVO is the best Samsung has ever delivered from TLC NAND flash memory, but the Intel SSD 760p is still a few percent faster.

Our sustained random read performance is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
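
The sweep can be summarized with a similar sketch: step through the queue depths, cap each step at one minute or 32GB, idle between steps, and average the QD1, QD2 and QD4 results for the headline score. Again, this is only an approximation of the structure described above: queue depth is emulated with parallel synchronous reader threads rather than true asynchronous I/O, and the specific queue depths stepped through (powers of two) are an assumption.

```python
# Approximate sketch of the sustained random read sweep (not the actual harness).
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 4 * 1024
SPAN_BYTES = 64 * 1024**3        # reads cover a 64GB span
STEP_BYTES = 32 * 1024**3        # stop a queue-depth step after 32GB...
STEP_SECONDS = 60                # ...or one minute, whichever comes first
IDLE_SECONDS = 60                # cool-off between queue depths

def reader(fd, deadline, byte_budget):
    """One worker thread: QD1 random reads until its time or data budget runs out."""
    done = 0
    while time.perf_counter() < deadline and done < byte_budget:
        offset = random.randrange(SPAN_BYTES // BLOCK_SIZE) * BLOCK_SIZE
        os.pread(fd, BLOCK_SIZE, offset)
        done += BLOCK_SIZE
    return done

def run_queue_depth(fd, qd):
    """Throughput in MB/s with `qd` outstanding reads, emulated by `qd` threads."""
    deadline = time.perf_counter() + STEP_SECONDS
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        futures = [pool.submit(reader, fd, deadline, STEP_BYTES // qd) for _ in range(qd)]
        total = sum(f.result() for f in futures)
    return total / (time.perf_counter() - start) / 1e6

def run_sweep(path="/dev/nvme0n1"):        # device path is a placeholder
    fd = os.open(path, os.O_RDONLY)
    results = {}
    try:
        for qd in (1, 2, 4, 8, 16, 32):    # assumed power-of-two steps from 1 to 32
            results[qd] = run_queue_depth(fd, qd)
            time.sleep(IDLE_SECONDS)       # let the drive cool off between steps
    finally:
        os.close(fd)
    primary_score = sum(results[q] for q in (1, 2, 4)) / 3
    return primary_score, results
```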

Sustained 4kB Random Read

On the longer random read test, the Samsung 970 EVO proves to be the fastest TLC-based drive, but Samsung's MLC-based drives offer up to 20% higher performance.

Sustained 4kB Random Read (Power Efficiency in MB/s/W; Average Power in W)

The Samsung 970 EVO and its OEM sibling PM981 have the worst power efficiency of any recent high-end SSD during the random read test. The 970 EVO draws over 2.5W, while Samsung's previous-generation high-end drives averaged less than 2W for very similar performance.

The performance scaling of the 970 EVO is almost identical to that of the 960 EVO, but the 970 EVO draws more power throughout the random read test.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
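
For reference, those parameters work out as follows; this is just a restatement of the numbers above, not test code:

```python
# Burst random write parameters restated (illustration only).
BLOCK_SIZE  = 4 * 1024           # 4kB per write, issued at QD1
BURST_BYTES = 4 * 1024 * 1024    # 4MB per burst
TOTAL_BYTES = 128 * 1024 * 1024  # 128MB written in total
SPAN_BYTES  = 16 * 1024**3       # writes confined to a 16GB span

writes_per_burst = BURST_BYTES // BLOCK_SIZE   # 1024 writes per burst
num_bursts       = TOTAL_BYTES // BURST_BYTES  # 32 bursts
print(writes_per_burst, num_bursts)
```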

Burst 4kB Random Write (Queue Depth 1)

The burst random write performance from the Samsung 970 EVO is disappointing compared to the PM981, especially for the 1TB 970 EVO. Meanwhile, recent Intel and WD drives have been raising the bar with very fast SLC write caches.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.

Sustained 4kB Random Write

On the longer random write test, the 1TB PM981 provided top-tier performance, but the 1TB 970 EVO is about 12% slower, putting it on par with the previous generation from Samsung. The 500GB 970 EVO is also slightly slower than its PM981 counterpart.

Sustained 4kB Random Write (Power Efficiency in MB/s/W; Average Power in W)

Power efficiency has also regressed for the 970 EVO on the random write test, leaving it well below the standard set by the WD Black and the slower but similarly efficient Toshiba XG5.

The random write performance of the 1TB 970 EVO tops out at just over 1.5 GB/s at queue depths of 8 and higher. The 500GB 970 EVO starts running out of SLC cache and showing inconsistent performance past QD4. The 1TB PM981 was able to ramp up performance much faster than the 970 EVO and hit a maximum of about 1.8GB/s before running out of SLC cache near the end of the test. The 512GB PM981 behaved very similarly to the 500GB 970 EVO.

Comments

  • cfenton - Tuesday, April 24, 2018 - link

    I've been meaning to ask about this for a while, but why do you order the performance charts based on the 'empty' results? In most of my systems, the SSDs are ~70% full most of the time. Does performance only degrade significantly if they are 100% full? If not, it seems to me that the 'full' results would be more representative of the performance most users will see.
  • Billy Tallis - Tuesday, April 24, 2018 - link

    At 70% full you're generally going to get performance closer to fresh out of the box than to 100% full. Performance drops steeply as the last bits of space are used up. At 70% full, you probably still have the full dynamic SLC cache size usable, and there's plenty of room for garbage collection and wear leveling.

    When it comes to manual overprovisioning to prevent full-drive performance degradation, I don't think I've ever seen someone recommend reserving more than 25% of the drive's usable space unless you're trying to abuse a consumer drive with a very heavy enterprise workload.
  • cfenton - Tuesday, April 24, 2018 - link

    Thanks for the reply. That's really helpful to know. I didn't even think about the dynamic SLC cache.
  • imaheadcase - Tuesday, April 24, 2018 - link

    So I'm wondering: I've got a small 8TB server I use for media/backup. While I know I'm limited by network bandwidth, would replacing the drives with SSDs make any impact at all?
  • Billy Tallis - Tuesday, April 24, 2018 - link

    It would be quieter and use less power. For media archiving over GbE, the sequential performance of mechanical drives is adequate. Incremental backups may make more random accesses, and retrieving a subset of data from your backup archive can definitely benefit from solid state performance, but it's probably not something you do often enough for it to matter.

    Even with the large pile of SSDs I have on hand, my personal machines still back up to a home server with mechanical drives in RAID.
  • gigahertz20 - Tuesday, April 24, 2018 - link

    @Billy Tallis Just out of curiosity, what backup software are you using?
  • enzotiger - Tuesday, April 24, 2018 - link

    With the exception of sequential write, there are some significant gaps between your numbers and Samsung's specs. Any clue?
  • anactoraaron - Tuesday, April 24, 2018 - link

    Honest question here. Which of these tests do more than just test the SLC cache? That's a big thing to test, as some of these other drives are MLC and won't slow down when used beyond any SLC caching.
  • RamGuy239 - Tuesday, April 24, 2018 - link

    So these are sold and marketed with IEEE 1667 / Microsoft eDrive support from the get-go, unlike the Samsung 960 EVO and Pro, which had this promised only to get it at the end of their life cycle (the latest firmware update).

    That's good and all, but does it really work? The current implementation on the Samsung 960 EVO and Pro has a major issue: it doesn't work when the disk is used as a boot drive. Samsung keeps claiming this is due to an NVMe module bug in most UEFI firmwares and that it will require motherboard manufacturers to provide a UEFI firmware update including a fix.

    Whether this is indeed true or not is hard for me to say, but that's what Samsung themselves claim over at their own support forums.

    All I know is that I can't get either my Samsung 960 EVO 1TB or my Samsung 960 Pro 1TB to use hardware encryption with BitLocker on Windows 10 when it's used as a boot drive, on either my Asus Maximus IX Apex or my Asus Maximus X Apex, both running the latest BIOS/UEFI firmware update.

    When used as a secondary drive hardware encryption works as intended.

    With this whole mess around BitLocker/IEEE 1667/Microsoft eDrive on the Samsung 960 EVO and Pro, how does it all fare with these new ones? Is it indeed an issue with NVMe and most UEFI firmwares, requiring new UEFI firmware with fixes from motherboard manufacturers, or do the 970 EVO and Pro suddenly work with BitLocker as a boot drive without new UEFI firmware releases?
  • Palorim12 - Tuesday, April 24, 2018 - link

    Seems to be an issue with the BIOS vendors like American Megatrends, Phoenix, etc., and Samsung has stated they are working with them to resolve the issue.
