Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't get consistent IO latency from an SSD is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour and we record instantaneous IOPS every second.
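
As a rough illustration of this workload only (the article's numbers come from Iometer, not this script), here is a minimal Python sketch driving fio with equivalent parameters; the device path, runtime, and logging settings are assumptions based on the description above.

```python
# Minimal sketch of the IO consistency workload using fio (an assumption; the
# article's data comes from Iometer). WARNING: destroys all data on DEVICE.
import subprocess

DEVICE = "/dev/sdX"  # hypothetical secure-erased target drive

# Step 1: write sequential data across the whole drive so every user-accessible
# LBA has data associated with it.
subprocess.run([
    "fio", "--name=precondition", f"--filename={DEVICE}",
    "--rw=write", "--bs=128k", "--direct=1", "--ioengine=libaio", "--iodepth=32",
], check=True)

# Step 2: 4KB random writes at QD32 across all LBAs. fio's buffers are
# pseudo-random (incompressible) by default; average IOPS is logged once per
# second for roughly half an hour.
subprocess.run([
    "fio", "--name=consistency", f"--filename={DEVICE}",
    "--rw=randwrite", "--bs=4k", "--direct=1", "--ioengine=libaio", "--iodepth=32",
    "--time_based", "--runtime=2000",
    "--write_iops_log=io_consistency", "--log_avg_msec=1000",
], check=True)
```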

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying levels of empty space, which is frankly a more realistic scenario for client workloads since client drives are rarely filled to capacity.
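
For reference, the arithmetic behind the added spare area can be sketched as below. Treating over-provisioning as the fraction of raw NAND not covered by the LBA range in use is an assumed convention, and the capacity figures are purely illustrative rather than the SSD 730's actual numbers.

```python
# Sketch of the spare-area arithmetic. The capacities below are illustrative,
# not the SSD 730's actual figures.
GIB = 2**30
GB = 10**9

def spare_area(raw_nand_bytes: float, lba_span_bytes: float) -> float:
    """Fraction of raw NAND left as spare area for the controller."""
    return (raw_nand_bytes - lba_span_bytes) / raw_nand_bytes

raw_nand = 512 * GIB            # assumed raw NAND in a "480GB" class drive
full_span = 480 * GB            # writing across all user-accessible LBAs
limited_span = 0.75 * raw_nand  # LBA range limited so spare area reaches 25%

print(f"default spare area:     {spare_area(raw_nand, full_span):.1%}")
print(f"with limited LBA range: {spare_area(raw_nand, limited_span):.1%}")
```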

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
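
If you want to recreate these three views from your own per-second IOPS log, a matplotlib sketch along these lines would do it; the CSV file name and its two-column (seconds, IOPS) format are assumptions, not the format used for the graphs in this article.

```python
# Sketch of the three graph styles described above, assuming a two-column CSV of
# (seconds, IOPS) sampled once per second. File names are hypothetical.
import csv
import matplotlib.pyplot as plt

with open("io_consistency.csv") as f:
    rows = [(float(t), float(iops)) for t, iops in csv.reader(f)]

steady = [(t, iops) for t, iops in rows if t >= 1400]  # steady-state portion

fig, axes = plt.subplots(3, 1, figsize=(8, 10))
views = [
    (rows,   "log",    "Full test, log scale"),
    (steady, "log",    "Steady state (t >= 1400s), log scale"),
    (steady, "linear", "Steady state (t >= 1400s), linear scale"),
]
for ax, (data, scale, title) in zip(axes, views):
    ax.plot([t for t, _ in data], [iops for _, iops in data])
    ax.set_yscale(scale)
    ax.set_title(title)
    ax.set_xlabel("time (s)")
    ax.set_ylabel("4KB random write IOPS")

plt.tight_layout()
plt.savefig("io_consistency.png")
```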

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[IO consistency graph: Intel SSD 730 480GB, Intel DC S3500 480GB, Intel SSD 530 240GB, SanDisk Extreme II 480GB, Seagate 600 480GB; data sets: Default, 25% Spare Area]

Thanks to the enterprise DNA in the SSD 730, IO consistency is outstanding. We are looking at S3500-level consistency here, which isn't surprising given the similarity between the two: the faster controller and NAND interface mainly help with peak performance, while IO consistency is built deep into the architecture of the drive. The only drive that can really challenge the SSD 730 is OCZ's Vector 150, while even the SanDisk Extreme II falls short once it reaches steady state. Also of note is that increasing the OP yields a healthy boost in performance, and the SSD 730 actually manages more IOPS than the S3700 even though it has slightly less OP (25% vs 28%).

[IO consistency graph: Intel SSD 730 480GB, Intel DC S3500 480GB, Intel SSD 530 240GB, SanDisk Extreme II 480GB, Seagate 600 480GB; data sets: Default, 25% Spare Area]

Here you can see the differences a bit better with the linear scale. The SSD 730 manages around 15K IOPS, compared to roughly 10K IOPS for the SanDisk Extreme II. With the increased over-provisioning, the SSD 730 is in a class of its own, maintaining a minimum of 30K IOPS.

[IO consistency graph: Intel SSD 730 480GB, Intel DC S3500 480GB, Intel SSD 530 240GB, SanDisk Extreme II 480GB, Seagate 600 480GB; data sets: Default, 25% Spare Area]

TRIM Validation

To test TRIM, I filled the drive with incompressible sequential data and proceeded with 120 minutes of incompressible 4KB random writes at a queue depth of 32. I then measured sequential write performance with Iometer after issuing a single TRIM pass to the drive.

Intel SSD 730 Resiliency - Iometer Sequential Write
                      Clean         After TRIM
Intel SSD 730 480GB   351.3 MB/s    402.9 MB/s

TRIM definitely works, as performance after the TRIM pass is actually higher than in the clean, secure-erased state.
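
As a rough sketch of the same sequence on a raw block device, using fio and blkdiscard in place of Iometer (a substitution; the device path and runtimes are assumptions), it would look something like this:

```python
# Sketch of a TRIM validation pass using fio and blkdiscard instead of Iometer
# (an assumed substitution). WARNING: destroys all data on DEVICE.
import subprocess

DEVICE = "/dev/sdX"  # hypothetical target drive

def fio(name: str, *extra: str) -> None:
    """Run a single fio job against the raw device."""
    subprocess.run([
        "fio", f"--name={name}", f"--filename={DEVICE}",
        "--direct=1", "--ioengine=libaio", "--iodepth=32", *extra,
    ], check=True)

# 1) Fill the drive with sequential data (fio buffers are incompressible by default).
fio("fill", "--rw=write", "--bs=128k")

# 2) 120 minutes of 4KB random writes to push the drive into a dirty state.
fio("torture", "--rw=randwrite", "--bs=4k", "--time_based", "--runtime=7200")

# 3) A single TRIM pass over the entire device.
subprocess.run(["blkdiscard", DEVICE], check=True)

# 4) Re-measure sequential write speed; with working TRIM it should return to
#    (or exceed) the clean, secure-erased level.
fio("after-trim", "--rw=write", "--bs=128k", "--time_based", "--runtime=60")
```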

Comments

  • futrtrubl - Friday, February 28, 2014 - link

    "JEDEC's SSD spec, however, requires that client SSDs must have a data retention time of one year minimum whereas enterprise drives must be rated at only three months"
    I hadn't actually thought about this before for SSDs and after doing some checking around this seems to be the minimum retention time once the endurance cycles have been exhausted.
    Presumably this retention time is higher for sectors that are not exhausted. Does anyone know what sort of retention times could be expected from fresh/moderately used drives?
    The next question would be: do controllers move once-written data around to refresh it and/or as part of wear leveling (like OS files that are untouched after the install)?
  • Mr Perfect - Friday, February 28, 2014 - link

    I don't know about your first question, but the answer is "yes" to your second question about the wear leveling. Controllers try to keep writes even across all blocks to keep endurance up.
  • futrtrubl - Saturday, March 1, 2014 - link

    Sorry, that was a badly worded question but yes I am aware that wear leveling exists. What I am asking is, if you write a file once and then only read from it for the next few years while the rest of the drive is being written/rewritten, will the controller intentionally rewrite or move that file? Whether it does it for wear leveling or to refresh "old" data is less important but would be nice to know, if it does.
  • Solid State Brain - Tuesday, March 4, 2014 - link

    SSD-spec NAND memory is supposed to have a 10-year data retention when fresh (0 write cycles). I haven't been able to find much real world data about it, but from a Samsung datasheet about their enterprise drives, including those with TLC NAND memory (here http://www.samsung.com/global/business/semiconduct... ) one can extrapolate that for the rated endurance with sequential workloads at 3 months of data retention, TLC NAND cells would have to endure about 2000 write cycles.
    So very roughly, assuming it's the exact same memory as in the consumer drives (no reason to assume it's not the case), for these drives we have: 0 write cycles = 10 years retention, 1000 cycles = 1 year retention, 2000 cycles = 3 months retention. Torture tests by users worldwide have shown that at over 3000 write cycles, these drives have a data retention ranging from hours to days. So, we could further summarize this as:

    cycles / data retention days
    0000 3650
    1000 365
    2000 36.5
    3000 3.65

    If you plot these values you can see that there's an inverse exponential correlation between NAND wear and data retention.
  • Solid State Brain - Tuesday, March 4, 2014 - link

    I realized too late, after clicking "reply", that for the 2000 cycles datapoint I used about one month of time instead of 3 months. It should be 90 days. It doesn't affect the point I was making, though.
  • Jflachs - Saturday, March 1, 2014 - link

    So um, the 840 Pro is still the best SSD then, right?
  • emn13 - Sunday, March 2, 2014 - link

    None of Samsung's drives have any form of power loss protection, so unless you really need that last bit of performance, I'd avoid them, especially since there are cheaper drives that do have that protection.

    If you really do need top-of-the line performance, well, then your choice becomes considerably harder.
  • amddude10 - Friday, November 28, 2014 - link

    Power-loss protection isn't very important in a laptop, or if someone has a good UPS on a desktop, or at least that's what I would think, so in those cases Samsung's drives look very good indeed.
