NAND Lesson: Why Die Capacity Matters

SSDs are basically just huge RAID arrays of NAND. A single NAND die isn't very fast, but when you put a dozen or more of them in parallel, the performance adds up. Modern SSDs usually have between 8 and 64 NAND dies depending on the capacity, and the rule of "the more, the better" applies here, at least up to a point. (Controllers are designed for a certain number of NAND dies, so too many dies can actually hurt performance because the controller has more pages and blocks to track and process.) But die-level parallelism is only part of the big picture; it all starts inside the die.

Meet the inside version of our Mr. NAND die. Each die is usually divided into two planes, which are further divided into blocks, which are in turn divided into pages. In the early days of NAND there were no planes, just blocks and pages, but as die capacities increased the manufacturers had to find a way to get more performance out of a single die. The solution was to divide the die into two planes, which can be read from or written to (nearly) simultaneously. Without planes you can only read or program one page per die at a time; two-plane reading/programming allows two pages to be read or programmed at the same time.
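If you want to put rough numbers on that hierarchy, here is a small Python sketch. Only the 16KB page size and the two planes come from this article; the pages-per-block and blocks-per-plane figures are my own assumptions, picked so that everything adds up to a 128Gbit (16GB) die.

    # Hypothetical geometry for one NAND die (die -> plane -> block -> page).
    # Only the page size and plane count are from the article; the rest are
    # assumed values chosen so the numbers add up to a 128Gbit (16GB) die.
    PAGE_SIZE_KB     = 16     # smallest programmable unit
    PAGES_PER_BLOCK  = 256    # assumed; a block is the smallest erasable unit
    BLOCKS_PER_PLANE = 2048   # assumed
    PLANES_PER_DIE   = 2      # two planes can be read/programmed (nearly) at once

    die_kb = PAGE_SIZE_KB * PAGES_PER_BLOCK * BLOCKS_PER_PLANE * PLANES_PER_DIE
    print(die_kb // (1024 * 1024), "GB per die")          # 16 GB, i.e. 128Gbit

    # One-plane access moves one page at a time; two-plane access moves one
    # page per plane, doubling the data handled per operation.
    print(PAGE_SIZE_KB * PLANES_PER_DIE, "KB per two-plane operation")   # 32 KB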

The reason I said "nearly" above is that programming the NAND involves more than just the program time itself. There is latency from the command, address and data inputs, which is marginal compared to the program time, but with two-plane programming it takes twice as long (you still have to send all the necessary commands and addresses separately for both soon-to-be-programmed pages).

I did some rough calculations based on the data I have (though to be honest, it's probably not enough to make my calculations bulletproof) and it seems that the two-plane programming penalty is about 2% compared to two individual dies (i.e. it takes about 2% longer to program two pages with two-plane programming than with two individual dies). In other words, we can conclude that two-plane programming gives us roughly twice the throughput of one-plane programming.
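Here is a minimal sketch of where that ~2% comes from. The 1600µs program time is the figure used later in this article; the ~33µs to clock in the command, address and 16KB of data is my own estimate for a fast ONFI channel, and I'm assuming the two individual dies sit on separate channels so their inputs can overlap.

    # Two individual dies (separate channels): the inputs overlap, so the pair
    # of pages finishes after one input time plus one program time.
    # Two-plane programming (one die, one channel): the two inputs are
    # serialized before the shared program operation starts.
    t_prog_us = 1600   # page program time (see the M500 numbers below)
    t_io_us   = 33     # assumed command/address/data input time per page

    two_dies_us   = t_io_us + t_prog_us
    two_planes_us = 2 * t_io_us + t_prog_us
    penalty = (two_planes_us / two_dies_us - 1) * 100
    print(round(penalty, 1), "% longer with two-plane programming")   # ~2%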

"Okay," you're thinking, "that's fine and all, but what's the point of this? This isn't a new technology and has nothing to do with the M550!" Hold on, it'll make sense as you read further.

Case: M500

                      M550 128GB      M500 120GB
NAND Die Capacity     64Gbit (8GB)    128Gbit (16GB)
NAND Page Size        16KB            16KB
Sequential Write      350MB/s         130MB/s
4KB Random Write      75K IOPS        35K IOPS

The Crucial M500 was the first client SSD to use 128Gbit-per-die NAND. That allowed Crucial to go beyond 512GB without sacrificing performance, but it also meant a performance hit at the smaller capacities. As mentioned many times before, the key to SSD performance is parallelism, and when the die capacity doubles, the parallelism at a given drive capacity is cut in half. For the 120/128GB model this meant that instead of the sixteen dies it would have had with 64Gbit NAND, it only had eight 128Gbit dies.

It takes 1600µs to write 16KB (one page) to Micron's 128Gbit NAND. Convert that to throughput and you get 10MB/s per die. That is the simple version, though, and not exactly accurate. With eight dies the total write throughput would be only 80MB/s, yet the 120GB M500 is rated at 130MB/s. The big picture is more than just the program time: in reality you have to take into account the interface latency as well as the gains from two-plane programming and cache mode (the command, address and data latches are cached, so there is no need to wait for them between program operations).
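Here is that naive math as a quick sanity check:

    # One 16KB page per 1600us per die, and eight 128Gbit dies in the 120GB M500.
    page_kb   = 16
    t_prog_ms = 1.6
    dies      = 8          # 120GB drive / 16GB per die

    per_die_mb_s = page_kb / t_prog_ms   # KB per ms is the same as MB/s: 10 MB/s
    print(per_die_mb_s * dies, "MB/s")   # 80 MB/s with plain one-plane programming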

Example of cache programming

As I described above, two-plane programming gives us roughly twice the throughput of one-plane programming. Instead of writing one 16KB page in 1600µs, we are able to write two pages, 32KB of data in total, which doubles our throughput from 80MB/s to 160MB/s. There is some overhead from the commands, as the picture above shows, but thankfully today's interfaces are fast enough that it amounts to only a few percent, so in the real world the usable throughput should be around 155MB/s. The 120GB M500 manages around 140MB/s in sequential write, so 155MB/s of NAND write throughput sounds reasonable given that there is always some additional latency from channel and die switching. Program times are also averages that vary slightly from die to die, and it's possible that the set program times are actually slightly over 1600µs to make sure all dies meet the criteria.
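And the back-of-the-envelope version of that estimate, with the remaining command/interface overhead assumed to be about 3% (my guess; the only thing we really know is that it is small):

    # Two-plane programming: 32KB per die per 1600us, eight dies in parallel.
    page_kb, planes, dies, t_prog_ms = 16, 2, 8, 1.6
    raw_mb_s = page_kb * planes * dies / t_prog_ms   # 256 KB per 1.6 ms = 160 MB/s

    overhead = 0.03    # assumed command/address overhead, "a few percent"
    print(round(raw_mb_s * (1 - overhead)), "MB/s usable")   # ~155 MB/s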

Case: M550

While the M500 used solely 128Gbit NAND, Crucial is bringing back the 64Gbit die for the 128GB and 256GB M550s. The switch means twice the number of dies and, as we've now learned, that means twice the performance. This is actually Micron's second-generation 64Gbit 20nm NAND with a 16KB page size, similar to their 128Gbit NAND. The larger page size is needed for write throughput (roughly a 60% gain over an 8KB page), but it adds complexity to garbage collection and, if not implemented efficiently, can increase write amplification and hence lower endurance.
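To give an idea of where a ~60% figure like that can come from: if the older 8KB-page part needed roughly 1300µs per page (an assumed number, Micron hasn't published it) while the 16KB page takes 1600µs, the per-die gain works out to about 60%.

    # Doubling the page size doesn't double throughput because the bigger page
    # also takes longer to program. The 1300us for the 8KB page is an assumed
    # value picked for illustration; 1600us is the 16KB figure used above.
    old_mb_s = 8 / 1.3     # 8KB page  / 1300us -> ~6.2 MB/s per die
    new_mb_s = 16 / 1.6    # 16KB page / 1600us -> 10 MB/s per die
    print(round((new_mb_s / old_mb_s - 1) * 100), "% more write throughput")  # ~62%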

Micron wouldn't disclose the program time for this part, but I suspect there is some improvement over the original 128Gbit part. As process nodes mature, you're usually able to squeeze a little more performance (and endurance) out of the same chip, and I think that's what is happening here. To get ~370MB/s out of the 128GB M550, the program time would have to be around 1300-1400µs to be in line with the performance. It's certainly possible that something else is going on (better channel switching management, for instance), but it's clear that Crucial/Micron has been able to better optimize the NAND in the M550.
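Working backwards, here is the same arithmetic as before: sixteen 64Gbit dies with two-plane programming of 16KB pages, solved for the program time that would sustain roughly 370MB/s (overheads ignored).

    # 128GB M550: sixteen 64Gbit (8GB) dies, two planes, 16KB pages.
    target_mb_s = 370
    page_kb, planes, dies = 16, 2, 16
    kb_per_cycle = page_kb * planes * dies     # 512 KB programmed per cycle
    t_prog_ms = kb_per_cycle / target_mb_s     # KB / (KB per ms) -> ms
    print(round(t_prog_ms * 1000), "us program time needed")   # ~1384 us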

The point here was to give an idea of where the NAND performance comes from and why there is such a dramatic difference between the M550 and the M500. Ultimately, the NAND performance characteristics are something the manufacturers won't disclose, so the figures here may not be exact, but they should at least give a rough idea of what is happening at the low level.

Comments

  • anh14 - Thursday, March 20, 2014 - link

    You nailed it; all the differences are just academic. Everyone always talks about 'faster' but not about keeping storage from making your computer slower, i.e. taking your storage off the critical path.

    The only thing with the earlier SSDs is that some of them become excessively slow, and resetting them is a pain.

    I have a stack of Patriot 32GB drives that are collecting dust, and that OCZ Vertex 2 64GB SSD, interesting you mentioned it, I still have that one, but it's the backup C-drive sitting in my drawer (if things go south, I replace the Intel 520 I'm using now with this guy, which is still better than any slowpoke HDD).
  • nathanddrews - Tuesday, March 18, 2014 - link

    Coming from a HDD, any modern SSD will be subjectively comparable. Unless you've got a really read or write heavy task, it's really just splitting hairs. I've owned eight SSDs since I bought my first 80GB Intel SSD and I still have all of them in working order (only the largest/newest ones are boot drives, the rest are in external enclosures or serve as scratch drives). Anyway, until we get an interface with considerably higher speed (1-2GBps) and a cost per GB of $0.25 (2TB SSD for $500), the SSD market is just boring IMO.
  • hojnikb - Tuesday, March 18, 2014 - link

    We are already headed toward SATA Express, and flash is rapidly getting cheaper nowadays, so a 2TB SSD for a reasonable price is not that far away. Another die shrink (so we get 256Gbit dies) and maturing SATA Express controllers, and this will become a reality.
  • jospoortvliet - Thursday, March 20, 2014 - link

    Prices haven't really been going down all that fast lately so I wouldn't count on it anytime soon.
  • dishayu - Thursday, March 20, 2014 - link

    Sadly, I have to agree with this. I bought a 128GB Plextor M5 in mid-2012 for $84. That's still the price point where 128GB SSDs sell today.
  • hojnikb - Thursday, March 20, 2014 - link

    You do realize you got a heck of a deal on that SSD, right?
  • Death666Angel - Friday, March 21, 2014 - link

    Prices for small capacity SSDs have been relatively stable, but the 256/512 sizes have really dropped. I remember buying a 500GB Samsung 840 for 320€ (December 2012) and the equivalent Evo 500GB now costs 210€.
  • hojnikb - Thursday, March 20, 2014 - link

    I disagree.
    Just check m500 prices lately..
  • HisDivineOrder - Tuesday, March 18, 2014 - link

    Pricing is the thing they need. Performance gets ridiculous past a certain point with the given ports we have.
  • GASOLINENL - Tuesday, March 18, 2014 - link

    I read a very technical article about this. In theory, SSDs survive 75 years. Due to various factors, a very heavy user (lots of writing, etc.) will kill an SSD after 25(!) years. So they are great.
