This is shaping up to be the busiest month on the SSD front in ages. Intel released its new flagship SSD 730 series just a couple of weeks ago and there are at least two more releases coming in the next few weeks...but today it's Crucial's turn to up the ante.

Unlike many OEMs, Crucial has more or less had only one active series in its SSD portfolio at a time. A few years ago this approach made sense because the SSD market as a whole mainly focused on enthusiasts and there was no real benefit to a tiered lineup. As the market has matured and prices have dropped over time, we are now in a situation similar to other components: there is the high volume mainstream market where price is the key and the higher margin enthusiast/professional market where performance and features matter. Covering both of these markets with a single product is hard because in order to compete in price, it's usually necessary to use lower quality parts, which in turn affects performance and features.

With the M500, Crucial was mainly targeting the mainstream market. Performance was mostly better than in the m4 days but only mediocre compared to other SSDs on the market. The introduction of the likes of the SanDisk Extreme II, Seagate SSD 600, and OCZ Vector 150 has upped the ante even further in the enthusiast segment, and it has become clear that the M500 has no place there. To stay competitive in all product areas, Crucial is now launching the big brother to its M500: the M550.

EDIT: Just to clarify, the M500 will continue to be available and the M550 is merely a higher-performing option at a slightly higher price.

With 64Gbit NAND, 240/256GB was usually the sweet spot in terms of price and performance. That combination offered enough NAND dies to saturate the SATA 6Gbps interface as well as the controller's/firmware's potential, but with the M500 this was no longer the case due to the use of 128Gbit NAND. With a die of twice the capacity, only half as many dies are needed to build a 240/256GB SSD. As NAND parallelism is a major source of SSD performance, this meant a decrease in performance at 240/256GB; you now had to go to 480/512GB to get the same level of performance that 240/256GB offered with 64Gbit NAND.
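To put the parallelism point into numbers, here is a minimal sketch of the die-count arithmetic (purely illustrative, not anything published by Crucial) comparing how many dies a given raw capacity requires with 64Gbit versus 128Gbit NAND:

```python
GBIT_PER_GIB = 8  # 1GiB of NAND = 8Gbit


def die_count(raw_capacity_gib: int, die_capacity_gbit: int) -> int:
    """How many NAND dies are needed to reach a given raw capacity."""
    return raw_capacity_gib * GBIT_PER_GIB // die_capacity_gbit


for raw_gib in (128, 256, 512, 1024):
    print(f"{raw_gib:4d}GiB raw: {die_count(raw_gib, 64):3d} x 64Gbit dies "
          f"vs {die_count(raw_gib, 128):3d} x 128Gbit dies")

# 256GiB raw: 32 x 64Gbit dies vs 16 x 128Gbit dies
# -> a 240/256GB drive built on 128Gbit NAND has only half the dies to interleave across.
```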

The use of 128Gbit NAND was one of the main reasons for the M500's poor performance, and with competitors sticking with 64Gbit NAND, the decision backfired on Crucial in terms of performance (more on this later). Since it's not possible to magically decrease program times or add parallelism, Crucial has brought back 64Gbit NAND in the lower-capacity M550s. Here's how the new and old models compare:

Crucial M550 vs. Crucial M500
                     M550                            M500
Controller           Marvell 88SS9189                Marvell 88SS9187
NAND                 Micron 64/128Gbit 20nm MLC      Micron 128Gbit 20nm MLC
Capacities           128GB / 256GB / 512GB / 1TB     120GB / 240GB / 480GB / 960GB
Sequential Read      550MB/s                         500MB/s
Sequential Write     350 / 500MB/s                   130 / 250 / 400MB/s
4KB Random Read      90 / 95K IOPS                   62 / 72 / 80K IOPS
4KB Random Write     75 / 80 / 85K IOPS              35 / 60 / 80K IOPS
Endurance            72TB (~66GB/day)                72TB (~66GB/day)
Warranty             Three years                     Three years
(Performance figures scale with capacity; see the full M550 specifications below.)
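The endurance rating works out to the daily figure in parentheses when spread evenly over the three-year warranty; a quick back-of-the-envelope check (purely illustrative):

```python
# 72TB of rated host writes spread evenly across the three-year warranty
rated_writes_tb = 72
warranty_days = 3 * 365

gb_per_day = rated_writes_tb * 1000 / warranty_days
print(f"~{gb_per_day:.0f} GB/day")  # ~66 GB/day, matching the spec sheet
```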

The 128GB and 256GB models are now equipped with 64Gbit-per-die NAND, while the 512GB and 1TB models use the same 128Gbit NAND as the M500. What this means is that the 128GB and 256GB models are much more competitive in performance because their die count is twice that of the same-capacity M500 drives. You get roughly the same performance from both the 256GB and 512GB models (unlike the nearly 50% drop in write performance seen in the M500), and the 128GB model actually beats the 240GB M500 in every metric. There is obviously some firmware tweaking involved as well, and the bigger capacities get a performance bump too, although it's much more moderate compared to the smaller capacities.

Another difference is the controller. Compared to the NAND, this isn't as substantial a change because the Marvell 9189 is more of an updated version of the 9187; the only major upgrades are support for LPDDR and better optimization for DevSleep, both of which help with power consumption and can hence extend battery life.

Crucial M550 Specifications
Capacity              128GB         256GB         512GB         1TB
Controller            Marvell 88SS9189
NAND                  Micron 20nm MLC: 64Gbit (128GB & 256GB), 128Gbit (512GB & 1TB)
Cache (LPDDR2-1066)   512MB         512MB         512MB         1GB
Sequential Read       550MB/s       550MB/s       550MB/s       550MB/s
Sequential Write      350MB/s       500MB/s       500MB/s       500MB/s
4KB Random Read       90K IOPS      90K IOPS      95K IOPS      95K IOPS
4KB Random Write      75K IOPS      80K IOPS      85K IOPS      85K IOPS

As with the earlier drives, Crucial remains Micron's consumer brand, whereas OEM drives will be sold under the Micron name. It's just a matter of branding; there are no differences between the retail and OEM drives other than an additional 64GB model for OEMs.

Crucial switches back to binary capacities with the M550, and the 1TB model actually gives you the full 1024GB of usable space (Crucial lists it as 1TB for marketing reasons; the drive still carries 1024GiB of raw NAND). The extra capacity doesn't come from a reduction in over-provisioning but from a more optimized use of RAIN (Redundant Array of Independent NAND).

RAIN is similar to SandForce's RAISE: a portion of the NAND is dedicated to parity. Almost every manufacturer does this at some level nowadays since NAND error and failure rates keep increasing as we move to smaller lithographies. When the M500 came out, the 128Gbit NAND was very new, and Crucial/Micron wanted to play it safe by dedicating quite a bit of NAND to RAIN to make sure the brand new NAND wouldn't cause reliability issues down the road. A lot happens in a year in terms of process maturity, and Crucial/Micron are now confident that they can offer the same level of endurance and reliability with less parity. The parity ratio in the M550 is 127:1, meaning that for every 127 bits of data there is one bit of parity, so 1/128 of the raw NAND is set aside. That roughly translates to 1GiB of NAND reserved for parity in the 128GB M550, and 2GiB, 4GiB, and 8GiB for the higher capacities.
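As a sanity check on those figures, here is a minimal sketch of the parity arithmetic under the stated 127:1 ratio (one parity unit per 127 data units, i.e. 1/128 of the raw NAND):

```python
PARITY_FRACTION = 1 / 128  # 127:1 data-to-parity ratio

for raw_gib in (128, 256, 512, 1024):  # raw NAND at each capacity point
    rain_gib = raw_gib * PARITY_FRACTION
    print(f"{raw_gib:4d}GiB raw -> ~{rain_gib:.0f}GiB reserved for RAIN parity")

# 128GiB -> ~1GiB, 256GiB -> ~2GiB, 512GiB -> ~4GiB, 1024GiB -> ~8GiB
```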

Feature-wise, the M550 adopts everything from the M500. There is TCG Opal 2.0 and IEEE-1667 support, which are the requirements for Microsoft's eDrive encryption. Along with that comes full power-loss protection thanks to onboard capacitors that provide the power needed to complete in-progress NAND writes in the event of a sudden power loss.

Update: Micron just told us that in addition to the capacitors there is some NAND-level technology that makes the M550 even more robust against power losses. We don't have the details yet, but you'll be the first to know once we get them.

NAND Configurations
                            128GB        256GB        512GB        1TB
Raw NAND Capacity           128GiB       256GiB       512GiB       1024GiB
RAIN Allocation             ~1GiB        ~2GiB        ~4GiB        ~8GiB
Over-Provisioning           6.1%         6.1%         6.1%         6.1%
Usable Capacity             119.2GiB     238.4GiB     476.8GiB     953.7GiB
# of NAND Packages          16           16           16           16
# of NAND Die per Package   1 x 8GiB     2 x 8GiB     2 x 16GiB    4 x 16GiB
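The table's numbers line up if over-provisioning is counted against the NAND left after RAIN, and the usable capacity is simply the marketed decimal gigabytes expressed in GiB. The sketch below is our reading of how the figures relate, not Crucial's published formula:

```python
GIB = 2**30

# (marketed decimal GB, raw NAND in GiB, RAIN allocation in GiB)
models = [(128, 128, 1), (256, 256, 2), (512, 512, 4), (1024, 1024, 8)]

for marketed_gb, raw_gib, rain_gib in models:
    usable_gib = marketed_gb * 1e9 / GIB   # decimal GB expressed in GiB
    after_rain = raw_gib - rain_gib        # NAND left once parity is set aside
    op_pct = (after_rain - usable_gib) / after_rain * 100
    print(f"{marketed_gb:4d}GB: usable {usable_gib:6.1f}GiB, over-provisioning {op_pct:.1f}%")

# 128GB: usable  119.2GiB, over-provisioning 6.1%  (matches the table above)
```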

 

Test System

CPU                  Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard          ASRock Z68 Pro3
Chipset              Intel Z68
Chipset Drivers      Intel 9.1.1.1015 + Intel RST 10.2
Memory               G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card           Palit GeForce GTX 770 JetStream 2GB GDDR5
                     (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers        NVIDIA GeForce 332.21 WHQL
Desktop Resolution   1920 x 1080
OS                   Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

Comments

  • hojnikb - Thursday, March 20, 2014 - link

    You're not alone. I myself am skeptical about TLC as well, seeing how badly it performed in pretty much every single non-SSD device I've had. While Samsung has really gone all out on TLC and used lots of tricks to squeeze every bit of performance they can out of it, I still don't believe in it.
    While endurance seems to be okay for most users, one thing does come to mind and no one seems to be testing it: data retention.
  • Cerb - Sunday, March 23, 2014 - link

    Any SMART reading program will work. There are tons of them, even included in some OSes.
    Any secure erase program will work. There aren't tons of free ones, but they exist...or you can fart around with hdparm (frustrating, to say the least, but I was able to unbrick a SF drive that way, once).

    Software just for their SSDs is an extra cost that brings very little value, but has to be made up by the gross profits of the units sold. Since there aren't special diags to run, beyond checking SMART stats, and seeing if it's bricked, for starting an RMA, why bother with software, beyond the minimum needed for performing firmware updates?
  • CiccioB - Wednesday, March 19, 2014 - link

    You're right.
    Synthetic tests are interesting up to a point.
    More real-life ones would be much more appreciated. For example, many SSDs are used as boot drives. How much difference does it really make to use a cheap SSD versus a much more expensive one?
    How does it change copying a folder of images of a few MB each (think of an archive of RAW pictures)? How much faster does a given game level load, when on a mechanical HDD it might take several seconds?

    Having bars and graphs is nice. Having them applied to real-life usage, where other overheads and bottlenecks apply, would be better, though.
  • Lucian2244 - Wednesday, March 19, 2014 - link

    I second this, would be interesting to know.
  • hojnikb - Wednesday, March 19, 2014 - link

    +1 for that.
    Fancy numbers are fine and all, but mean nothing to lots of people.
  • HammerStrike - Wednesday, March 19, 2014 - link

    IMO, the biggest advancement in SSDs over the last year is not the performance increases but the price drops, which have been spearheaded by Crucial and the M500. It seems odd to me that synthetic performance is given so much weight in the reviews while the advent of "affordable" 240 and 500GB drives is somewhat subdued. The vast majority of consumer applications are not going to see any real-life difference between an M5xx and a faster drive, but the M5xx is either going to get you more storage at the same price, or let you save a chunk of change that can be better deployed to a different part of your system where you will notice the impact, such as your GPU. That, along with MLC NAND, power loss protection and the security features, makes them no-brainers for gaming or general-purpose rigs.
  • CiccioB - Wednesday, March 19, 2014 - link

    I agree.
    I recently bought a Crucial M500 240GB for a bit more than 100€, which is going to replace an older Vertex 3 60GB (75€ at the time) that works perfectly but has become a little small.
    It's a device that is going to be used mainly for booting and application launching (data is on a separate mechanical disk), so write performance is of no importance.
    Considering that when you do real work (not simply running benchmarks) you copy from something to your SSD (or vice versa), you know the bottleneck is not your SSD but the other source/destination (which in my case can also be somewhere on gigabit Ethernet).
    With such a "low tier" SSD, boot times are about 10 seconds and application launching is immediate (LibreOffice as well, even without the pre-caching daemon running). I invested the extra money a faster SSD would have cost in 8GB more RAM (16GB total) so that I can use a RAM disk when access speed is really critical.
  • Kristian Vättö - Wednesday, March 19, 2014 - link

    I've been playing around with real world tests quite a bit lately and it's definitely something that we'll be implementing in the (near) future. However, real world testing is far more complicated than most think. Creating a test suite that's realistic is close to impossible because in reality you are doing more than just one thing at a time. In tests you can't do that because one slight change or issue with a background task can ruin all your results. The number of variables in real world testing is just insane and it's never possible to guarantee that the results are fully reproducible.

    After all, the most important thing is that our tests are reproducible and results accurate. And I'm not saying this to disregard real world tests but because I've faced this when running these tests. It's not very pleasing when some random background task ruins your hour of work and you have to start over hoping that it won't do it again. And then repeat this with 50 or so drives.

    That said, I think I've been able to set up a pretty good suite of real world benchmarks. I don't want to just clone a Windows install, install a few apps and then load some games because that's not realistic. In the real world you don't have a clean install of Windows and plenty of free space to speed everything up. I don't want to share the details of the test yet because there are still so many tests to run. When I've got everything ready, you'll see the results.

    What I can say already is that IO consistency isn't just FUD: it can make a difference in basic everyday tasks. How big the difference is in the real world is another question, and it's certainly not as big as what benchmarks show, but that doesn't mean it's meaningless.
  • CiccioB - Thursday, March 20, 2014 - link

    Well, don't misunderstand what I wrote.
    I didn't say those IO consistency tests are FUD. It's just that they do not tell the entire truth.
    It's like benchmarking a GPU only for, let's say, pixel fill rate and ignoring all the rest.
    High IOPS don't really tell how well an SSD is going to work.
    I already appreciate that the tests are done on SSDs that are not secure erased every time the tests are performed, as is done in other reviews that show only the best performance at the beginning of the device's life. In fact, many ignore the fact that an SSD can become even 1/3 slower (in real-life usage, not only in tests) once it starts reusing cells, something that secure-erased SSDs never show as a problem.

    Real usage tests are what matter in the end, even if the test may be compromised by background tasks. Users do not use their SSD in an ideal world where the OS does nothing and everything is optimized to run a benchmark. Synthetic tests are good at showing the potential of a device, but real-life usage is a completely different thing. Even loading a game level is subject to many variables, but it's meaningless to say that the SSD could load the level in 5 seconds based on its performance on paper when in reality, with all the other variables that affect performance in real life, it takes 20 seconds. And it would be quite useful to know that, for example, the fastest SSD on the market, which may cost twice as much as these "mainstream SSDs with mediocre results in synthetic tests", in reality loads the same game level in 18 seconds instead of 20, even if on paper it has twice the IO performance and can do burst transfers at twice the speed.
    It's not necessary to create a test where disk-intensive applications are used. Even the low-end user starts OpenOffice/LibreOffice (or even MS Office if they are lucky and rich enough) once in a while, and the loading times of those elephants may be more interesting than knowing that the SSD can do 550MB/s in sequential reads at QD32 with 128K blocks (which in reality never happens in a consumer environment). Concurrent accesses may also be an interesting test, as in real-life usage it may be possible to do many things at the same time (copying to/from the SSD while saving a big document in InDesign or loading a big TIFF/RAW in Photoshop).
    Some real-life tests may be created to show where a particular high-performance SSD makes a difference and to measure that difference in order to evaluate the price gap with better, more useful numbers in hand.
    But simply disregarding real-life tests of everyday usage because they are subject to many variables is, in the end, not telling the entire truth about how a cheap SSD compares to a more expensive one.

    You could even test extreme cases where, like me, a RAM disk is used instead of a mechanical disk or an SSD for heavy loads. That would show how using those different storage devices really impacts productivity, and whether in the end it is really useful to invest in a more performant (and expensive) SSD or in a cheap one plus more RAM.
    If the aim is to guide users to buy the best they can to do their work faster, I think it could be quite interesting to do this kind of comparison.
  • hojnikb - Wednesday, March 19, 2014 - link

    Performance consistency is actually quite important. While most modern controllers are pretty much OK for an average consumer, there were times when consistency was utter garbage and noticeable to your average user as well (Phison and JMicron are fine examples of that -- Crucial's v4, for example, frequently locks up if you write a lot, even with the latest firmware, and write speed consistently drops to near zero).
