During the hard drive era, the Serial ATA International Organization (SATA-IO) had no problems keeping up with the bandwidth requirements. The performance increases that new hard drives provided were always quite moderate because ultimately the speed of the hard drive was limited by its platter density and spindle speed. Given that increasing the spindle speed wasn't really a viable option for mainstream drives due to power and noise issues, increasing the platter density was left as the only source of performance improvement. Increasing density is always a tough job and it's rare that we see any sudden breakthroughs, which is why density increases have only given us small speed bumps every once in a while. Even most of today's hard drives can't fully saturate the SATA 1.5Gbps link, so it's obvious that the SATA-IO didn't have much to worry about. However, that all changed when SSDs stepped into the game.

SSDs no longer relied on rotational media for storage; they used NAND, a form of non-volatile solid-state memory, instead. With no moving parts, performance was no longer dictated by the laws of rotational physics: NAND brought dramatically lower latencies and opened the door to much higher throughput, putting pressure on the SATA-IO to increase the interface bandwidth. To illustrate how fast NAND really is, let's do a little calculation.

It takes 115 microseconds to read 16KB (one page) from IMFT's 20nm 128Gbit NAND. That works out to be roughly 140MB/s of throughput per die. In a 256GB SSD you would have sixteen of these, which works out to over 2.2GB/s. That's about four times the maximum bandwidth of SATA 6Gbps. This is all theoretical of course—it's one thing to dump data into a register but transferring it over an interface requires more work. However, the NAND interfaces have also caught up in the last couple of years and we are now looking at up to 400MB/s per channel (both ONFI 3.x and Toggle-Mode 2.0). With most client platforms being 8-channel designs, the potential NAND-to-controller bandwidth is up to 3.2GB/s, meaning it's no longer a bottleneck.
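
To make that arithmetic explicit, here is a minimal sketch of the same back-of-the-envelope calculation (the page size, read time, die count and per-channel figures are simply the numbers quoted above, treated as round values rather than measurements):

```python
# Back-of-the-envelope NAND bandwidth estimate using the figures quoted in the
# text (16KB page, 115us page read, 16 x 128Gbit dies in a 256GB drive,
# 8 channels at 400MB/s). Decimal MB/GB throughout.

PAGE_BYTES = 16 * 1024      # one 16KB NAND page
READ_TIME_S = 115e-6        # 115 microseconds to read a page from the array

per_die_mb_s = PAGE_BYTES / READ_TIME_S / 1e6
print(f"per die:            ~{per_die_mb_s:.0f} MB/s")                         # ~142 MB/s

dies = 256 // 16            # a 128Gbit die holds 16GB, so sixteen dies in a 256GB SSD
print(f"16 dies combined:   ~{per_die_mb_s * dies / 1000:.1f} GB/s")            # ~2.3 GB/s

channels, per_channel_mb_s = 8, 400   # ONFI 3.x / Toggle-Mode 2.0, 8-channel controller
print(f"NAND-to-controller: ~{channels * per_channel_mb_s / 1000:.1f} GB/s")    # 3.2 GB/s
```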

Given the speed of NAND, it's no surprise that the SATA interface quickly became a bottleneck. When Intel finally integrated SATA 6Gbps into its chipsets in early 2011, SandForce immediately came out with its SF-2000 series controllers and said, "Hey, we are already maxing out SATA 6Gbps; give us something faster!" The SATA-IO went back to the drawing board and realized that upping the SATA interface to 12Gbps would require several years of development, and that the cost of such rapid development would end up being very high. Another major issue was power: pushing the SATA protocol to 12Gbps would have meant a noticeable increase in power consumption, which is never good.

Therefore the SATA-IO had to look elsewhere in order to provide a fast yet cost-efficient standard in a timely manner. Given these restrictions, the best option was to build on an already existing interface, more specifically PCI Express, to speed up time to market as well as cut costs.

                      Serial ATA 2.0   Serial ATA 3.0   PCI Express 2.0            PCI Express 3.0
Link Speed            3Gbps            6Gbps            8Gbps (x2) / 16Gbps (x4)   16Gbps (x2) / 32Gbps (x4)
Effective Data Rate   ~275MBps         ~560MBps         ~780MBps / ~1560MBps       ~1560MBps / ~3120MBps (?)
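
A note on the Link Speed row: the SATA figures are raw line rates, while the PCIe figures appear to be already adjusted for line encoding (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0). The sketch below shows how those cells fall out of the per-lane signaling rates defined by the specifications; the Effective Data Rate row is lower still because of protocol overhead.

```python
# Post-encoding bandwidth of a serial link: per-lane transfer rate x lanes x encoding ratio.
def usable_gbps(raw_gtps: float, lanes: int, encoding: float) -> float:
    """Post-encoding bandwidth of a serial link in Gbps."""
    return raw_gtps * lanes * encoding

for name, raw, lanes, enc in [
    ("SATA 6Gbps ", 6, 1, 8 / 10),      # 8b/10b    -> 4.8 Gbps (~600MB/s before protocol overhead)
    ("PCIe 2.0 x2", 5, 2, 8 / 10),      # 8b/10b    -> 8.0 Gbps (the "8Gbps (x2)" cell)
    ("PCIe 2.0 x4", 5, 4, 8 / 10),      # 8b/10b    -> 16.0 Gbps
    ("PCIe 3.0 x2", 8, 2, 128 / 130),   # 128b/130b -> ~15.8 Gbps, rounded to 16Gbps in the table
    ("PCIe 3.0 x4", 8, 4, 128 / 130),   # 128b/130b -> ~31.5 Gbps, rounded to 32Gbps
]:
    print(f"{name}: {usable_gbps(raw, lanes, enc):.1f} Gbps of usable bandwidth")
```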

PCI Express makes a ton of sense. It's already integrated into all major platforms, and thanks to its scalability it offers room for future bandwidth increases when needed. In fact, PCIe is already widely used in the high-end enterprise SSD market because the SATA/SAS interface was never enough to satisfy enterprise performance needs in the first place.

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn't 100% efficient; based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link. Keep in mind that SATA 6Gbps isn't 100% efficient either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based, but we should start to see some PCIe 3.0 drives next year. We don't have efficiency numbers for 3.0 yet, but I would expect nearly twice the bandwidth of 2.0, making 1GB/s+ the norm.
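
For reference, here is how the ~780MB/s and the roughly 40% figures work out, using the ~78% efficiency quoted above and the effective SATA rate from the table:

```python
# PCIe 2.0 carries 500MB/s per lane after 8b/10b encoding; at ~78% observed
# efficiency a x2 link therefore lands at roughly 780MB/s.
pcie2_x2 = 0.78 * (2 * 500)         # ~780 MB/s real-world

sata6_effective = 560               # effective SATA 6Gbps rate from the table above

print(f"PCIe 2.0 x2 real-world:  ~{pcie2_x2:.0f} MB/s")
print(f"vs SATA 6Gbps effective: +{(pcie2_x2 / sata6_effective - 1) * 100:.0f}%")  # ~39%, i.e. "about 40%"
```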

But what exactly is SATA Express? Hop on to the next page to read more!

Comments

  • frenchy_2001 - Friday, March 14, 2014 - link

    No, it does not. It adds latency, which is the delay before any command is received. Speed stays the same, and unless your transmission depends on handshakes and verification and can block, latency is irrelevant.
    See the internet as a great example. Satellite gives you fast bandwidth (it can send a lot of data at a time), but awful latency (it takes seconds for the data to arrive).
    As one point of these new technologies is to add a lot of queuing, latency becomes irrelevant, as there is always some data to send...
  • nutjob2 - Saturday, March 15, 2014 - link

    You're entirely incorrect. Speed is a combination of both latency and bandwidth and both are important, depending on how the data is being used.

    Your dismissal of latency because "there is always data to send" is delusional. That's just saying that if you're maxing out the bandwidth of your link then latency doesn't matter. Obviously. But in the real world disk requests are small and intermittent and not large enough to fill the link, unless you're running something like a database server doing batch processing. As the link speed gets faster (exactly what we're talking about here) and typical data request sizes stay roughly the same, latency becomes a larger part of the time it takes to process a request.

    Perceived and actual performance on most computers is very sensitive to disk latency, since the disk link is the slowest link in the processing chain.
  • MrPoletski - Thursday, March 13, 2014 - link

    wait:
    by Kristian Vättö on March 13, 2014 7:00 AM EST

    It's currently March 13, 2014 6:38 AM EST - You got a time machine over at Anandtech?
  • Ian Cutress - Thursday, March 13, 2014 - link

    I think the webpage is in EDT now, but still says EST.
  • Bobs_Your_Uncle - Saturday, March 15, 2014 - link

    PRECISELY the point of Kristian's post. It's NOT a time machine in play, but rather the dramatic effects of reduced latency. (The other thing that happens is the battery in your laptop actually GAINS charge in such instances.)
  • mwarner1 - Thursday, March 13, 2014 - link

    The cable design, and especially its lack of power transmission, is even more short-sighted & hideous than that of the Micro-B USB3.0 cable.
  • 3DoubleD - Thursday, March 13, 2014 - link

    Agreed, what a terrible design. Not only is this cable a monster, but I can already foresee the slow and painful rollout of PCIe2.0 SATAe when we should be skipping directly to PCIe3.0 at this point.

    Also, the reasons given for needing faster SATA SSDs are sorely lacking. Why do we need this hideous connector when we already have PCIe SSDs? Plenty of laptop vendors are having no issue with this SATA bottleneck. I also debate whether a faster, more power hungry interface is actually better on battery life. The SSD doesn't always run at full speed when being accessed, so the battery life saved will be less than the 10 min calculated in the example... if not worse than the reference SATA3 case! And the very small number of people who edit 4k videos can get PCIe SSDs already.
  • DanNeely - Thursday, March 13, 2014 - link

    Blame Intel and AMD for only putting PCIe 2.0 on the southbridge chips that everything not called a GPU is connected to in consumer/enthusiast systems.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    A faster SSD does not mean higher power consumption. The current designs could easily go above 550MB/s if SATA 6Gbps wasn't bottlenecking, so a higher power controller is not necessary in order to increase performance.
  • fokka - Thursday, March 13, 2014 - link

    i think what he meant is that while the actual workload may be processed faster and an idle state is reached sooner on a faster interface, the faster interface itself uses more power than sata 6g. so the question now is whether the savings from the faster ssd outweigh the additional power consumption of the faster interface.
