When the PCI Special Interest Group (PCI-SIG) first announced PCIe 4.0 a few years back, the group made it clear that they were not just going to make up for lost time after PCIe 3.0, but that they were going to accelerate their development schedule to beat their old cadence. Since then the group has released the final versions of the 4.0 and 5.0 specifications, and now, with 5.0 only weeks old, they are announcing today that they are already hard at work on the next version of the specification, PCIe 6.0. True to the cadence of PCIe development, the forthcoming standard will once again double the bandwidth of a PCIe slot – a x16 slot will be able to hit a staggering 128GB/sec – with the group expecting to finalize the standard in 2021.

As with the PCIe iterations before it, the impetus for PCIe 6.0 is simple: hardware vendors are always in need of more bandwidth, and the PCI-SIG is looking to stay ahead of the curve by providing timely increases. Furthermore, in the last few years the group's efforts have taken on increased importance, as other major interconnect standards are building off of PCIe. CCIX, Intel's CXL, and other interfaces all extend PCIe, and will in turn benefit from PCIe improvements. So PCIe speed boosts serve as the core of building ever-faster (and ever more interconnected) systems.

PCIe 6.0, in turn, is easily the most important – and most disruptive – update to the PCIe standard since PCIe 3.0 almost a decade ago. To be sure, PCIe 6.0 remains backwards compatible with the five versions that have preceded it, and PCIe slots aren't going anywhere. But with PCIe 4.0 & 5.0 already imposing very tight signal requirements that have led to ever-shorter trace length limits, simply doubling the transfer rate yet again isn't necessarily the best way forward. Instead, the PCI-SIG is going to upend the signaling technology entirely, moving away from the Non-Return-to-Zero (NRZ) signaling used since the beginning, and over to Pulse-Amplitude Modulation 4 (PAM4).

At a very high level, what PAM4 does versus NRZ is to take a page from the MLC NAND playbook, and double the number of electrical states a single cell (or in this case, transmission) will hold. Rather than traditional 0/1 high/low signaling, PAM4 uses four signal levels, so that a single symbol can encode one of four possible two-bit patterns: 00/01/10/11. This allows PAM4 to carry twice as much data as NRZ without having to double the transmission bandwidth, which for PCIe 6.0 would have meant a frequency around 30GHz(!).


NRZ vs. PAM4 (Base Diagram Courtesy Intel)
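
To make the difference concrete, here's a minimal, purely illustrative sketch of the two encodings in Python. The specific voltage levels and the Gray-coded bit ordering are assumptions made for demonstration purposes, not values taken from the PCIe 6.0 specification:

```python
# Toy symbol mappers showing why PAM4 carries twice the data of NRZ at the
# same symbol rate. Levels and bit ordering are illustrative assumptions only.

def nrz_encode(bits):
    """One bit per symbol: 0 -> low level, 1 -> high level."""
    return [1.0 if b else -1.0 for b in bits]

def pam4_encode(bits):
    """Two bits per symbol, mapped onto four amplitude levels (Gray-coded here)."""
    levels = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [levels[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
print(len(nrz_encode(data)))   # 8 symbols on the wire
print(len(pam4_encode(data)))  # 4 symbols -- same data in half the symbol periods
```

Same payload, half the symbol periods – which is exactly how the PCI-SIG can double throughput without pushing the channel toward that ~30GHz figure.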

PAM4 itself is not a new technology, but up until now it’s been the domain of ultra-high-end networking standards like 200G Ethernet, where the amount of space available for more physical channels is even more limited. As a result, the industry already has a few years of experience working with the signaling standard, and with their own bandwidth needs continuing to grow, the PCI-SIG has decided to bring it inside the chassis by basing the next generation of PCIe upon it.

The tradeoff for using PAM4 is of course cost. Even with its greater bandwidth per Hz, PAM4 currently costs more to implement at pretty much every level, from the PHY on up. Which is why it hasn't taken the world by storm, and why NRZ continues to be used elsewhere. The sheer deployment scale of PCIe will of course help a lot here – economies of scale still count for a lot – but it will be interesting to see where things stand in a few years, once PCIe 6.0 is in the middle of ramping up.

Meanwhile, not unlike the MLC NAND in my earlier analogy, the additional signal states make a PAM4 signal itself more fragile than an NRZ signal. And this means that, along with PAM4, for the first time in PCIe's history the standard is also getting Forward Error Correction (FEC). Living up to its name, Forward Error Correction is a means of correcting signal errors in a link by supplying a constant stream of error correction data, and it's already commonly used in situations where data integrity is critical and there's no time for a retransmission (such as DisplayPort 1.4 w/DSC). While FEC hasn't been necessary for PCIe until now, PAM4's fragility is going to change that. The inclusion of FEC shouldn't make a noticeable difference to end-users, but for the PCI-SIG it's another design requirement to contend with. In particular, the group needs to make sure that their FEC implementation is low-latency while still being appropriately robust, as PCIe users won't tolerate a significant increase in PCIe's latency.
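
For readers who haven't run into FEC before, the toy example below illustrates the general principle with a classic Hamming(7,4) code. To be clear, this is not the FEC scheme PCIe 6.0 will actually use – it's just a minimal sketch of how redundant check bits let the receiver repair a corrupted bit on its own, without requesting a retransmission:

```python
# Conceptual FEC sketch only: a Hamming(7,4) single-error-correcting code.
# Extra check bits travel alongside the data so one flipped bit can be fixed
# at the receiver without a resend. PCIe 6.0's actual FEC is a different,
# lower-overhead scheme.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword, possibly with one flipped bit -> corrected 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-based position of the bad bit, 0 if clean
    if syndrome:
        c = c[:]                       # don't mutate the caller's list
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                       # simulate a single-bit error on the link
print(hamming74_decode(codeword))      # -> [1, 0, 1, 1], recovered without a resend
```

The cost is visible right in the example: seven bits on the wire for four bits of payload, plus decode work at the receiver – which is why keeping the real FEC lightweight and low-latency matters so much to the PCI-SIG.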

The upshot of the switch to PAM4, then, is that by increasing the amount of data transmitted without increasing the frequency, the signal loss requirements won't go up. PCIe 6.0 will have the same 36dB loss budget as PCIe 5.0, meaning that while trace lengths aren't officially defined by the standard, a PCIe 6.0 link should be able to reach just as far as a PCIe 5.0 link. Which, coming from PCIe 5.0, is no doubt a relief to vendors and engineers alike.

Even with these changes, however, as previously mentioned PCIe 6.0 is fully backwards compatible with earlier standards, and this goes for both hosts and peripherals. This means that, to a certain extent, hardware designers are essentially going to be implementing PCIe twice: once for NRZ, and again for PAM4. This will be handled at the PHY level, and while it's not a true doubling of logic (what is NRZ but PAM4 with half as many signal levels?), it does mean that backwards compatibility is a bit more work this time around. Discussing the matter in today's press conference, though, the PCI-SIG didn't sound terribly concerned about the challenges there, as PHY designers have proven quite capable of handling similar transitions (e.g. Ethernet).

PCI Express Bandwidth (Full Duplex)

Slot Width | PCIe 1.0 (2003) | PCIe 2.0 (2007) | PCIe 3.0 (2010) | PCIe 4.0 (2017) | PCIe 5.0 (2019) | PCIe 6.0 (2021)
x1         | 0.25GB/sec      | 0.5GB/sec       | ~1GB/sec        | ~2GB/sec        | ~4GB/sec        | ~8GB/sec
x2         | 0.5GB/sec       | 1GB/sec         | ~2GB/sec        | ~4GB/sec        | ~8GB/sec        | ~16GB/sec
x4         | 1GB/sec         | 2GB/sec         | ~4GB/sec        | ~8GB/sec        | ~16GB/sec       | ~32GB/sec
x8         | 2GB/sec         | 4GB/sec         | ~8GB/sec        | ~16GB/sec       | ~32GB/sec       | ~64GB/sec
x16        | 4GB/sec         | 8GB/sec         | ~16GB/sec       | ~32GB/sec       | ~64GB/sec       | ~128GB/sec

Putting all of this in practical terms, PCIe 6.0 will deliver anywhere from ~8GB/sec for a x1 slot up to ~128GB/sec for a x16 slot (e.g. an accelerator or video card). For comparison's sake, 8GB/sec is as much bandwidth as a full PCIe 2.0 x16 slot, so over the last decade and a half the number of lanes required to deliver that much bandwidth has been cut to 1/16th the original amount.
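
As a rough sanity check on the table above, the sketch below recomputes the per-direction figures from each generation's raw transfer rate and line-code efficiency. Treating PCIe 6.0 as having no encoding overhead is a simplification on my part – its additional framing/FEC overhead is ignored – which is one reason the official figures carry a "~":

```python
# Approximate per-direction PCIe bandwidth per slot width, computed from the
# raw transfer rate (GT/s per lane) and line-code efficiency. PCIe 6.0's
# framing/FEC overhead is not modeled, so its numbers are slightly optimistic.

GENS = {
    # generation: (GT/s per lane, payload bits per transferred bit)
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
    "6.0": (64.0, 1.0),       # PAM4; framing/FEC overhead ignored (assumption)
}

def bandwidth_gb_per_s(gen, lanes):
    rate_gt_s, efficiency = GENS[gen]
    return rate_gt_s * efficiency * lanes / 8  # bits -> bytes

for gen in GENS:
    row = [round(bandwidth_gb_per_s(gen, width), 2) for width in (1, 2, 4, 8, 16)]
    print(f"PCIe {gen}: {row}")  # e.g. PCIe 6.0: [8.0, 16.0, 32.0, 64.0, 128.0]
```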

Overall, the PCI-SIG has set a rather aggressive schedule for this standard: the group has already been working on it, and would like to finalize the standard in 2021, two years from now. This would mean that the PCI-SIG will have improved PCIe's bandwidth eight-fold over a five-year span, going from PCIe 3.0 and its 8 GT/sec rate (still the latest standard as of 2016) to 4.0 and 16 GT/sec in 2017, 5.0 and 32 GT/sec in 2019, and finally 6.0 and 64 GT/sec in 2021. That's less than half the time it took to achieve a similar eight-fold increase going from PCIe 1.0 (2003) to PCIe 4.0 (2017).

As for end users and the general availability of PCIe 6.0 products, the PCI-SIG officially defers to the hardware vendors here, but the launch cycles of PCIe 4.0 and 5.0 offer a good guide, and PCIe 6.0 will likely follow in those same footsteps. PCIe 4.0, which was finalized in 2017, is only now showing up in mass market hardware in 2019, and meanwhile Intel has already committed to PCIe 5.0-capable CPUs in 2021. So we may see PCIe 6.0 hardware as soon as 2023, assuming development stays on track and hardware vendors move as quickly to implement it as they have with earlier standards. For client/consumer use, however, it bears pointing out that with the rapid development pace of PCIe – and the higher costs that PAM4 will incur – just because the PCI-SIG develops 6.0 doesn't mean it will show up in client devices any time soon; economics and bandwidth needs will drive that decision.

Speaking of which, as part of today's press conference the group also gave a quick update on PCIe compliance testing and hardware rollouts. PCIe 4.0 compliance testing will finally kick off in August of this year, which should further accelerate 4.0 adoption and hardware support. Meanwhile, PCIe 5.0 compliance testing is still under development; like 4.0, once 5.0 compliance testing becomes available it should open the floodgates to much faster adoption there as well.

Source: PCI-SIG

Comments

  • Targon - Thursday, June 20, 2019 - link

    Apple needs to get away from Intel, which has shown a complete disregard for security. If I were the sort of person to write malware, I'd have a field day with all the security problems with Intel processors, knowing that too many people don't install updates.
  • mode_13h - Thursday, June 20, 2019 - link

    > Apple needs to get away from Intel

    That might be about to happen.
  • ats - Wednesday, June 19, 2019 - link

    That doesn't actually mean anything. Technically, you can say the same thing for PCI-X from before PCIe. Technically, Ethernet also supports cache coherency. There are such systems even in existence, which is more than you can say for Gen-Z. Gen-Z will have its uses, but marketing hype isn't product reality.
  • ats - Wednesday, June 19, 2019 - link

    Gen-Z is pretty much vapor and pretty much useless for main memory. And no one has demonstrated this mythical <100ns latency for Gen-Z.
  • Luffy1piece - Wednesday, June 19, 2019 - link

    Why the dislike for Gen-Z?

    I'm basing my opinion on https://www.youtube.com/watch?v=OeJxZMTgCcE&in...
    (Gen-Z starts at 8 minutes) and many other sources. Curious how you came to your conclusion

    Also on the topic of latency, I think even a figure around 100ns will be a bottleneck esp. with the upcoming NVRAM technologies. I hope they can improve it
  • ats - Thursday, June 20, 2019 - link

    Gen-Z is largely a solution looking for a problem. Nothing they're really trying to solve is actually worth solving. Disaggregation is just a bad idea, and completely counter to the actual trends in technology and power – especially with respect to memory.

    I base my opinion on having been down this road multiple times before and having designed actual high speed interconnects used in hundreds of millions of computers. Gen-Z is trying to push roughly the same koolaid that Future I/O and Next Gen I/O pushed both before and after they merged into InfiniBand. In the end, IB was just lower-latency networking (despite plenty of work to make it work in other areas).

    I/O will still connect to whatever the PCI SIG goes with, since that is where the installed base will go. Gen-Z might have a life as a replacement for IB in the networking niche, but that's doubtful – Ethernet has learned the lesson well and is now working on latency to counter just that. More likely, Gen-Z might find a niche as a standardized side band for accelerators if it is lucky. Everything else basically died when Intel announced CXL as effectively an extension of PCIe.

    Real NVRAM technologies will either have a dedicated memory bus interface, share an existing memory bus interface, or not actually be real usable NVRAM but instead SSDs.
  • Targon - Wednesday, June 19, 2019 - link

    Intel and NVIDIA decided against joining the Gen-Z Consortium, so I doubt that Intel or NVIDIA will support it for at least 5-7 more years.
  • Luffy1piece - Wednesday, June 19, 2019 - link

    Don't know about Nvidia, but Intel has a conflict of interest coz they wanna push their own memory and interconnect solutions, and tap into that TAM as shown in their latest investor meeting: https://s21.q4cdn.com/600692695/files/doc_presenta...

    Thus my hope is with AMD. It's getting better with its CPUs, and Gen-Z could be a really good USP for it. It would need the support of memory and equipment manufacturers, but the good news is that, other than Intel and Nvidia, pretty much every big-name company is a member of the Gen-Z consortium.
  • ats - Thursday, June 20, 2019 - link

    And all the big names that actually do anything are now also members of CXL and the PCI SIG. AMD will go with what the PCI SIG supports, which will be CXL, which runs on PCIe.

    It is easy to sign onto a working group, it has nothing to do with you actually using the solution the working group is pushing. In most cases, you join in just to get the info.

    Look at the history. InfiniBand was the previous Gen-Z. It had support from all the big names. EVERYONE at the time: Compaq, Dell, HP, IBM, Intel, Microsoft, Sun. That was quite literally the entirety of the computer universe at the time. Gen-Z is basically a carbon copy and will largely have the same issues.

    PCI SIG is VERY VERY good at what it does. It has existed forever. It basically owns I/O. If they aren't pushing an I/O standard, it has zero chance to be mainstream. They know what they are doing.
  • Luffy1piece - Thursday, June 20, 2019 - link

    PCI SIG took 7 years to get from 3.0 to 4.0, so I won't call it "very very good". In fact that slow progress is one of the main reasons behind alternatives like Gen-Z, esp. when you consider the growth in bandwidth requirements in data-centers.

    It is indeed hard to change the status quo, esp. when there's a monopoly (like Intel), but the future is not always the same as the past, and technologies change all the time. At the end of the day the market's needs decide everything.

    The PC segment is shifting to all SaaS – everything accessed through a browser, even gaming very soon – and as a result the priorities are changing to portability and longer battery life. Therefore there are fewer (not zero) chances for my dream of a Gen-Z enabled laptop in which I can swap in new NVRAM as it arrives in the future. CPU single-thread progress of 3% YoY is disappointing, so de-coupling memory from the CPU and encouraging competition in the memory market would be a good performance boost. Lastly, I also like the idea of one interconnect tech covering all connections, from memory to I/O to networking, and having excess bandwidth for future VR, 16K, or whatever else.

    On the other hand, PC sales have been decreasing since 2011, and are projected to dwindle more and more every year. Meanwhile data-centers have been growing and will continue to grow – even Intel has officially changed its branding from PC-centric to data-centric. Gen-Z's current bandwidth of 200 GB/s is something PCIe will only approach with 6.0, which is due in 2021 if everything goes well – and that's just the spec, let alone actual implementations. Therefore I doubt data-centers will just wait years for the PCI SIG to figure it out and ignore a solution that's already available, which is probably why I've already seen Gen-Z in AMD HPC presentations.
