With the battleground moving from single-core performance to multi-core and accelerator-driven computing, a new war is being fought over how data moves between different compute resources. The Interconnect Wars are truly here, and the field just got a lot more complicated. In recent years we have seen NVLink, CCIX, and Gen-Z each pitch themselves as the next generation of host-to-device and device-to-device high-speed interconnect, with a variety of different features. Now CXL, or Compute Express Link, is taking to the field.

This new interconnect, for which the version 1.0 specification is being launched today, started in the depths of Intel’s R&D labs over four years ago; however, it is being launched as an open standard, headed up by a consortium of nine companies. These companies are Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft, which as a collective was described by one of the companies as ‘the biggest group of influencers driving a modern interconnect standard’.

In our call, we were told that the specification has actually been in development at Intel for a few years, and was only recently spun out to drive a new consortium around an open cache-coherent interconnect standard. The consortium of the nine founding companies will be incorporated later this year, operating under US rules. The consortium states that members will be free to use the IP in any device, and aside from the nine founders, other companies can become contributors and/or adopters, depending on whether they want to help contribute to the next revision of the standard or simply adopt the technology.

At its heart, Compute Express Link (CXL) will initially begin as a cache-coherent host-to-device interconnect, focusing on GPUs and FPGAs. It will use the PCIe 5.0 physical and electrical specifications for connectivity, layering protocols for I/O and memory on top, along with coherency interfaces. The focus of CXL is to help accelerate AI, machine learning, media services, HPC, and cloud applications. With Intel being at the heart of this technology, we might expect to see future Intel GPUs and FPGAs connecting in a PCIe slot in ‘CXL’ mode. It will be interesting to see if this becomes an additional element of the company’s product segmentation strategy.
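For a sense of the raw bandwidth CXL inherits by riding on the PCIe 5.0 physical layer, here is a minimal back-of-the-envelope sketch. The per-lane transfer rates (8, 16, and 32 GT/s with 128b/130b encoding) come from the published PCIe specifications; the script itself is purely illustrative arithmetic and not anything defined by the CXL specification, and it ignores protocol overheads beyond line encoding.

```python
# Rough, theoretical per-direction PCIe bandwidth by generation.
# Usable throughput per lane = transfer rate * (128/130) / 8 bits-per-byte.

GENERATIONS = {
    "PCIe 3.0": 8.0,   # GT/s per lane, 128b/130b encoding
    "PCIe 4.0": 16.0,  # GT/s per lane, 128b/130b encoding
    "PCIe 5.0": 32.0,  # GT/s per lane, 128b/130b encoding (the PHY CXL 1.0 targets)
}

ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line code overhead
LANES = 16                       # a full x16 slot

for gen, gtps in GENERATIONS.items():
    gb_per_lane = gtps * ENCODING_EFFICIENCY / 8  # GB/s per lane, per direction
    print(f"{gen}: ~{gb_per_lane:.2f} GB/s per lane, "
          f"~{gb_per_lane * LANES:.0f} GB/s per direction at x{LANES}")
```

Running this gives roughly 16 GB/s, 32 GB/s, and 63 GB/s per direction for a x16 link on PCIe 3.0, 4.0, and 5.0 respectively, which is the headroom a CXL-attached accelerator would have to play with.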

While some of the competing standards have 20-50+ members, Compute Express Link actually has more founding members than PCIe (5) or USB (7) had. That being said, there are a few key names in the industry missing: Amazon, Arm, AMD, Xilinx, etc. Other standards playing in this space, such as CCIX and Gen-Z, share members with CXL, and when questioned on this, the CXL representatives pointed to the positive statement Gen-Z contributed to the CXL press release: there is a lot of synergy between CXL and Gen-Z, and they expect the standards to dovetail rather than overlap. It should be pointed out that Xilinx, Arm, and AMD have already committed to CCIX, whether as stated future support or in products at some level, making this perhaps another VHS/Betamax battle. The other notable absentee is NVIDIA, which is more than happy with NVLink and its association with IBM.

The Compute Express Link announcement is part standard and part recruitment drive: the as-yet-unincorporated consortium is looking for contributors and adopters. Other CPU architectures beyond x86 are more than welcome, with the Intel representative stating that he is happy to jump on a call to explain the company’s motivation behind the new standard. Efforts are already underway to develop the CXL 2.0 specification.

CXL Promoter Statements of Support

Dell EMC

“Dell EMC is delighted to be part of the CXL Consortium and its all-star cast of promoter companies. We are encouraged to see the true openness of CXL, and look forward to more industry players joining this effort. The synergy between CXL and Gen-Z is clear, and both will be important components in supporting Dell EMC’s kinetic infrastructure and this data era.”

Robert Hormuth, Vice President & Fellow, Chief Technology Officer, Server & Infrastructure Systems, Dell EMC

Facebook

“Facebook is excited to join CXL as a founding member to enable and foster a standards-based open accelerator ecosystem for efficient and advanced next generation systems.”

Vijay Rao, Director of Technology and Strategy, Facebook

Google

“Google supports the open Compute Express Link collaboration. Our customers will benefit from the rich ecosystem that CXL will enable for accelerators, memory, and storage technologies.”

Rob Sprinkle, Technical Lead, Platforms Infrastructure, Google LLC

HPE

“At HPE we believe that being able to compose compute resources over open interfaces is critical if our industry is to keep pace with the demands of a data and AI-driven future. We applaud Intel for opening up the interface to the processor. CXL will help customers utilize accelerators more efficiently and dovetails well with the open Gen-Z memory-semantic interconnect standard to aid in building fully-composable, workload-optimized systems.”

Mark Potter, HPE CTO and Director of Hewlett Packard Labs

Huawei

“Being a leading provider in the industry, Huawei will play an important role in the contribution of technology specification. Huawei’s intelligent computing products which incorporates Huawei’s chip, acceleration components and intelligent management together with innovative optimized system design, can deliver end-to-end solutions which significantly improves the rollout and system efficiency of data centers.”

Zhang Xiaohua, GM of Huawei’s Intelligent Computing BU

Intel

“CXL is an important milestone for data-centric computing, and will be a foundational standard for an open, dynamic accelerator ecosystem. Like USB and PCI Express, which Intel also co-founded, we can look forward to a new wave of industry innovation and customer value delivered through the CXL standard.”

Jim Pappas, Director of Technology Initiatives, Intel Corporation

Microsoft

“Microsoft is joining the CXL consortium to drive the development of new industry bus standards to enable future generations of cloud servers. Microsoft strongly believes in industry collaboration to drive breakthrough innovation. We look forward to combining efforts of the consortium with our own accelerated hardware achievements to advance emerging workloads from deep learning to high performance computing for the benefit of our customers.”

Dr. Leendert van Doorn, Distinguished Engineer, Azure, Microsoft

Gen-Z Consortium

“As a Consortium founded to encourage an open ecosystem for the next-generation memory and compute architectures, Gen-Z welcomes Compute Express Link (CXL) to the industry and we look forward to opportunities for future collaboration between our organizations.”

Kurtis Bowman, President, Gen-Z Consortium

Comments

  • A5 - Monday, March 11, 2019

    IIRC, PCIe 5.0 PHYs are supposed to hit in 2020. It isn't going to be a long gap like PCIe 3 -> 4.
  • p1esk - Monday, March 11, 2019

    Wait, PCIe 5 next year? What happened to PCIe 4?
  • Kevin G - Monday, March 11, 2019

    It has been shipping in the POWER9 since late 2017. High end IO devices have been supporting it but with only a handful of host platforms released to date supporting it, there hasn't been much traction. Intel's migration to 10 nm was also supposed to bring PCIe 4.0 to the mainstream. AMD is shipping PCIe 4.0 host devices later this year.
  • p1esk - Monday, March 11, 2019

    So basically you're saying we *might* see PCIe 4 "later this year", but PCIe 5 is coming next year?
  • mode_13h - Monday, March 11, 2019

    AMD already announced (limited) PCIe 4.0 support in (some) *existing* AM4 motherboards!
  • mode_13h - Monday, March 11, 2019

    PCIe 4.0 is coming to an AMD 7nm CPU near you!
  • mode_13h - Tuesday, March 12, 2019

    Also, AMD has professional/datacenter GPUs based on 7 nm Vega that support it. It'll be disabled in Radeon VII, however.
  • sorten - Monday, March 11, 2019

    My understanding is that PCIe 4 and 5 are both coming out this year, with PCIe 4 being consumer focused and PCIe 5 being data center focused. There really aren't any consumer workloads that would benefit from a quadrupling (3.0 -> 4.0 -> 5.0) of bandwidth at this point.
  • sorten - Monday, March 11, 2019

    *edit : PCIe 4.0, as others have noted, is already out. I meant that both will be available this year and 4.0 should be available on mainstream AMD and maybe Intel consumer boards this year.
  • Alexvrb - Tuesday, March 12, 2019

    Actually there is a consumer use-case that will rapidly take advantage of faster PCIe. M.2 slots. They're kinda short in the lanes department. I really wish M.2 was designed to be more robust in terms of lanes, power, and clearance specs. Also wish they didn't have SATA as part of the spec, at all.

    Anyway, PCIe 4.0 is still beneficial even if it's short-lived. If you buy a board mid-2019 with PCIe 4, as opposed to one with 3.0 only, you still have more bandwidth available for a PCIe 5 device. That includes the aforementioned M.2 devices, once 4/5 models hit the market. Obviously PCIe 5 would be better to have, but it's a matter of what's the best thing available when you build your rig.
