This week, Micron announced sample availability of its first CXL 2.0 memory expansion modules for servers, which promise easy and relatively inexpensive DRAM subsystem expansion.

Modern server platforms from AMD and Intel boast formidable 12- and 8-channel DDR5 memory subsystems, offering up to 460.8 GB/s and 307.2 GB/s of bandwidth and up to 6 TB and 4 TB of capacity per socket, respectively. But some applications consume all the DRAM they can get and demand more. To satisfy such applications, Micron has developed its CZ120 CXL 2.0 memory expansion modules, which carry 128 GB or 256 GB of DRAM and connect to a CPU over a PCIe 5.0 x8 interface.
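These figures follow directly from DDR5-4800's per-channel rate. Here is a minimal sketch of the arithmetic, assuming DDR5-4800 (4800 MT/s on a 64-bit channel) and, for the capacity ceiling, two DIMMs per channel populated with 256 GB RDIMMs:

```python
# Back-of-envelope math for the platform figures quoted above.
# Assumes DDR5-4800: 4800 MT/s across a 64-bit (8-byte) channel,
# and 2 DIMMs per channel at 256 GB per RDIMM for the capacity ceiling.

per_channel_gb_s = 4800e6 * 8 / 1e9   # 38.4 GB/s per DDR5-4800 channel

for name, channels in [("AMD EPYC, 12-channel", 12), ("Intel Xeon, 8-channel", 8)]:
    bandwidth = channels * per_channel_gb_s
    capacity_tb = channels * 2 * 256 / 1024
    print(f"{name}: {bandwidth:.1f} GB/s, up to {capacity_tb:.0f} TB per socket")

# AMD EPYC, 12-channel: 460.8 GB/s, up to 6 TB per socket
# Intel Xeon, 8-channel: 307.2 GB/s, up to 4 TB per socket
```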

"Micron is advancing the adoption of CXL memory with this CZ120 sampling milestone to key customers," said Siva Makineni, vice president of the Micron Advanced Memory Systems Group.

Micron's CZ120 memory expansion modules pair Microchip's SMC 2000-series smart memory controller, which supports two 64-bit DDR4/DDR5 channels, with Micron's DRAM chips made on the company's 1α (1-alpha) production node. Each CZ120 module delivers up to 36 GB/s (measured by running an MLC workload with a 2:1 read/write ratio on a single module), putting it only slightly behind a DDR5-4800 RDIMM (38.4 GB/s) but orders of magnitude ahead of a NAND-based storage device.
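A rough link-budget check shows why that figure is plausible. The sketch below assumes PCIe 5.0's 32 GT/s per lane with 128b/130b encoding and ignores CXL protocol overhead:

```python
# PCIe 5.0 x8 link budget versus the CZ120's measured 36 GB/s.
# Assumptions: 32 GT/s per lane, 128b/130b encoding, full-duplex link;
# CXL.mem protocol overhead (flits, headers) is ignored here.

lanes = 8
gt_per_s = 32                       # PCIe 5.0 transfer rate per lane
encoding = 128 / 130                # 128b/130b line encoding

per_direction = lanes * gt_per_s / 8 * encoding   # ~31.5 GB/s each way

reads = 36 * 2 / 3                  # 24 GB/s of the 2:1 read/write mix
writes = 36 * 1 / 3                 # 12 GB/s

print(f"Per-direction budget: {per_direction:.1f} GB/s")
print(f"Reads  {reads:.0f} GB/s fit: {reads <= per_direction}")
print(f"Writes {writes:.0f} GB/s fit: {writes <= per_direction}")
```

Note that the 36 GB/s aggregate exceeds a single direction's ~31.5 GB/s budget, which is only possible because reads and writes travel in opposite directions over a full-duplex link. The two DDR channels behind the controller (2 × 38.4 = 76.8 GB/s) can comfortably outrun the link, so the CXL interface, not the DRAM behind it, is presumably the limiting factor.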

Micron asserts that adding four of its 256 GB CZ120 CXL 2.0 Type 3 expansion modules to a server populated with twelve 64 GB DDR5 RDIMMs can increase memory bandwidth by 24%, which is significant. Perhaps more significant is that the extra 1 TB of memory enables such a server to handle nearly double the number of database queries daily.
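The arithmetic behind that configuration is easy to reconstruct. A sketch, assuming DDR5-4800 RDIMMs (one per channel) and the 36 GB/s per-module figure above:

```python
# Capacity and bandwidth deltas for Micron's example configuration.
# Baseline: 12 x 64 GB DDR5-4800 RDIMMs; expansion: 4 x 256 GB CZ120 modules.

base_capacity = 12 * 64            # 768 GB
added_capacity = 4 * 256           # 1024 GB, i.e. the "extra 1 TB"

base_bandwidth = 12 * 38.4         # 460.8 GB/s across 12 DDR5-4800 channels
added_bandwidth = 4 * 36           # 144 GB/s if all four modules sustain 36 GB/s

print(f"Capacity: {base_capacity} -> {base_capacity + added_capacity} GB "
      f"(+{added_capacity / base_capacity:.0%})")
print(f"Naive bandwidth ceiling: +{added_bandwidth / base_bandwidth:.0%}")  # ~+31%
```

The naive sum suggests up to roughly 31% more bandwidth, so Micron's quoted 24% presumably reflects what its test workload actually realizes once CXL link latency and scheduling overheads are taken into account.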

Of course, such an expansion consumes PCIe lanes and thus reduces the number of SSDs that can be installed in the machine. But the reward seems worthwhile, especially if Micron's CZ120 memory expansion modules end up cheaper than regular RDIMMs, or at least comparable in cost.
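For a sense of scale, assuming each CZ120 occupies an x8 link and a typical NVMe SSD occupies x4:

```python
# PCIe lane trade-off: four x8 CZ120 modules versus typical x4 NVMe SSDs.
modules, lanes_per_module = 4, 8
lanes_per_ssd = 4                   # a common NVMe SSD link width

lanes_used = modules * lanes_per_module          # 32 lanes
ssd_slots_forgone = lanes_used // lanes_per_ssd  # roughly 8 x4 SSD slots

print(f"{lanes_used} lanes consumed, roughly {ssd_slots_forgone} x4 SSD slots forgone")
```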

For now, Micron has only announced sample availability, and it is unclear when the company will start shipping its CZ120 memory expansion modules commercially. Micron says it has already tested the modules with major server platform developers, and its customers are presumably now validating and qualifying them against their machines and workloads, so it is reasonable to expect CZ120 deployments as early as 2024.

"We have been developing and testing our CZ120 memory expansion modules utilizing both Intel and AMD platforms capable of supporting the CXL standard," added Makineni. "Our product innovation coupled with our collaborative efforts with the CXL ecosystem will enable faster acceptance of this new standard, as we work collectively to meet the ever-growing demands of data centers and their memory-intensive workloads."

Source: Micron

Comments

  • nfriedly - Wednesday, August 9, 2023

    > PCIe 5.0 x8 interface

    That looks more like an x4 just from eyeballing the image(?)

    Although I suppose they could fit an x8 if they moved it off-center. Maybe it's just an old photo?
  • TeXWiller - Thursday, August 10, 2023

    The modules are in the E3.S form factor.
  • ballsystemlord - Wednesday, August 9, 2023

    > But some applications consume all DRAM they can get and demand more.

    Like web browsers...
  • James5mith - Wednesday, August 9, 2023

    "36 GB/s (measured by running an MLC workload with a 2:1 read/write ratio on a single module), putting it only slightly behind a DDR5-4800 RDIMM (38.4 GB/s) but orders of magnitude ahead of a NAND-based storage device."

    Really? No NAND based storage device can hit 10's of GB/s? Not even PCIe5 x4 drives?

    Also, order(s) indicates more than one order of magnitude higher. Even arguing that NAND can only do GB/s instead of 10's of GB/s is a single order of magnitude.
  • lmcd - Wednesday, August 9, 2023

    At 4K page size it's multiple orders of magnitude. Thanks for being argumentative though!
  • erinadreno - Friday, August 11, 2023

    No NAND-based storage can hit tens of GB/s when reading/writing at 64-bit granularity, which is the word width of current devices. You could buffer multiple R/W requests, but that quickly adds latency, on the order of hundreds of times the latency of a single R/W command.
  • Goku solos - Saturday, August 26, 2023

    Even if it could, it wouldn't matter because RAM is still 1000s of times lower latency than gen 5 SSDs. Hell, even the now-legacy Optane drives absolutely decimate gen 5 drives despite having nearly half the bandwidth, because sequential bandwidth hardly matters; it's the IOPS, particularly at low queue depths, and latency, where Optane is still like 5x faster. Now imagine instead of 5x it's 1000s. V-Cache is so beneficial because RAM takes the CPU like 5x as many cycles to get data from compared to L3. I'm sure the CPU would have ages to fantasise about those sweet gen 5 sequential bandwidth numbers as it takes the 250,000 cycles required to get mere bytes of data from the SSD instead of the 250 needed for RAM. Yeah, sounds like a bandwidth issue tbh 😂
