Intel Launches Optane DIMMs Up To 512GB: Apache Pass Is Here!
by Ian Cutress & Billy Tallis on May 30, 2018 2:15 PM EST
Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.
The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.
The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
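The overhead estimate above follows from simple arithmetic. As a sketch (the 64GB-per-package figure is an assumption, consistent with packages of four 128Gb dies):

```python
# Back-of-the-envelope check of the error-correction overhead estimate.
# Assumes ten 64GB packages per module (e.g. four 128Gb = 16GB dies each),
# matching the package count visible on the DIMMs.

packages = 10
gb_per_package = 64                      # assumed: 4 x 128Gb (16GB) dies
raw_gb = packages * gb_per_package       # 640 GB raw capacity
usable_gb = 512

optane_overhead = (raw_gb - usable_gb) / usable_gb   # spare bits per data bit
ecc_dram_overhead = 1 / 8                # standard ECC DIMM: 8 data + 1 ECC chip

print(optane_overhead)                   # 0.25
print(ecc_dram_overhead)                 # 0.125
print(optane_overhead / ecc_dram_overhead)  # 2.0 -> twice the ECC overhead
```

The 128GB of spare capacity on a 512GB module works out to 25% overhead, against the 12.5% (one extra chip per eight) of a conventional ECC DIMM.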
The Optane DC Persistent Memory modules are currently sampling and will be shipping for revenue later this year, but only to select customers. Broad availability is planned for 2019. In a similar strategy to how Intel brought Optane SSDs to market, Intel will be offering remote access to systems equipped with Optane DC Persistent Memory so that developers can prepare their software to make full use of the new memory. Intel is currently taking applications for access to this program. The preview systems will feature 192GB of DRAM and 1TB of Optane Persistent Memory, plus SATA and NVMe SSDs. The preview program will run from June through August. Participants will be required to keep their findings secret until Intel gives permission for publication.
Intel is not officially disclosing whether it will be possible to mix and match DRAM and Optane Persistent Memory on the same memory controller channel, but the 192GB DRAM capacity for the development preview systems indicates that they are equipped with a 16GB DRAM DIMM on every memory channel. Also not disclosed in today's briefing: power consumption, clock speeds, specific endurance ratings, and whether Optane DC Persistent Memory will be supported across the Xeon product line or only on certain SKUs. Intel did vaguely promise that Optane DIMMs will be operational for the normal lifetime of a DIMM, but we don't know what assumptions Intel is making about workload.
Intel has been laying the groundwork for application-level persistent memory support for years through their open-source Persistent Memory Development Kit (PMDK) project, known until recently as NVM Library. This project implements the SNIA NVM Programming Model, an industry standard for the abstract interface between applications and operating systems that provide access to persistent memory. The PMDK project currently includes libraries to support several usage models, such as a transactional object store or log storage. These libraries build on top of existing DAX capabilities in Windows and Linux for direct memory-mapped access to files residing on persistent memory devices.
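The programming model DAX enables is simply this: the application maps a file and manipulates it with ordinary loads and stores rather than read/write system calls. PMDK itself is a C library, but the idea can be sketched with a plain memory-mapped file in Python (the file name is hypothetical; on a real DAX filesystem backed by persistent memory, PMDK's `pmem_persist()` would take the place of the `flush()` call used here):

```python
# Sketch of DAX-style access: store data with ordinary memory writes into a
# mapped region instead of read()/write() syscalls. On real persistent
# memory, stores reach the media directly and a cache-flush primitive
# (PMDK's pmem_persist) marks the durability point; msync/flush stands in
# for that here.
import mmap
import os
import struct

path = "store.bin"               # hypothetical backing file standing in for pmem
with open(path, "wb") as f:
    f.truncate(4096)             # size the region up front, like a pmem pool

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    struct.pack_into("<Q", m, 0, 42)   # a store into the mapped region
    m.flush()                    # durability point (pmem_persist on real pmem)
    m.close()

# A later "run" sees the data without any deserialization step.
with open(path, "rb") as f:
    value, = struct.unpack("<Q", f.read(8))
print(value)                     # 42

os.remove(path)
```

PMDK's higher-level libraries layer transactions and allocation on top of this raw model, so a crash between stores doesn't leave the mapped structures half-updated.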
Optane SSD Endurance Boost
The existing enterprise Optane SSD DC P4800X initially launched with a write endurance rating of 30 drive writes per day (DWPD) for three years, and when it hit widespread availability Intel extended that to 30 DWPD for five years. Intel is now preparing to introduce new Optane SSDs with a 60 DWPD rating, still based on first-generation 3D XPoint memory. Another endurance rating increase isn't too surprising: Intel has been accumulating real-world reliability information about their 3D XPoint memory and they have been under some pressure from competition like Samsung's Z-NAND that also offers 30 DWPD with a more conventional flash-based memory.
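For a sense of scale, a DWPD rating converts directly into total bytes written. Taking the 750GB P4800X capacity as an example (the capacities of the upcoming 60 DWPD parts are not confirmed):

```python
# Convert a drive-writes-per-day rating into total petabytes written over
# the warranty period, using the 750GB P4800X capacity as an example.

capacity_tb = 0.75               # 750 GB drive
warranty_days = 365 * 5          # five-year rating period

def total_petabytes(dwpd):
    """Total data written if the drive is filled dwpd times every day."""
    return dwpd * capacity_tb * warranty_days / 1000

print(total_petabytes(30))       # ~41 PB at the current 30 DWPD rating
print(total_petabytes(60))       # ~82 PB at the upcoming 60 DWPD rating
```

Doubling the rating to 60 DWPD roughly doubles the warrantied write volume to over 80 petabytes on a 750GB drive.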
Comments
tracker1 - Thursday, May 31, 2018
Well, ideally, in terms of application programming, it would. Persistent direct memory access would be incredible. I keep thinking how much this could transform database applications or other large data maps. Right now, there's a lot of overhead in the OS->FS->RAM path to transform data, with further seeks, etc. If it were all direct memory, not having to flow through various FS/OS conventions to use... that could be incredible.
frenchy_2001 - Thursday, May 31, 2018
It would, and it's the long-term goal, but we're not there yet.
XPoint, ReRAM, MRAM and others are aiming for this, but they'd need faster access and better reliability/endurance.
Realistically, as long as we have a faster technology (DRAM), the new ones will not replace it, but supplement it.
It may make sense for *some* applications (huge databases, HPC with huge datasets) to get more local storage, even if slower, but most applications will behave better with limited, faster DRAM.
So, for the moment, they will ADD some intermediary persistent storage, not replace DRAM with it. It just bridges the gap between DRAM (ns scale) and storage (SSDs are µs scale, 1000x slower).
Yojimbo - Thursday, May 31, 2018
The point is that compared with DRAM it costs less, has higher capacity, and uses significantly less power per capacity. Compared with NAND it is faster and has much lower latency. There are applications that may benefit from such a mix of attributes.
I'm not sure what you mean by "very large gulf". That is a judgment on relative size. What is important is how it matches up with demands of applications. Machine learning and databases, for example, may see large benefits from using this type of memory. The only thing that's clear is that your off-the-cuff dismissal of its potential is inappropriate.
Yojimbo - Thursday, May 31, 2018
jordanclock, you can't estimate the performance of a part that operates through the memory bus by looking at a different part that runs through the I/O bus, especially as it relates to latency. Those numbers are about as useless as if you had pulled them from your ass.
Spunjji - Thursday, May 31, 2018
He could have been more polite, but you really have just made those numbers up (at best it's a semi-educated guess) so he's not wrong either.
jordanclock - Thursday, May 31, 2018
Yeah, but that's why I started my first comment with "correct me if I'm wrong." Not "berate me if you disagree." I admitted I am getting my numbers from the next closest related product AND that they are broad estimates. It just seems like everyone else is getting their panties in a bunch thinking that we can't discuss the potential performance until we have EXACT numbers.
Billy Tallis - Wednesday, May 30, 2018
20µs sounds pretty conservative, especially for read latency. PCIe and NVMe are responsible for at least half of that latency, judging by what's been reported for DRAM-backed NVMe drives.
jordanclock - Wednesday, May 30, 2018
That's what I was suspecting. Still, we're talking tens of microseconds versus tens of nanoseconds. Waaaay closer than we've ever been, but I sure wouldn't want it for my system's main memory yet!
tracker1 - Thursday, May 31, 2018
You might... I mean, let's say DRAM becomes more common in smaller amounts, and Optane becomes the bulk, with DRAM becoming another cache layer and Optane becoming direct-memory storage space.
Spunjji - Thursday, May 31, 2018
Nobody's trying to sell it to you as such, though. That's made pretty clear a few times over.