For users keeping track of AMD’s rollout of its new Zen microarchitecture, stage one was last week’s launch of Ryzen, its new desktop-oriented product line. Stage three will be the APU launch, focused mainly on mobile parts. In the middle is stage two, Naples, and arguably the meatier element of AMD’s Zen story.

A lot of fuss has been made about Ryzen and Zen, marking AMD’s return to high-performance x86. If you go by column inches, the consumer-focused Ryzen platform is the most talked about and, many would argue, the most important. In our interview with Dr. Lisa Su, CEO of AMD, she described the launch of Ryzen as a big hurdle in that journey. In the next breath, however, Dr. Su listed Naples as another big hurdle, and if you spend some time with any of the regular technology industry analysts, they will tell you that Naples is where AMD’s biggest chunk of the pie is. Enterprise is where the money is.

So while the consumer product line gets the column inches, the enterprise product line gets the profits and high margins. Launching an enterprise product that gains even a few points of market share from the very large blue incumbent can add billions of dollars to the bottom line, as well as spur some innovation now that there are two big players on the field. One could argue there are three, if you consider that ARM holds a few niche areas; however, one of the big barriers to ARM adoption, aside from the lack of a high-performance single core, is the transition from x86 to ARM instruction sets, requiring a rewrite of code. If AMD can rejoin as a big player in x86 enterprise, it puts a brake on some of ARM’s ambitions while aiming to take a big enough chunk out of Intel.

With today’s announcement, AMD is setting the scene for its upcoming Naples platform. Naples will not be the official name of the product line; as we discussed with Dr. Su, Opteron is one option being debated internally at AMD as the product name. Nonetheless, Naples builds on Ryzen, using the same core design but implementing it in a big way.

The top end Naples processor will have a total of 32 cores, with simultaneous multi-threading (SMT), to give a total of 64 threads. This will be paired with eight channels of DDR4 memory, up to two DIMMs per channel for a total of 16 DIMMs, and altogether a single CPU will support 128 PCIe 3.0 lanes. Naples also qualifies as a system-on-a-chip (SoC), with a measure of internal IO for storage, USB and other things, and thus may be offered without a chipset.

Naples will be offered as either a single-processor platform (1P) or a dual-processor platform (2P). In dual-processor mode, giving a system 64 cores and 128 threads, each processor will use 64 of its PCIe lanes as a communication bus between the two sockets as part of AMD’s Infinity Fabric. The Infinity Fabric uses a custom protocol over these lanes, but the bandwidth is designed to be on the order of PCIe. With each CPU spending 64 lanes talking to the other, each can still offer 64 lanes to the rest of the system, for a total of 128 PCIe 3.0 lanes once again.
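
To put some quick numbers on that lane arrangement, the accounting below is our own sketch in Python, using only the figures AMD has quoted so far (128 lanes per CPU, 64 of them repurposed for Infinity Fabric in a 2P system):

    # Rough PCIe lane accounting for Naples, based only on AMD's stated
    # figures: 128 PCIe 3.0 lanes per CPU, with 64 of them carrying the
    # Infinity Fabric link in a dual-socket system.
    LANES_PER_CPU = 128
    IF_LANES_IN_2P = 64

    def lanes_for_io(sockets: int) -> int:
        """PCIe lanes left over for the rest of the system."""
        if sockets == 1:
            return LANES_PER_CPU
        # In 2P, each CPU gives up 64 lanes to talk to its neighbour.
        return sockets * (LANES_PER_CPU - IF_LANES_IN_2P)

    print(lanes_for_io(1))  # 128 lanes in a single-socket system
    print(lanes_for_io(2))  # 128 lanes again in a dual-socket system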

On the memory side, with eight channels and two DIMMs per channel, AMD is stating that it officially supports up to 2TB of DRAM per socket, or 4TB in a dual-socket server. The total memory bandwidth available to a single CPU clocks in at 170 GB/s.
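
As a quick sanity check, 170 GB/s lines up with eight channels of DDR4-2666; note that the memory speed here is our assumption, as AMD has not confirmed the officially supported DRAM frequency for Naples:

    # Back-of-the-envelope check on the quoted 170 GB/s per socket.
    # Assumption (ours, not AMD's): eight channels of DDR4-2666, with each
    # 64-bit channel moving 8 bytes per transfer.
    CHANNELS = 8
    TRANSFERS_PER_SECOND = 2666e6   # DDR4-2666
    BYTES_PER_TRANSFER = 8

    bandwidth_gbs = CHANNELS * TRANSFERS_PER_SECOND * BYTES_PER_TRANSFER / 1e9
    print(f"{bandwidth_gbs:.1f} GB/s")  # ~170.6 GB/s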

While not specifically mentioned in today’s announcement, we do know that Naples is not a single monolithic die on the order of 500mm2 or more. Naples uses four of AMD’s Zeppelin dies (the Ryzen dies) in a single package. With each Zeppelin die coming in at 195.2mm2, a monolithic equivalent would total just over 780mm2 of silicon and around 19.2 billion transistors – far bigger than anything GlobalFoundries has ever produced, let alone attempted at 14nm. During our interview with Dr. Su, we postulated that multi-die packages would be the way forward on future process nodes given the difficulty of creating such large, imposing dies, and Dr. Su’s response indicated that this is a prominent direction for AMD.
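
The silicon arithmetic is simple enough to lay out; the per-die figures below are the published Ryzen numbers (195.2mm2 and roughly 4.8 billion transistors per Zeppelin die), and summing the four dies as if they were one chip is purely our illustration:

    # Adding up four Zeppelin dies as if Naples were a single monolithic die.
    # Per-die area and transistor count are the published Ryzen figures.
    DIES_PER_PACKAGE = 4
    DIE_AREA_MM2 = 195.2
    TRANSISTORS_PER_DIE = 4.8e9

    print(DIES_PER_PACKAGE * DIE_AREA_MM2)         # ~780.8 mm^2 of silicon
    print(DIES_PER_PACKAGE * TRANSISTORS_PER_DIE)  # ~19.2 billion transistors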

Each die provides two memory channels, which brings us up to eight channels in total. However, each die has so far only exposed 16 PCIe 3.0 lanes (24 if you count the PCH/NVMe lanes), meaning that some form of mux/demux, PCIe switch, or accelerated interface is being used to reach 128 lanes per socket. This could be extra silicon on the package, given that AMD has so far built only a single die variant of its Zen design.

Note that we’ve seen multi-die packages before in previous products from both AMD and Intel. Despite both companies experimenting with multi-die or 2.5D technology (AMD with Fury, Intel with EMIB), we are led to believe that these CPUs are similar to previous multi-chip designs, albeit with Infinity Fabric running between the dies. At what bandwidth, we do not know at this point. It is also pertinent to note that there is a lot of talk going around about the strength of AMD's Infinity Fabric, as well as how threads are managed within a single die, which contains two core complexes of four cores each. This is something we are investigating on the consumer side, but it will likely be very relevant on the enterprise side as well.

In the land of benchmark numbers we can’t verify (yet), AMD showed demonstrations at the recent Ryzen Tech Day. The main demonstration was a sparse matrix calculation on a 3D dataset for seismic analysis. In this test, solving a 15-diagonal matrix of 1 billion samples took 35 seconds on the Intel machine versus 18 seconds on the AMD machine (both using 44 cores and DDR4-1866). When the AMD system was allowed to use its full 64 cores and DDR4-2400 memory, it shaved another four seconds off. Again, we can’t verify these results, and it’s a single data point, but a banded sparse solver of this kind is a reasonable stand-in for an enterprise or HPC workload. We were told that the clock frequencies for each chip were at stock, although AMD did say that Naples clocks are not yet finalized.
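
We obviously cannot reproduce AMD’s demo, but for readers curious what a ‘15-diagonal’ system looks like in code, here is a heavily shrunken sketch; the matrix size, values, and choice of solver are ours, not AMD’s:

    # Toy version of the demoed workload: solve a sparse linear system whose
    # matrix has 15 non-zero diagonals (offsets -7..+7). AMD's demo used on
    # the order of a billion unknowns; this sketch uses 100,000 and a generic
    # conjugate gradient solver, purely to show the structure of the problem.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 100_000
    offsets = list(range(-7, 8))                  # 15 diagonals
    values = [20.0 if k == 0 else -1.0 for k in offsets]
    A = sp.diags(values, offsets, shape=(n, n), format="csr")  # banded matrix

    b = np.ones(n)
    x, info = cg(A, b)                            # iterative solve
    print("converged" if info == 0 else f"cg returned {info}")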

What we don’t know yet are power numbers, frequencies, the processor list, pricing, partners, segmentation, and all the other meaty details. We expect AMD to mount a strong attack on the 1P/2P server market, which is where 99% of the enterprise is focused, particularly where high-performance virtualization or storage is needed. How Naples migrates into the workstation space is an unknown, but I hope it does. We’re working with AMD to secure samples for Johan and me in advance of the Q2 launch.

Comments

  • ACE76 - Tuesday, March 7, 2017 - link

    Now Intel's real worry begins... I highly doubt they cared much about the enthusiast market as it's very small relative to the company's overall revenue... Data center penetration from AMD is where Intel's wallet is going to hurt the most.
  • jjj - Tuesday, March 7, 2017 - link

    "a few points of market share from the very large blue incumbent can implement billions of dollars to the bottom line"

    You are overestimating the market. It's maybe 14 billion this year and margins are great but so is retail and Summit Ridge.
    A few % in server is little and even 20% is a lot less than what CPU desktop can do mid term. Summit Ridge and Pinnacle Ridge can be huge for AMD as there is a lot of demand for more cores. Think how many users just refused to upgrade to another 4 cores over the last few years.
    Ofc they do need to show some gains in games, improve IPC next year and push clocks higher to get the most out of it.
  • PixyMisa - Tuesday, March 7, 2017 - link

    The server market is around $14 billion per quarter.

    Or were you referring to Intel's revenue on Xeons? That probably is around $14 billion per year.
  • jjj - Tuesday, March 7, 2017 - link

    I was referring to the server CPU market, and yeah, Intel has over 99% of the revenue.
    In units it's 22-23 million per year - units as in CPUs (sockets), not server (the box) units.

    The PC CPU/APU market is 30+ billion but Summit Ridge has really nice ASPs and margins in retail.
  • Twirrim - Tuesday, March 7, 2017 - link

    "the transition from x86 to ARM instruction sets, requiring a rewrite of code"

    That's not really true. Most languages and compilers already fully support ARM. All that is needed is to recompile the code for the target architecture.

    The only people who would need to rewrite code are those who have in-lined hand-tuned assembly code. Which isn't the biggest segment of the market.
  • fackamato - Tuesday, March 7, 2017 - link

    Moving code to another architecture is not as easy as just recompiling. All those man hours spent fine-tuning the code to get around compiler bugs and increase performance have to be spent again doing the same thing for the different architecture.
  • stephenbrooks - Tuesday, March 7, 2017 - link

    Problem with recompiling code (e.g. to ARM) for these sorts of specialist software is that you might not be the one with the source code. You might be forking out $10k per license and then the software vendor just gives you a Windows EXE for example. You ask the vendor about ARM and Linux and they say "Yeah that's interesting, we'd like to do that, but we don't have the staff."
  • stephenbrooks - Tuesday, March 7, 2017 - link

    And if it's open source, on day 1 you say "Yay! I have the source, I'll just type make", then "hmm what are these weird flags and switches in the makefile?", then "why are there 15 makefiles that call each other and the ARM option isn't written in 3 of them?"
    When you've sorted that out (on day 20) you realise you now need an ARM build of all 136 libraries the program relies on or links to. Fortunately, it's all open source so you download them all. Hmm. It doesn't link. It appears this software relies on an outdated version of the API for four of these libraries. Now you have to search the repositories and reconstruct the old version of the source for four codes you know nothing about, at a particular point in time. See how this can get time consuming?
  • deltaFx2 - Thursday, March 9, 2017 - link

    @ Twirrim: ARM support isn't the same as ARM support with performance. There are enough ISA differences, such as the memory model (ARM's is weaker), which make it non-trivial to be performant and correct at the same time. These can certainly be solved but first, you need to know that the problem exists and then, spend time and resources to fix it. x86-64 has had over 2 decades of optimization behind it that a simple recompile cannot fix. I've said this before on an AT forum: if Naples is half good, it pushes the ARM server market back by ~5 years. Naples is more than half-good; it certainly gives more threads, more I/O and more memory channels than both Qualcomm and Cavium. We know ST and MT perf is good from the Ryzen launch. The reason the market wants choice is to keep Intel's prices down. It doesn't need choice in ISA, just choice in vendor.
  • jihe - Tuesday, March 7, 2017 - link

    Based on Ryzen performance these will be very very good. Hats off to AMD.
