Memory Subsystem: Bandwidth

Measuring the full bandwidth potential of a system with John McCalpin's STREAM bandwidth benchmark is getting increasingly difficult on the latest CPUs, as core and memory channel counts have continued to grow. As you can see from the results below, it is not easy to measure bandwidth: the results vary wildly depending on the settings you choose.

Memory: STREAM Bandwidth

System | Compiler & OS settings | Result
Cavium ThunderX2 | GCC 7.2 binary: -O2 -mcmodel=large -fopenmp -DVERBOSE -fno-PIC, OMP_PROC_BIND=spread | 241 GB/s
Cavium ThunderX2 | GCC 7.2 binary: -Ofast -fopenmp -static, OMP_PROC_BIND=spread | 157 GB/s
Cavium ThunderX2 | GCC 7.2 binary: OMP_PROC_BIND not configured | 118 GB/s
Intel Xeon 8176 | ICC binary: -fast -qopenmp -parallel, KMP_AFFINITY=verbose,scatter | 183 GB/s
Intel Xeon 8176 | GCC binary: -Ofast -fopenmp -static, OMP_PROC_BIND=spread | 151 GB/s
Intel Xeon 8176 | GCC binary: -Ofast -fopenmp -static, OMP_PROC_BIND not configured | 150 GB/s

Theoretically, the ThunderX2 has 33% more bandwidth available than an Intel Xeon, as the SoC has eight memory channels compared to Intel's six. However, these high bandwidth numbers can only be achieved under very specific conditions and require quite a bit of tuning to avoid reaching out to remote memory. In particular, we have to ensure that threads don't migrate from one socket to the other.

We first tried to achieve the best results on both architectures. In the case of Intel, the ICC compiler always produced better results thanks to some low-level optimizations inside the STREAM loops. In the case of Cavium, we followed Cavium's instructions. So strictly speaking these results are not directly comparable, but they should give you an idea of what kind of bandwidth these CPUs can achieve at their respective peaks. To be fair to Intel, with ideal settings (AVX-512) you should be able to achieve 200 GB/s.
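To make the tuning knobs concrete, here is a minimal STREAM-style triad sketch. It is illustrative only: the array size and build line are assumptions of ours, not the exact configuration behind the table above.

```c
// Minimal STREAM-style triad sketch (illustrative; not the official benchmark).
// Build, mirroring the GCC settings in the table above:
//   gcc -Ofast -fopenmp -static triad.c -o triad
// Run with threads spread (and pinned) across both sockets so all memory
// channels are used:
//   OMP_PROC_BIND=spread ./triad
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (80UL * 1024 * 1024)   // ~80M doubles per array: far bigger than L3

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;

    // First-touch initialization in parallel: each page is physically
    // allocated on the NUMA node of the thread that touches it first,
    // so the timed loop below mostly hits local memory.
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];             // triad: two loads, one store
    double dt = omp_get_wtime() - t0;

    // Three arrays of 8-byte doubles cross the memory bus per iteration.
    printf("Triad: %.1f GB/s\n", 3.0 * N * sizeof(double) / dt / 1e9);
    free(a); free(b); free(c);
    return 0;
}
```

Two details in this sketch matter for the spread seen in the table: OMP_PROC_BIND=spread pins the threads and distributes them evenly over both sockets, and the parallel first-touch initialization keeps each thread's pages on its own NUMA node. Omit either and part of the traffic has to cross the socket interconnect, which is likely a large part of why the unpinned runs score so much lower.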

Nevertheless, it is clear that the ThunderX2 system can deliver between 15% and 28% more bandwidth to its CPU cores. This works out to 235 GB/sec, or about 120 GB/sec per socket, which in turn is about three times more than what the original ThunderX was capable of.

Memory Subsystem: Latency

While bandwidth measurements are only relevant to a small part of the server market, almost every application is heavily impacted by the latency of the memory subsystem. To that end, we used LMBench to measure cache and memory latency. The numbers we looked at were "Random load latency stride=16 Bytes". Note that we're expressing the L3 cache and DRAM latency in nanoseconds since we don't have accurate L3 cache clockspeed values.

Memory: LMBench Latency

Mem Hierarchy | Cavium ThunderX DDR4-2133 | Cavium ThunderX2 DDR4-2666 | Intel Skylake 8176 DDR4-2666
L1 cache (cycles) | 3 | 4 | 4
L2 cache (cycles) | 40/80 (*) | 8-9 | 12
L3 cache 4-8 MB (ns) | N/A | 27-30 ns | 24-29 ns
Memory 384-512 MB (ns) | 103/206 (*) | 156-157 ns | 89-91 ns

The L2 cache of the ThunderX2 is accessed with very little latency, and with a single thread running, its L3 cache is competitive with Intel's complex L3 cache. Once we hit DRAM, however, Intel offers significantly lower latency.
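At its core, a random load latency test like this chases dependent pointers through a shuffled buffer, so every load must wait for the previous one to complete. Below is a minimal sketch of the idea; the 64 MB buffer, hop count, and 8-byte hops are our own illustrative choices rather than LMBench's exact stride=16 configuration.

```c
// Minimal dependent-load ("pointer chase") latency sketch, in the spirit of
// LMBench's random load latency test. Illustrative only.
// Build: gcc -O2 chase.c -o chase
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    size_t n = 64UL * 1024 * 1024 / sizeof(void *);   // 64 MB: well past L3
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));
    if (!buf || !idx) return 1;

    // Random cyclic permutation: each element points at the next hop, and
    // the random order defeats stride prefetchers, as LMBench intends.
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {              // Fisher-Yates shuffle
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n]];

    // Every load depends on the previous one, so the time per hop
    // approximates the load-to-use latency of the level the buffer lands in.
    size_t hops = 100UL * 1000 * 1000;
    void **p = (void **)buf[0];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++) p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per load (%p)\n", ns / hops, (void *)p);  // print p so the
    free(buf); free(idx);                           // chase isn't optimized out
    return 0;
}
```

Shrink the buffer to fit inside the L2 or L3 and the same loop reports that cache level's latency instead, which is essentially how the table above was filled in level by level.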

Memory Subsystem: TinyMemBench

To get a deeper understanding of the respective architectures, we also ran the open source TinyMemBench benchmark. The source code was compiled with GCC 7.2 and the optimization level was set to "-O3". The benchmark's testing strategy is described rather well in its manual:

Average time is measured for random memory accesses in the buffers of different sizes. The larger the buffer, the more significant the relative contributions of TLB, L1/L2 cache misses, and DRAM accesses become. All the numbers represent extra time, which needs to be added to L1 cache latency (4 cycles).

We tested with single and dual random read (no huge pages), as we wanted to see how the memory system coped with multiple read requests. 

One of the major weaknesses of the original ThunderX was that it did not support multiple outstanding misses. Memory-level parallelism is an important feature for any high-performance modern CPU core: it allows the core to hide cache misses that would otherwise starve the wide back end. A non-blocking cache is thus a key feature for wide cores.
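TinyMemBench's "dual random read" test exposes exactly this. Below is a minimal sketch of the technique, with an illustrative 64 MB buffer and hop count of our own choosing: it interleaves two independent pointer chains, so a non-blocking cache can overlap the two misses, while a blocking cache serializes them.

```c
// Sketch of single vs. dual random read, in the spirit of TinyMemBench.
// Two *independent* pointer chains give a non-blocking cache two outstanding
// misses to overlap; a blocking cache must service them one after the other.
// Illustrative only. Build: gcc -O2 dual.c -o dual
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Build a random cyclic pointer chain covering a buffer of `bytes` bytes,
// using the same shuffled-permutation trick as the latency sketch above.
static void **make_chain(size_t bytes) {
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {              // Fisher-Yates shuffle
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n]];
    free(idx);
    return buf;
}

static double walk_ns(void **p1, void **p2, size_t hops) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++) {
        p1 = (void **)*p1;                 // chain 1: dependent loads
        if (p2) p2 = (void **)*p2;         // chain 2: independent of chain 1
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (p1 == p2) puts("");                // keep the walks from being elided
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / hops;
}

int main(void) {
    size_t bytes = 64UL * 1024 * 1024, hops = 50UL * 1000 * 1000;
    void **a = make_chain(bytes), **b = make_chain(bytes);
    double single = walk_ns(a, NULL, hops);
    double dual   = walk_ns(a, b, hops);
    // Blocking cache (original ThunderX): dual is roughly 2x single.
    // ThunderX2 / Skylake: dual should be only ~15-30% higher.
    printf("single: %.2f ns/hop, dual: %.2f ns/hop (+%.0f%%)\n",
           single, dual, (dual / single - 1) * 100);
    free(a); free(b);
    return 0;
}
```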

The ThunderX2 does not suffer from that problem at all, thanks to its non-blocking cache. Just like the Skylake core in the Xeon 8176, a second read causes the overall latency to increase by only 15-30%, not 100%. Still, according to TinyMemBench, the Skylake core has tangibly better latencies. The data point at 512 KB is easy to explain: the Skylake core is still fetching from its fast L2, while the ThunderX2 core has to access its L3. But the numbers at 1 and 2 MB indicate that Intel's prefetchers offer a serious advantage, as the latency there stays at an average of the L2 and L3 cache latencies. Around 8 to 16 MB the latency numbers are close, but once we go beyond the L3 (64 MB), Intel's Skylake offers lower memory latency.

Comments

  • imaheadcase - Sunday, May 27, 2018 - link

    Yah, I tried that for a bit, it worked OK. But it was not foolproof, it missed some stuff.
  • repoman27 - Wednesday, May 23, 2018 - link

    Just to provide a counter point, this article made my day. And that’s coming entirely from intellectual curiosity—I don’t plan on deploying any servers with these chips in the near future. I always enjoy Johan’s writing, and was really looking forward to seeing how ThunderX2 would stack up. Many people are convinced that ARM is really only suitable in low power / mobile scenarios, but this is the chip that may finally prove otherwise. That has significant ramifications for the entire industry (including the consumer space), especially when you consider that Cavium could put out a TSMC 10nm or even 7nm shrink of ThunderX2 before Intel can get off of 14nm.
  • HStewart - Wednesday, May 23, 2018 - link

    This does not prove that ARM is suitable in the higher-end space - look at the per-core speed - it's extremely low compared to Intel and AMD server chips. Keep in mind it takes 128 total cores - running as a 4-way SMT system. And what about other operations - what about the virtual machine situation, where you have many virtual x86 machines on a VMware server?

    How about high end mathematical and vector logic?

    It does seem like ARM can run more threads - but maybe Intel or AMD has never had the need to

    I think this latest core battle is silly - it's really not the number of cores you have, but the combination of core type and speed along with the number of cores.
  • Wilco1 - Wednesday, May 23, 2018 - link

    It certainly does prove that Arm can do high end servers - the results clearly show IPC/GHz is very close on SPECINT. Base clock speeds are the same as the Intel cores, and that's the speed the server runs at when not idle. But there are more cores as you say, so who will win is obvious.

    Now imagine a next-gen 7nm version before Intel manages 10nm. Not a pretty picture, right?
  • HStewart - Wednesday, May 23, 2018 - link

    Ok, I have learned to agree to disagree with some people.

    Can this server run the VMware server?

    https://kb.vmware.com/s/article/1003882

    The answer is no - that's just one example, there are many more.

    On 10nm - it's not the number that matters - it's the technology behind it. Intel supposedly has an i3 and a Y-series CannonLake coming this year - probably more.
  • Wilco1 - Wednesday, May 23, 2018 - link

    There are plenty of VMs for Arm, so virtualization is not an issue.

    10nm will be behind 7nm even if it ends up as originally promised and not using relaxed rules to become viable for volume production.
  • ZolaIII - Thursday, May 24, 2018 - link

    When optimized for the NEON SIMD extension, things change dramatically. Although NEON isn't exactly the best SIMD implementation, the numbers nevertheless speak for themselves.
    https://blog.cloudflare.com/neon-is-the-new-black/
    The Centriq is a bit pricier and a bit slower than this one, but the main point is that it was built on lithography comparable to Intel's current 14nm. So you get cheaper hardware, which can be packaged more densely and will consume much less power while being comparable in performance. A triple win (initial cost, cost of ownership, and scaling) - but it still isn't a turnkey solution, which isn't crucial for the big vendor server farms anyway.
  • name99 - Thursday, May 24, 2018 - link

    ARM (and this particular chip) aren't trying to solve every problem in the world. They're trying to offer a better (cheaper) solution for a PARTICULAR subset of customers.

    If you think such customers don't exist, then why do you think Intel has such a wide range of Xeons, including e.g. all those Xeon Silvers that only turbo up to 3 GHz, or Xeon Golds that max out at 2.8 GHz?
  • lmcd - Thursday, May 24, 2018 - link

    Second page: supports SR-IOV, which is important for KVM and Xen. If you're not aware, Xen and KVM are powerful virtualization solutions that cover the feature set of VMWare quite nicely.
  • HStewart - Wednesday, May 23, 2018 - link

    "I really think Anandtech needs to branch into different websites. Its very strange and unappealing to certain users to have business/consumer/random reviews/phone info all bunched together."

    I differ on this - I don't think AnandTech should concentrate just on gaming - that focus is rather old school - I am not sure about mobile phones in the mess of all this.

    But comparing ARM CPUs to Intel/AMD is an interesting subject. It's basically the RISC vs CISC discussion - yes, RISC can do operations quicker in some cases - but by definition of the architecture they are reduced in what they do. For example, it would take RISC a ton of instructions to execute a single AVX-style operation.

    This article is the closest I have seen in comparing ARM vs x86 based machines - even though I see some holes, it comes close - but being Linux-only leaves out why people purchase such machines - I think the virtual machine server market is huge - but like everything else on the internet, that is just an opinion.
