Arm Cortex-X4: Fastest Arm Core Ever Built (Again)

Diving further into Arm's new CPU core microarchitectures, we'll start with the Cortex-X4, which stands out as the most substantial advancement. Arm has consistently delivered double-digit improvements in instructions per cycle (IPC) with each iteration of its X-series, from the original Cortex-X1 through the Cortex-X2 and last year's Cortex-X3, and the Cortex-X4 continues that trend in 2023. The Cortex-X4 is specifically designed for cutting-edge flagship Android smartphones and other leading mobile devices built around Arm IP-based System-on-Chips (SoCs). Representing a subtle yet impactful enhancement over its predecessor, the Cortex-X4 further refines the capabilities of the Cortex-X3 core.

The Cortex-X4 is designed to deliver top-tier compute performance in mobile SoCs, particularly for demanding workloads like AAA gaming and bursty operations. It is Arm's highest-performing core to date, with an anticipated core clock speed of 3.4 GHz and a doubled L2 cache per core, 2 MB versus the 1 MB of last year's Cortex-X3. Despite these enhancements, Arm has kept the increase in the physical size of the core minimal, with the more complex X4 CPU core coming in at under a 10% die size increase (the additional L2 cache excluded).

As for power efficiency, Arm claims notable power savings of approximately 40% compared to the previous generation. Don't expect to see too many CPU vendors take advantage of that, since the primary job of the X-series is to run fast, but it goes to show what the X4 can accomplish in conjunction with the latest fab nodes.

Arm Cortex-X4: Front End Reshuffle, Redesigning Instruction Fetching

In terms of architecture, the Cortex-X4 exhibits similarities to its predecessor, the Cortex-X3, with the primary focus being on refining the existing architecture and optimizing efficiency across various core components.

Now while things haven't changed all too much architecturally from the Cortex-X3 to the Cortex-X4, the Cortex-X4's front end has had a reshuffle and a tweak of the instruction fetching block. Arm's aim has been to keep latencies low while offering peak bandwidth throughout its Cortex-X4 core and within the entire TCS23 core cluster.

With regards to the Cortex-X4's front end, the big architectural change here has been to its dispatch width. The Cortex-X4 now has a 10-wide dispatch width, up from the 6/8-wide dispatch width of the X3. That said, while the front end has gotten wider, the effective pipeline length has actually shrunk ever so slightly; the branch mispredict penalty is down from 11 cycles to 10.
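
To put the shorter pipeline in perspective, here is a back-of-the-envelope model of what the one-cycle reduction in mispredict penalty is worth. The branch density and misprediction rate below are assumptions for illustration only, not Arm's figures; real values are heavily workload-dependent.

```c
/* Rough model of branch mispredict cost per instruction.
 * Assumptions (not Arm's figures): one branch every 5 instructions,
 * 4% of branches mispredicted. Penalty: 11 cycles (X3) vs 10 (X4). */
#include <stdio.h>

int main(void)
{
    const double branches_per_insn = 1.0 / 5.0; /* assumed branch density */
    const double mispredict_rate   = 0.04;      /* assumed, workload-dependent */

    for (int penalty = 11; penalty >= 10; penalty--) {
        double stall_per_insn = branches_per_insn * mispredict_rate * penalty;
        printf("penalty %2d cycles -> %.4f stall cycles per instruction\n",
               penalty, stall_per_insn);
    }
    return 0;
}
```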

The other big front-end focus has been on the instruction fetching process itself; Arm has essentially redesigned the entire instruction fetch delivery system to ensure better efficiency throughout the pipeline when compared to the Cortex-X3.

The latest architecture also takes another pass on improving Arm's branch prediction units, further improving their prediction accuracy. Arm isn't saying much about how they accomplished this, though we do know that they've targeted conditional branch accuracy in particular. None of this comes for free, though; Arm was quick to note that the improved predictors were more expensive to implement. Still, Arm believes this is worth it in keeping the beast (Cortex-X4) fed, so to speak.
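
The classic way to see why conditional branch accuracy matters is a data-dependent branch. In the generic sketch below (my own illustration, not an Arm benchmark), the same loop runs over the same data twice: when the data is unsorted, the branch is close to a coin flip and any predictor struggles; after sorting, it becomes almost perfectly predictable and mispredict recovery largely disappears.

```c
/* Data-dependent branch: nearly free on sorted input, expensive on
 * shuffled input. Generic illustration, not an Arm benchmark. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp(const void *a, const void *b)
{
    return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}

static long long sum_big(const int *data)
{
    long long sum = 0;
    for (int i = 0; i < N; i++)
        if (data[i] >= 128)     /* the data-dependent conditional branch */
            sum += data[i];
    return sum;
}

int main(void)
{
    static int data[N];
    srand(1);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    clock_t t0 = clock();
    long long a = sum_big(data);            /* unsorted: branch is ~50/50 */
    clock_t t1 = clock();

    qsort(data, N, sizeof(int), cmp);
    clock_t t2 = clock();
    long long b = sum_big(data);            /* sorted: branch is highly predictable */
    clock_t t3 = clock();

    printf("unsorted: %lld in %ld ticks, sorted: %lld in %ld ticks\n",
           a, (long)(t1 - t0), b, (long)(t3 - t2));
    return 0;
}
```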

Shifting to the back end of the CPU core, Arm has focused on execution bandwidth. Among other changes, Arm has increased the number of ALUs from 6 to 8. Of these, six are simple ALUs for processing single-cycle uOPs, while two are complex ALUs for processing dual- and multi-cycle instructions. Arm has also squeezed in another branch unit, giving the Cortex-X4 a total of 3, up from 2, as well as adding an extra integer MAC. Meanwhile, on the floating point side of matters, the Cortex-X4 also picks up a pipelined FP divider.
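
A wider back end only pays off when there are enough independent operations in flight to feed it. A common way that independence shows up in source code is when a single dependency chain is broken into several, as in the hedged sketch below; with eight ALUs and the extra MAC, the X4 has more room to run such streams side by side. (Illustrative only; a real compiler may perform this transformation itself.)

```c
/* Exposing instruction-level parallelism to a wide back end.
 * parallel_sum() splits one serial dependency chain into four
 * independent ones, giving the out-of-order scheduler more work
 * it can issue in the same cycle. */
#include <stddef.h>

long serial_sum(const long *v, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];              /* every add depends on the previous one */
    return s;
}

long parallel_sum(const long *v, size_t n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += v[i + 0];         /* four independent chains can occupy */
        s1 += v[i + 1];         /* several ALUs in the same cycle     */
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < n; i++)
        s0 += v[i];
    return s0 + s1 + s2 + s3;
}
```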

So to some extent, the X4's performance improvements come from a brute force increase throughout the chip, with the core able to dispatch and retire more instructions in a single clock. The goal for the Cortex-X4 is to offer peak performance on both benchmarks and real-world workloads, as well as increased fetch bandwidth for whatever instruction stream is going through the pipeline. The benefits come through latency reductions and instruction fusion for workloads with larger instruction footprints.

Increasing the Micro-op Commit Queue (MCQ) capacity – and thus the size of the window for instruction re-ordering – is another refinement in Arm's toolbox for Cortex-X4. As with previous increases in Arm's re-order buffers, the larger queue affords more opportunities to look for instruction re-ordering, to hide memory stalls and otherwise extract more opportunities for the rest of the CPU back-end to get some work done. And with CPU performance continuing to outpace memory bandwidth, the need for larger buffers only grows with each generation.
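
One way to picture what a bigger re-ordering window buys is memory-level parallelism: if the core can look far enough ahead to find several independent cache misses, it can overlap them rather than serializing them. The sketch below is a generic illustration of that idea; a single pointer-chase is the worst case, because each load depends on the previous one, while interleaving two independent chains gives the out-of-order machinery something to overlap.

```c
/* Memory-level parallelism and the re-order window (illustrative).
 * chase_one() is a single dependent chain: each load must complete
 * before the next address is known, so misses serialize.
 * chase_two() interleaves two independent chains, letting a large
 * out-of-order window keep two misses in flight at once. */
#include <stddef.h>

struct node { struct node *next; };

size_t chase_one(const struct node *p, size_t steps)
{
    size_t hops = 0;
    while (p && hops < steps) {
        p = p->next;            /* next address depends on this load */
        hops++;
    }
    return hops;
}

size_t chase_two(const struct node *a, const struct node *b, size_t steps)
{
    size_t hops = 0;
    while (a && b && hops < steps) {
        a = a->next;            /* the two chains are independent, so their  */
        b = b->next;            /* cache misses can overlap in a deep window */
        hops += 2;
    }
    return hops;
}
```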

Finally, at the far back end of the X4 CPU core, Arm has added a fourth address generation unit. Interestingly, this one is just for stores; Arm already had a load-only unit, but opted for a store-only unit rather than converting it to a full mixed LS unit.
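
The benefit of a dedicated store AGU shows up in store-heavy inner loops, where address generation for stores would otherwise compete with loads for the shared load/store pipes. A simple copy-and-scale loop is the textbook case: every iteration needs both a load address and a store address generated each cycle to sustain full throughput. (Generic C sketch; how addresses actually map onto the X4's AGUs is up to the compiler and the core.)

```c
/* A store-heavy loop: every iteration issues one load and one store,
 * so both a load address and a store address must be generated each
 * cycle to keep the loop running at full speed. Generic illustration. */
#include <stddef.h>

void scale_copy(float *dst, const float *src, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;    /* load from src[i], store to dst[i] */
}
```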

The L1 cache subsystem of the Cortex-X4 has also received a lot of work. The L1's translation lookaside buffer (TLB) has been doubled to 96 entries, and there's a new L1 temporal data prefetcher. Finally, Arm has taken steps to reduce the number of L1 data bank conflicts on the X4.
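
How much the larger data TLB and the new temporal prefetcher help depends heavily on the access pattern. As a rough illustration (assuming 4 KB pages; this is my assumption, not a statement about Arm's implementation), walking a matrix column-by-column touches a different page on nearly every access and defeats simple prefetching, while a row-by-row walk stays within a page for long stretches:

```c
/* Access pattern vs. TLB reach and prefetching (illustrative).
 * Assuming 4 KB pages, each row of this matrix spans exactly one page.
 * sum_rows() touches pages sequentially and prefetches well;
 * sum_cols() touches a different page on every access, stressing the
 * data TLB and the prefetchers far more. */
#include <stdio.h>

#define ROWS 4096
#define COLS 1024               /* 1024 ints * 4 bytes = 4 KB per row */

static int m[ROWS][COLS];

long sum_rows(void)
{
    long s = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            s += m[r][c];       /* sequential: one page per row */
    return s;
}

long sum_cols(void)
{
    long s = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            s += m[r][c];       /* strided: new page on every access */
    return s;
}

int main(void)
{
    printf("%ld %ld\n", sum_rows(), sum_cols());
    return 0;
}
```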

There have also been some changes made to better support the larger L2 cache size of the Cortex-X4 that we previously discussed. The L2 has been moved physically closer to the CPU core for performance reasons, and Arm has been able to expand the L2 size without any resulting increase in latency. So there is less of a trade off here than is often the case for increasing cache sizes.
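
Claims like "larger L2 without added latency" are the sort of thing a simple pointer-chase test can verify once silicon ships: working sets that now fit in the 2 MB L2 should show the same per-hop latency as the old 1 MB cache did, rather than the slowdown a bigger cache often brings. A minimal sketch follows; the buffer sizes and timing method are my own choices, not Arm's methodology.

```c
/* Minimal pointer-chase latency sketch. Builds a randomized circular
 * chain over a buffer of the given size and measures time per hop. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(size_t);
    size_t *order = malloc(n * sizeof(size_t));
    size_t *next  = malloc(n * sizeof(size_t));

    /* shuffled ring so hardware prefetchers can't simply stream ahead */
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        next[order[i]] = order[(i + 1) % n];

    size_t p = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < hops; i++)
        p = next[p];            /* each load depends on the previous one */
    clock_t t1 = clock();

    double ns_per_hop = (double)(t1 - t0) / CLOCKS_PER_SEC / hops * 1e9;
    free(order); free(next);
    return ns_per_hop + (p == (size_t)-1);  /* keep p live */
}

int main(void)
{
    for (size_t kb = 32; kb <= 8192; kb *= 2)
        printf("%6zu KB: %.2f ns per hop\n", kb, chase(kb * 1024, 1u << 24));
    return 0;
}
```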

Cortex-X4: IPC Uplift, Scalable Up to 14 Cores, Up to 32 MB L3 Cache

One of the primary benefits of Arm's v9.2 architecture shift is that it offers increased scalability. The TCS23 core cluster now supports up to 14 cores, which adds a level of flexibility for SoC vendors to implement into their latest designs. Perhaps one of the biggest changes is support for up to 32 MB of shared L3 cache within the TCS23 core cluster. The amount of L3 cache implemented is of course down to the SoC manufacturer, but the maximum on offer is 32 MB, which allows increased support for higher-end mobile devices such as tablets and notebooks, where applicable.

The maximum number of cores across the entire TCS23 core cluster stretches to 14 in total, with a mixture of big and little cores, giving SoC vendors multiple avenues to explore for performance and efficiency gains. All of this flexibility is left to the SoC vendors to design their own variations depending on the level of the device. So a flagship mobile device will leverage different combinations of Cortex-X4, Cortex-A720, and Cortex-A520 cores depending on multiple factors such as cost, power budget, and expected performance levels.
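
To make that flexibility concrete, here is a hedged sketch of the kind of core mixes a vendor could build within the 14-core and 32 MB limits. The specific combinations below are hypothetical examples of my own, not announced products, and the real design constraints (DSU interconnect ports, cache slices, power rails) go well beyond a simple core count.

```c
/* Hypothetical cluster configurations within the 14-core, 32 MB L3 limits.
 * These specific mixes are illustrative, not announced products. */
#include <stdio.h>

struct cluster_config {
    const char *tier;
    int x4_cores;       /* Cortex-X4   (prime)  */
    int a720_cores;     /* Cortex-A720 (middle) */
    int a520_cores;     /* Cortex-A520 (little) */
    int l3_mb;          /* shared L3, up to 32 MB */
};

int main(void)
{
    const struct cluster_config configs[] = {
        { "flagship phone", 1, 5, 2, 16 },  /* hypothetical */
        { "premium tablet", 2, 6, 2, 24 },  /* hypothetical */
        { "thin notebook",  4, 6, 4, 32 },  /* hypothetical, 14-core max */
    };

    for (size_t i = 0; i < sizeof(configs) / sizeof(configs[0]); i++) {
        const struct cluster_config *c = &configs[i];
        int total = c->x4_cores + c->a720_cores + c->a520_cores;
        printf("%-16s %dx X4 + %dx A720 + %dx A520 = %2d cores, %d MB L3\n",
               c->tier, c->x4_cores, c->a720_cores, c->a520_cores,
               total, c->l3_mb);
    }
    return 0;
}
```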

A bigger core and optimizations of existing processes typically come with a performance benefit. Arm is claiming that, based on its pre-silicon simulation numbers, the Cortex-X4 will deliver a 15% IPC uplift at iso-frequency and iso-bandwidth versus the Cortex-X3 used in last year's flagship Android SoCs. There are a number of factors at play in delivering that total performance improvement, including the front-end optimizations and improvements, the larger 2 MB per-core L2 cache, and the larger L1 data TLB, which caches recently used page translations.
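
Since single-threaded performance is roughly IPC multiplied by clock frequency, the 15% iso-frequency claim can be combined with whatever clock a given SoC actually reaches. A quick hedged calculation follows; the X3 baseline clock used here is an assumption for illustration, as shipping clocks vary by SoC.

```c
/* Rough single-thread performance model: perf ~ IPC * frequency.
 * The 15% IPC uplift is Arm's claim; the clocks are assumptions for
 * illustration (3.4 GHz is Arm's anticipated X4 target, the X3
 * baseline varies by SoC). */
#include <stdio.h>

int main(void)
{
    const double ipc_uplift = 1.15;   /* Arm's claimed X4-vs-X3 IPC gain   */
    const double x3_clock   = 3.2;    /* GHz, assumed X3 baseline          */
    const double x4_clock   = 3.4;    /* GHz, Arm's anticipated X4 target  */

    double iso_freq_gain = ipc_uplift;                         /* IPC only   */
    double total_gain    = ipc_uplift * (x4_clock / x3_clock); /* IPC + clock */

    printf("iso-frequency gain: %.0f%%\n", (iso_freq_gain - 1.0) * 100.0);
    printf("with clock uplift:  %.0f%%\n", (total_gain - 1.0) * 100.0);
    return 0;
}
```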

52 Comments

  • tipoo - Sunday, May 28, 2023 - link

    6 years after iOS went 64 bit only. I'm guessing the cores have also been 64 bit only there for a while?
  • goatfajitas - Monday, May 29, 2023 - link

iOS is an OS from one company that is made for a few specific products from that one company. You can't evenly compare an open platform to a narrow closed market like that.
  • iAPX - Monday, May 29, 2023 - link

Yes, Apple SoCs seem to have been 64-bit only for years; that simplifies their own design and gives more efficiency.

As the 64-bit ARM ISA has nothing in common with the 32-bit ARM ISA, contrary to x86 and AMD64, they basically started with a blank page, profiting from the experience of various preceding 64-bit ISAs. I feel it was the right way to go.
  • dotjaz - Monday, May 29, 2023 - link

    No it's not, Apple is not allowed to modify ARM ISA. If it's ARMv8 compliant, it CANNOT possibly be 64bit only.
  • Doug_S - Monday, May 29, 2023 - link

    ARMv8 makes execution of AArch32 optional. Apple may have been responsible for that as they were involved in the spec of ARMv8 and AArch64 - they would have known they'd want to drop 32 bit code as soon as it was practical.
  • dotjaz - Tuesday, May 30, 2023 - link

    That's factually UNTRUE, Aarch32 execution is mandatory in **hardware implementation**, Aarch64 **OS** can choose not to execute Aarch32 codes
  • Doug_S - Tuesday, May 30, 2023 - link

    Sorry but you are wrong, ARMv8 specifically makes support for AArch32 optional for hardware implementations.
  • Jaybird99 - Monday, May 29, 2023 - link

    Apple is a founding partner with an architectural license. They can change anything they wish on the CPU design, then have it fabricated. I thought this was known because of the wildly different core design from Apple. They take the ISA they pick and choose and add/delete what they need. They actually help ARM in the long run as seeing how Apple uses 64bit and finds solutions to their issues, because as stated above 64bit was blank slate for ARM. I'm very fairly certain of this, but if you know something I don't? (I might not..)
  • Doug_S - Monday, May 29, 2023 - link

    An architectural license allows them to implement the ISA, but they can't delete things from it. They are able to add things to it (i.e. TSO, their AMX instructions, etc.) but it still has to pass ARM's conformance tests to show it is capable of running ARM code.

    They were able to "delete" AArch32 because ARMv8 allows that. ARMv9 goes further and makes AArch32 a special license addition or something like that - basically Aarch32 is deprecated with ARMv9 and will probably go away entirely with ARMv10.
  • dotjaz - Tuesday, May 30, 2023 - link

    No, they were not able to "delete" AArch32. They can disallow AArch32 codes execution in their OS just like Google Pixel 7-series, they cannot remove the support from hardware.

    And Apple did not add anything to ARM ISA. AMX is masked as a co-processor only available through frameworks, it doesn't directly execute any code other than a "firmware".

    TSO is not an instruction. It's a **mode**. It pertains to HOW the CPU reorders L/S queue. It has nothing to do with the ISA.
