CPU Benchmark Performance: Power And Office

Our previous sets of ‘office’ benchmarks have often been a mix of science and synthetics, so this time we wanted to keep our office section purely focused on real-world performance. We've also incorporated our power testing into this section.

The biggest update to our office-focused tests for 2023 and beyond is UL's Procyon suite, the successor to PCMark. Procyon benchmarks office performance using Microsoft Office applications, photo editing using Adobe's Photoshop and Lightroom, and video editing using Adobe Premiere Pro. Due to issues between UL Procyon and the video editing test, we haven't been able to run it properly, but once we identify a fix with UL, we will re-test each chip.

We are using DDR5 memory on the Ryzen 9 7950X3D and the other Ryzen 7000 series chips we've tested, as well as Intel's 13th and 12th Gen processors. We tested these platforms with the following settings:

  • DDR5-5600B CL46 - Intel 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800 (B) CL40 - Intel 12th Gen

All other CPUs, such as the Ryzen 5000 and 3000 series, were tested at the relevant JEDEC DDR4 settings per each processor's individual memory support.

Power

The nature of reporting processor power consumption has become, in part, a bit of a nightmare. Historically, the peak power consumption of a processor, as purchased, has been given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, the TDP still signifies peak power consumption. But for the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high-performance processors implement a feature called Turbo. This allows, usually for a limited time, a processor to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.
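The interplay between PL1, PL2, and the turbo window described above can be sketched in a few lines of Python. This is an illustrative model only: the limit values and the moving-average behavior are assumptions based on Intel's publicly described scheme, not measurements from any chip tested here.

```python
# Illustrative model of an Intel-style turbo power budget.
# PL2 caps burst package power, while a moving average of power
# over the Tau window is held at or below PL1 (the rated TDP).
# The 125 W / 253 W figures below are hypothetical, not any SKU's spec.

def allowed_power(ewma_watts, pl1=125.0, pl2=253.0):
    """Return the package power cap for the next interval."""
    # While the moving average is under PL1, the chip may burst to PL2;
    # once the average reaches PL1, sustained power is clamped to PL1.
    return pl2 if ewma_watts < pl1 else pl1

def update_ewma(ewma_watts, drawn_watts, dt=0.1, tau=56.0):
    """Fold the latest power sample into the moving average."""
    alpha = dt / tau
    return (1 - alpha) * ewma_watts + alpha * drawn_watts

# Simulate a sustained all-core load starting from idle.
ewma, history = 0.0, []
for _ in range(3000):          # 300 s of load at 0.1 s steps
    cap = allowed_power(ewma)
    ewma = update_ewma(ewma, cap)
    history.append(cap)

print(history[0], history[-1])  # bursts at PL2 early, settles to PL1
```

Running the loop shows the behavior the article describes: the chip draws well above its rated TDP at first, then falls back to PL1 once the turbo budget is exhausted.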

AMD and Intel have different definitions for TDP that are, broadly speaking, applied in the same way. The difference comes from turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we've got a few articles worth reading on the topic.

(0-0) Peak Power

Given that the Ryzen 9 7950X3D has lower TDP and PPT ratings than the Ryzen 9 7950X, it pulls less power. We observed a peak power draw of 144.53 W on the 7950X3D, compared to 221.87 W on the 7950X. In other words, the 7950X3D is pulling around 65% of the power of the 7950X, which is understandable given the power limitations on the CCD carrying AMD's 3D V-Cache packaging.

Looking at the power consumption of the Ryzen 9 7950X3D in closer detail, it delivered a consistent load of between 140 and 144 W in our Prime95 sustained power test. This is around 18 W below the Package Power Tracking (PPT) limit AMD has set at 162 W, though higher than the 120 W TDP, which is to be expected. The TDP and PPT ratings differ because the TDP is the base power the CPU should be drawing, while the PPT, set at 162 W, is the maximum power the socket will deliver to the processor under full load.
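As a quick sanity check on the figures above: on AMD's desktop platforms the PPT limit is defined as 1.35x the rated TDP, which is exactly where the 162 W figure comes from. A minimal arithmetic sketch, using the measured numbers from this page:

```python
# The TDP/PPT relationship can be verified with a bit of arithmetic.
# On AMD's desktop sockets, PPT is set at 1.35x the rated TDP.

tdp_7950x3d = 120.0           # rated base power (W)
ppt_7950x3d = tdp_7950x3d * 1.35
print(ppt_7950x3d)            # 162.0 W socket limit

peak_7950x3d = 144.53         # measured peak package power (W)
peak_7950x = 221.87           # measured peak on the regular 7950X (W)

headroom = ppt_7950x3d - peak_7950x3d
ratio = peak_7950x3d / peak_7950x
print(round(headroom, 2))     # headroom left under the PPT ceiling (~17.5 W)
print(f"{ratio:.0%}")         # ~65% of the 7950X's peak draw
```

The same 1.35x relationship holds for the regular 7950X (170 W TDP, 230 W PPT), which is why the non-X3D part is free to draw so much more at the socket.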

Office/Web

(1-2) UL Procyon Office: Word

(1-3) UL Procyon Office: Excel

(1-4) UL Procyon Office: PowerPoint

(1-5) UL Procyon Office: Outlook

(1-6) UL Procyon Photo Editing: Image Retouching

(1-7) UL Procyon Photo Editing: Batch Processing

(1-8) Kraken 1.1 Web Test

In our office-based testing, the Ryzen 9 7950X3D performs a little worse than the 7950X, but this is to be expected given the differences in TDP, PPT, and the overall power envelope. Still, the 7950X3D performs well and is more than suitable for office and web-based tasks.

122 Comments

  • Kuhar - Tuesday, February 28, 2023 - link

    @Dizoja86 nice comment :) I was waiting for one like it!
  • mikato - Tuesday, February 28, 2023 - link

    LOL, fully agree. Is that for real?
  • Gastec - Wednesday, March 1, 2023 - link

    If they would give us the very best of their tech at an acceptable price then the consumers would not upgrade every X short interval of time, they would not consume as much as desired, and profits would dwindle. The spice must flow.
  • escksu - Wednesday, March 1, 2023 - link

    I don't see that AMD intentionally crippled the 7950X3D.

    AMD has to balance price vs. market demand and diminishing returns. For end users, the main benefit of the extra cache is mostly just gaming. Dumping in more cache doesn't necessarily mean games will run faster... then we have to look at thermals as well.

    Also, even though you are willing to pay $1000, it doesn't mean everyone will. AMD doesn't make decisions based on what a few individuals want.
  • Samus - Tuesday, February 28, 2023 - link

    No matter how you look at this, improperly optimized or not, it's impressive AMD pulled this off with reliable performance gains. The optimizations are all in software/drivers at this point, which is strange to say for a CPU that hasn't had any 'architectural' modification, unless one considers L3 cache part of the 'architecture.'

    But I think we all agree this is going to be nuts when they stack V-Cache on both CCDs, presumably using a cheaper manufacturing process to possibly keep costs identical or lower.

    The trick now is going to be getting devs to optimize for large L3 caches. That shouldn't be hard given AMD's presence in the console market, possibly pitching this as a next-gen console design win, but at the same time they have a fraction of Intel's marketshare, and traditionally devs have slept with Intel while AMD is on the sofa.
  • Blueskull - Thursday, March 23, 2023 - link

    AMD knows exactly how to add X3D to all CCDs, just like the existing 7773X and the upcoming 9004X series. The problem is, however, that those server chips are NOT designed for gaming, and don't have crazy high clock rates. For gaming, you need high clock on at least some cores, and adding X3D severely limits clock speed. This is the balance they try to strike.

    The 50 W lower TDP is the penalty you pay for adding X3D to one CCD, and if you add X3D to both, you will have to pay a 100 W TDP penalty. That makes the total power budget a mere 70 W, which is probably not good for single-core speed.

    Though 35 W/CCD is kinda low, if you put 12 of those together it still has a total TDP north of 400 W; that's how Genoa-X gets its performance. On a desktop platform for gaming? This could be really bad.

    That being said, 35W-rated laptop Ryzens can still sustain 3.xGHz, so that is not low by any standard. Heck, my laptop runs at less than 2.5GHz sustained, and I never had a problem running any productivity tools, though clearly I do not game. This lower clock impacts server applications by very little, so the "all X3D" method works for this market.

    Getting higher TDP while having X3D is kinda difficult, and I don't think they are getting there any time soon. The problem is the thickness of that extra stacked chip, and that thickness translates to thermal resistance. Moreover, the thermal gradient in the X3D die causes it to expand more near the CCD and less near the IHS, giving it a trapezoidal deformation. This applies a tremendous amount of stress to the hard material (on the order of hundreds of MPa to a few GPa), causing both the CCD and the X3D die to fail prematurely.

    So, mother nature doesn't seem to like the concept of X3D when single core performance is important, and AMD must figure out a way to solve this.
  • DanNeely - Monday, February 27, 2023 - link

    Buying a mid-range CPU and putting the savings into the GPU has been the most short term bang for the buck in a gaming system for years. The main risk if you're someone who keeps the base system and swaps in faster GPUs every other year or so is that the mid-range CPU might not age well.

    A fairly recent example is that while fine at the time, Intel's 4-core/4-thread i5 processors ended up becoming CPU bottlenecks several years before the 4-core/8-thread i7s from their generation did.
  • CaptRiker - Tuesday, February 28, 2023 - link

    still running a system from 2015

    intel hedt 5960x (8c/16t) oc'd to 4ghz
    32gig pc-2166 memory
    asus rampage V extreme X99 mobo
    thermalake 1200watt ps
    all 7 yrs old.. was running dual 980 ti's in sli when I put it together
    then switched out to a 2080 ti 4 years ago.. then last month 2080 ti died so now
    I'm sporting a brand new gigabyte aero oc 4080
    running pretty well atm (win 10 pro w/all latest patches and feature updates)
    I did recently mothball my original Intel 750 1.2tb nvme ssd card (was very close to max endurance). put in a 2tb Kingston fury nvme ssd (running in 3.0 mode since mobo only has single 3.0 m.2 slot)
  • Gastec - Wednesday, March 22, 2023 - link

    And why do we need to know all that? That you've bought expensive video cards, that you are "sporting"?
  • Kangal - Tuesday, February 28, 2023 - link

    Yes, but that only applies to Intel.
    AMD systems are built with the future in mind.

    For instance, if you hopped on AM4 with a low budget and only built an r3-1400 and threw the rest at a GTX 1060 you got a good deal. Better than an r5-1600 and GTX 1050. And you can upgrade both the CPU and GPU without having to change other components like the Case, Motherboard, Cooler, PSU, etc etc.

    For gaming, there aren't many gains to be had from upgrading the CPU frequently, unlike with the GPU. For that reason, it doesn't make sense to upgrade to Zen+ or regular Zen 3. So I'd envision going from the R3 1400 to the R5 3600, then to the R7 5800X3D. Meanwhile the GPU upgrades can go to a GTX 1660 Super, or RTX 2060 Super, then RTX 3060, and lastly an RTX 4060 Ti. That's the more value-oriented way of gaming and upgrading from 2017 to 2021, a 5-year period akin to a console generation.
