Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically, the peak power consumption of a processor, as purchased, is given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high-performance processors implement a feature called Turbo. This allows, usually for a limited time, a processor to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard-coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power draw up to 2.5x the rated TDP.

AMD and Intel have different definitions for TDP, but broadly speaking they are applied the same way. The differences come down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we've got a few articles worth reading on the topic.
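
To give a feel for how these limits interact, below is a minimal sketch of the commonly described PL1/PL2/Tau behaviour: the package may burst up to PL2 while an exponentially weighted moving average of its power draw stays under PL1, after which it is clamped back down to PL1. This is not Intel's exact firmware algorithm, and the limit values and time constant used here are illustrative assumptions, not those of any specific CPU.

```python
# A minimal sketch, assuming the commonly described PL1/PL2/Tau scheme: the
# package may draw up to PL2 while an exponentially weighted moving average
# of granted power stays below PL1, then gets clamped back to PL1.
# All values here are illustrative, not those of any specific CPU.

PL1 = 65.0    # sustained power limit in watts (the 'TDP')
PL2 = 135.0   # short-duration turbo power limit in watts
TAU = 28.0    # time constant of the moving average, in seconds
DT = 0.1      # simulation time step in seconds

def simulate(demand_watts):
    """Yield the power granted each step for a sequence of demanded power values."""
    avg = 0.0                 # exponentially weighted moving average of granted power
    alpha = DT / TAU
    for demand in demand_watts:
        # Bursts up to PL2 are allowed while the average is under PL1;
        # once the average reaches PL1, the grant is clamped to PL1.
        limit = PL2 if avg < PL1 else PL1
        granted = min(demand, limit)
        avg += alpha * (granted - avg)
        yield granted

# Example: a heavy all-core load demanding 150 W turbos at ~135 W at first,
# then settles to ~65 W once the moving-average budget is exhausted.
trace = list(simulate([150.0] * 600))
print(round(trace[0], 1), round(trace[-1], 1))   # 135.0 65.0
```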

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020 by refusing to define a TDP rating for its new processors, instead going for a range. Obfuscation like this is a frustrating endeavor for press and end-users alike.

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include full AVX2/AVX512 workflows (depending on what the processor supports), real-world image-model construction, and others as appropriate. These tests are done as comparative models. We also note the peak power recorded in any of our tests.
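
For readers who want a rough view of package power on their own systems, the sketch below samples the Linux intel_rapl powercap energy counter and converts the per-interval deltas to watts. This is purely illustrative and is not the tooling behind the numbers in this review; the sysfs path is an assumption that holds on typical Intel systems with the intel_rapl driver loaded, and reading it usually requires root.

```python
# A minimal sketch, not the tooling used for this review: sample package power
# on Linux via the intel_rapl powercap energy counter (microjoules) and convert
# the per-interval deltas to watts. The sysfs path is an assumption that holds
# on typical Intel systems with the intel_rapl driver; reading it may need root.
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 energy counter

def read_energy_uj():
    with open(ENERGY_FILE) as f:
        return int(f.read().strip())

def sample_power(interval_s=1.0, samples=10):
    """Print the average package power over each interval, in watts."""
    last = read_energy_uj()
    for _ in range(samples):
        time.sleep(interval_s)
        now = read_energy_uj()
        delta_uj = now - last
        last = now
        # The counter wraps around periodically; skip the sample if it did.
        if delta_uj >= 0:
            print(f"{delta_uj / 1e6 / interval_s:.1f} W")

if __name__ == "__main__":
    sample_power()
```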

Peak Power

In peak power, the Core i7-5775C sticks to the 65 W value, whereas the Core i5 variant stays below its TDP value. This puts it beyond the 22 nm Core i7-4790S, which is also a 65 W part.

In real-world tests, first up is our image-model construction workload, using our Agisoft Photoscan benchmark. This test has a number of different areas that involve single-threaded, multi-threaded, or memory-limited algorithms.

For Photoscan, the Core i7 spends its 'real world' time around 60 W, but does momentarily spike above that 60 W mark. The Core i5, by comparison, doesn't even touch 50 W.

The second test is from y-Cruncher, which is our AVX2/AVX512 workload. This also has some memory requirements, which can lead to periodic power cycling on systems with lower memory bandwidth per core.

We're seeing some slight variation in power as the y-Cruncher algorithm shifts from compute to DRAM transfers; however, both processors seem to be hitting either their power limits or simply a natural peak power consumption.

120 Comments

  • dotjaz - Saturday, November 7, 2020 - link

    *serves
  • Samus - Monday, November 9, 2020 - link

    That's not true. There were numerous requests from OEMs for Intel to make iGPU-enabled Xeons for the specific purpose of QuickSync, so there are indeed various applications other than ML where an iGPU in a server environment is desirable.
  • erikvanvelzen - Saturday, November 7, 2020 - link

    Ever since the Pentium 4 Extreme Edition I've wondered why Intel does not permanently offer a top product with a large L3 or L4 cache.
  • plonk420 - Monday, November 2, 2020 - link

    been waiting for this to happen ...since the Fury/Fury X. would gladly pay the $230ish they want for a 6 core Zen 2 APU but even with "just" 4c8t + Vega 8 (but preferably 11) + HBM(2)
  • ichaya - Monday, November 2, 2020 - link

    With the RDNA2 Infinity Cache announcement and the ~2x increase in effective bandwidth from it, and knowing Zen has always done better with more memory bandwidth, it's just dead obvious now that an L4 cache on the I/O die would increase performance (especially in workloads like gaming) more than its power cost.

    I really should have said waiting since Zen 2, since that was when the I/O die was introduced, but I'll settle for eDRAM or SRAM L4 on the I/O die as that would be easier than a CCX with HBM2 as cache. Some HBM2 APUs would be nice though.
  • throAU - Monday, November 2, 2020 - link

    I think very soon for consumer-focused parts, on-package HBM won't necessarily be cache, but main memory. End users don't need massive amounts of RAM in end-user devices, especially as more workload moves to the cloud.

    8 GB of HBM would be enough for the majority of end user devices for some time to come and using only HBM instead of some multi-level caching architecture would be simpler - and much smaller.
  • Spunjji - Monday, November 2, 2020 - link

    Really liking the level of detail from this new format! Fascinated to see how the Broadwell secret sauce has stood up to the test of time, too.

    Hopefully the new gaming CPU benchmarks will finally put most of the benchmark bitching to bed - for sure it goes to show (at quite some length) that the ranking under artificially CPU-limited scenarios doesn't really correspond to the ranking in a realistic scenario, where the CPU is one constraint amongst many.

    Good work all-round 👍👍
  • lemurbutton - Monday, November 2, 2020 - link

    Anandtech: We're going to review a product from 2015 but we're not going to review the RTX 3080, RTX 3090, nor the RTX 3070.

    If I were management, I'd fire every one of the editors.
  • e36Jeff - Monday, November 2, 2020 - link

    The guy who tests GPUs was affected by the Cali wildfires. Ian wouldn't be writing a GPU review regardless; he does CPUs.
