Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically, the peak power consumption of a processor, as purchased, was given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high-performance processors implement a feature called Turbo. This allows the processor, usually for a limited time, to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power draw 2.5x above the rated TDP.
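To make the PL1/PL2 interplay more concrete, below is a minimal sketch (not anything Intel or AMD publishes as code) of the commonly described budget behaviour: the package may draw up to PL2 while an exponentially weighted moving average of its power stays under PL1, and is pulled back to the sustained limit once that average catches up. The PL1/PL2/tau numbers are illustrative assumptions, not values measured in this review.

```python
# Hypothetical sketch of an Intel-style turbo power budget.
# PL1 (sustained limit, ~TDP), PL2 (short-term limit), and tau (time constant)
# are illustrative values, not figures taken from this review.

def simulate_turbo_budget(requested_power, pl1=65.0, pl2=160.0, tau=28.0, dt=0.1):
    """Return the power the package is allowed to draw at each time step.

    The processor may draw up to PL2 as long as the exponentially weighted
    moving average (EWMA) of package power stays below PL1; once the average
    reaches PL1, the draw is clamped back toward the sustained limit.
    """
    allowed = []
    ewma = 0.0
    alpha = dt / tau                          # EWMA smoothing factor per step
    for p in requested_power:
        limit = pl2 if ewma < pl1 else pl1    # budget left -> PL2, else PL1
        draw = min(p, limit)
        ewma = ewma + alpha * (draw - ewma)   # update the moving average
        allowed.append(draw)
    return allowed

# Example: an all-core load asking for 150 W holds near PL2 until the average
# catches up to PL1, then settles at the sustained 65 W limit.
trace = simulate_turbo_budget([150.0] * 600)  # 60 seconds at 0.1 s steps
```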

AMD and Intel have different definitions for TDP, but broadly speaking they are applied in the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we’ve got a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020 by refusing to define a TDP rating for its new processors, instead going for a range. Obfuscation like this is frustrating for press and end-users alike.

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include a full AVX2 workload (these processors do not support AVX-512), real-world image-model construction, and others as appropriate. These tests are run under the same conditions on every chip, so the results are directly comparable. We also note the peak power recorded in any of our tests.
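For readers who want to log package power themselves, the sketch below samples the energy counters exposed by the Linux intel-rapl powercap driver and converts them to watts. This is not the tooling used for this review (our numbers come from the processor's own reporting during the benchmark runs); it is only an illustration of one way to capture a comparable trace, assuming a Linux host with RAPL support and permission to read the counters.

```python
# Minimal sketch: sample CPU package power on Linux via the RAPL powercap
# interface. Reading these files typically requires root privileges.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"            # package domain
RAPL_MAX = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"     # wrap-around point

def read_uj(path):
    with open(path) as f:
        return int(f.read().strip())

def sample_package_power(duration_s=10.0, interval_s=0.1):
    """Yield (timestamp, watts) pairs computed from the cumulative energy counter."""
    max_range = read_uj(RAPL_MAX)
    prev_e, prev_t = read_uj(RAPL_ENERGY), time.time()
    end = prev_t + duration_s
    while prev_t < end:
        time.sleep(interval_s)
        e, t = read_uj(RAPL_ENERGY), time.time()
        delta = e - prev_e
        if delta < 0:                     # counter wrapped around
            delta += max_range
        yield t, (delta / 1e6) / (t - prev_t)   # microjoules -> watts
        prev_e, prev_t = e, t

if __name__ == "__main__":
    # Run a workload in another terminal, then log its package power here.
    for ts, watts in sample_package_power(duration_s=10.0):
        print(f"{ts:.1f}  {watts:6.1f} W")
```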

Peak Power

In peak power, the Core i7-5775C sticks to its 65 W value, whereas the Core i5 variant stays below its TDP. This puts it above the 22nm Core i7-4790S, which is also a 65 W part.

In real-world tests, first up is our image-model construction workload, using our Agisoft Photoscan benchmark. This test has a number of different stages that involve single-threaded, multi-threaded, or memory-limited algorithms.

For Photoscan, the Core i7 spends its 'real world' time at around 60 W, but does momentarily spike above that mark. The Core i5, by comparison, doesn't even touch 50 W.

The second test is y-Cruncher, our AVX2 workload on these processors. This also has some memory requirements, which can lead to periodic power cycling on systems with lower memory bandwidth per core.

We're seeing some slight variation in power as the y-Cruncher algorithm shifts from compute towards DRAM-bound phases; however, both processors seem to be hitting either their power limits or simply a natural peak power consumption.

Comments

  • Billy Tallis - Wednesday, November 4, 2020 - link

    Ian already said he tests at JEDEC speeds, which includes the latency timings. Using modules that are capable of faster timings does not prevent running them at standard timings.
  • Quantumz0d - Tuesday, November 3, 2020 - link

    Don't even bother Ian with these people.
  • Nictron - Wednesday, November 4, 2020 - link

    I appreciate the review and the context over a period of time. Having a baseline comparison is important, and it is up to us, the readers, to determine the optimal environment we would like to invest in. As soon as we do, the price starts to skyrocket and comparisons become difficult.

    Reviews like this also show that a well-thought-out ecosystem can deliver great value. Companies are here to make money, and I appreciate reviewers that provide consistent baseline testing over time for us to make informed decisions.

    Thank you and kind regards,
  • GeoffreyA - Tuesday, November 3, 2020 - link

    Thanks, Ian. I thoroughly enjoyed the article and the historical perspective especially. And the technical detail: no other site can come close.
  • eastcoast_pete - Tuesday, November 3, 2020 - link

    Ian, could you comment on the current state of the art of eDRAM? How fast can it be, and how low can the latency go? Depending on those parameters and the difficulty of manufacturing, there might be a number of uses that make sense.
    One where it could make sense is allowing Xe graphics to use cheaper and lower-power LPDDR4 or LPDDR5 RAM without taking a large performance hit vs. GDDR6. 128 or 256 MB of eDRAM cache might just do that, and still keep costs lower. Pure speculation, of course.
  • DARK_BG - Tuesday, November 3, 2020 - link

    Hi, what I'm wondering is where the 30% gap between the 5775C and 4790K in games came from, compared to your original review and all the other reviews of the 5775C out there. Since I'm on a Z97 platform with a 4.0 GHz Xeon, moving to a 4770K or 4790K doesn't make any sense given their second-hand prices, but the 5775C in this review makes a lot of sense.

    So is it the OS, the drivers, some BIOS settings, or were the systems in the older reviews just GPU-limited, failing to expose the CPU performance?
  • jpnex - Friday, January 8, 2021 - link

    Lol, no, the i7-5775C is just stronger than an i7-4790K; this is a known fact. Other benchmarks show the same thing. Old benchmarks don't show it because back then people didn't know that deactivating the iGPU would give a performance boost.
  • DARK_BG - Wednesday, July 20, 2022 - link

    I forgot to reply back then. Based on this review I sourced a 5775C (for a little less than $100; these days they go for $140-150), coupled it with an Asus Z97 Pro, and after some tweaking (CPU at 4.1 GHz, eDRAM at 2000 MHz, and some other minor stuff I've already forgotten), the difference compared to the 4.0 GHz Xeon in games was mind-blowing. Later I was able to source 32GB of Corsair Dominator DDR3-2400 CL10 just for fun, to make it a top-spec config. :)

    It is a very capable machine, but these days I'll swap it for a Ryzen 5800X3D to catch the final train on the fastest Windows 7-capable gaming system. Yeah, I know it is an old OS, but everything I need has run flawlessly for more than a decade, with only one reinstall 7 years ago due to an SSD failure. It is my only personal Intel system in the past 22 years, since for a moment it was, for the first time, the best price/performance second-hand platform; all the rest were AMD-based, and I keep them all in working condition.

    BTW, I was able to run Windows XP 64-bit on the Z97 platform. I'd just need to swap the GTX 1070 for a GTX 980/980 Ti for it to be fully functional; everything else runs like a charm under XP. I was able to hack the driver to install as a GTX 960, so I have 2D hardware acceleration under XP on the GTX 1070, since NVIDIA hasn't changed anything in regard to 2D compared to the previous generation.
  • dew111 - Tuesday, November 3, 2020 - link

    Rocket Lake should have been the Comet Lake processor with eDRAM. Instead, they'll be lucky to beat Comet Lake at all.
  • erotomania - Tuesday, November 3, 2020 - link

    Thanks, Ian. I enjoyed this article from a NUC8i7BEH that has 128MB of coffee-flavored eDRAM. Also, thanks Ganesh for the recent reminder that Bean > Frost.
