CPU Benchmark Performance: Power, Office, and Science

Our previous set of ‘office’ benchmarks has often been a mix of science and synthetics, so this time we wanted to keep our office section purely focused on real-world performance.

For the remainder of the testing in this review of the Core i7-12700K and i5-12600K, we are using DDR5 memory at the following settings:

  • DDR5-4800(B) CL40

Power

(0-0) Peak Power

Comparing power draw to other competing CPUs, both the Core i7-12700K and Core i5-12600K are noticeably more power-efficient than previous Intel generations, including both 11th Gen and 10th Gen Core. At full load with no overclocking, however, AMD's Ryzen 5000 and 3000 series processors remain much more power-efficient.

Office

(1-1) Agisoft Photoscan 1.3, Complex Test

In our office benchmarks, Intel's 12th Gen Core reigns supreme in Agisoft Photoscan thanks to its higher core frequencies and IPC performance.

Science

(2-1) 3D Particle Movement v2.1 (non-AVX)

(2-2) 3D Particle Movement v2.1 (Peak AVX)

(2-3) yCruncher 0.78.9506 ST (250m Pi)

(2-4) yCruncher 0.78.9506 MT (2.5b Pi)

(2-4b) yCruncher 0.78.9506 MT (250m Pi)

(2-5) NAMD ApoA1 Simulation

(2-6) AI Benchmark 0.1.2 Total

(2-6a) AI Benchmark 0.1.2 Inference

(2-6b) AI Benchmark 0.1.2 Training

In the majority of our science-based benchmarks, both the Core i7 and Core i5 did well. The only benchmark that didn't favor the 12th Gen Core series processors was 3DPM 2.1, specifically the AVX test.
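For readers unfamiliar with the yCruncher tests above: it computes Pi to a set number of digits as fast as possible, making it a pure arithmetic-throughput workload. As a toy illustration only (yCruncher itself uses far more sophisticated algorithms and hand-tuned AVX kernels, none of which are shown here), a minimal arbitrary-precision Pi sketch using Machin's formula might look like this:

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, prec: int) -> Decimal:
    """arctan(1/x) via its Taylor series, to roughly `prec` digits."""
    getcontext().prec = prec + 10          # extra guard digits
    power = Decimal(1) / x                 # holds (1/x)^(2k+1)
    total = power
    xsq, k = x * x, 1
    eps = Decimal(10) ** -(prec + 5)
    while power > eps:
        power /= xsq
        term = power / (2 * k + 1)
        total = total - term if k % 2 else total + term
        k += 1
    return total

def pi_machin(prec: int) -> Decimal:
    """Pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    getcontext().prec = prec
    return +pi                             # round to the requested precision

print(pi_machin(30))
```

Even this naive version makes the benchmark's character clear: the runtime is dominated by long-precision multiply/divide operations, which is exactly where wide vector units (and the AVX behavior noted above) pay off.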

196 Comments


  • mode_13h - Tuesday, April 5, 2022 - link

    > Intel would collapse under the weight of its own cost structure built around those fabs

    It wouldn't have to be overnight, and obviously they'd have to rationalize some aspects of the business. However, it seems like the right thing to do, especially if there are activities they couldn't undertake without manufacturing in-house. That just screams either "inefficiency" or, more likely, "unfair advantage".

    The one thing I don't accept is that "it has to be this way, because it always was". That's almost never a good reason not to change something.
  • Mike Bruzzone - Tuesday, April 5, 2022 - link

    "That just screams either "inefficiency" or, more likely, "unfair advantage".

    I agree, both, there are many inefficiencies in enterprise and industry relations, and governance and oversight.

    "Always was", the inefficiency is being addressed for a very long time! It's just the way things have worked out over 24 years to resolve Intel inefficiencies that are not effective under democratic capitalism caught in associate network conundrums. mb
    .

  • Spunjji - Friday, April 1, 2022 - link

    "Because it quite literally cannot be both."

    It literally can given that Intel only started adding more than 4 cores after Ryzen launched and then, subsequently, had to blow their power requirements out just to keep up... and you're reaching all the way back to 'dozer - a CPU designed long after AMD had relinquished leadership - to try to bat back the valid accusation that Intel have always abused their leadership position to rinse consumers.
  • ballsystemlord - Saturday, April 2, 2022 - link

    And CUDA isn't vendor lock-in?
    GPU compute is a great idea -- and that's not just my opinion. AMD failed to deliver in a big way when it came to getting CPU/GPU sharing compute capabilities off of the ground. They're still working on it (CDNA...). But it's unlikely at this point to be available to us -- which is what I dislike.
  • mode_13h - Sunday, April 3, 2022 - link

    > AMD failed to deliver in a big way when it came to getting
    > CPU/GPU sharing compute capabilities off of the ground.

    Yeah, HSA really fizzled and even the original APU & fusion concept as some kind of synergistic processing unit went sideways.

    Then, AMD got distracted by AI and became consumed by chasing Nvidia in that market and HPC. The consumer platform has been largely neglected by them, since.
  • Khanan - Friday, April 8, 2022 - link

    Nonsense. Fusion culminated into APUs and is one of the biggest successes of AMD ever, please talk and comment less, you’re a huge wannabe.
  • mode_13h - Monday, April 11, 2022 - link

    > Fusion culminated into APUs and is one of the biggest successes of AMD ever,

    What I mean is that "Fusion" turned out to be a marketing thing. The idea of using iGPUs as a compute accelerator didn't really go anywhere.

    AMD jumped from backing OpenCL to HSA, thinking that would spur industry adoption, but it fizzled even worse than OpenCL (which has continued plodding along, in spite of loss of interest/support).

    Microsoft is even discontinuing C++ AMP.

    > please talk and comment less, you’re a huge wannabe.

    Please troll less. News comments were fine without you. I have yet to see you add anything of value. Mostly, you just seem to antagonize people, which is the very definition of trolling.
  • mode_13h - Monday, April 11, 2022 - link

    > OpenCL (which has continued plodding along, in spite of loss of interest/support).

    I meant AMD's loss of interest/support. Heck, even Nvidia has gotten on board with 3.0!
  • Kangal - Tuesday, March 29, 2022 - link

    It's hard not to agree.
    These Intel 12th-gen products are a 2022 product and should be compared with a 2022 alternative. Besides they're somewhat of a paper launch, anyway. AMD has a lot of headroom to turbo boost solo core, add more cores, increase thermal headroom, add faster memory..... without having to do major overhaul on Zen3 architecture. They have somewhat rested on their laurels with Zen3, but I suspect that Zen4 is going to be a very distinct uplift. The way the companies stack is:

    2017 Zen1 vs Intel 7th-gen
    2018 Zen+ vs Intel 8th-gen
    2019 Zen2 vs Intel 9th-gen
    2020 Zen3 vs Intel 10th-gen
    2021 Zen3. vs Intel 11th-gen
    2022 Zen4 vs Intel 12th-gen

    PS: both AMD Zen2 and Intel 10th-gen are significantly slower in single-core, multi-thread, and use much more energy than Apple M1 chips. Things look a bit more even with Zen3 and Intel 11th-gen. But the Apple M2 chips will likely "humiliate" the likes of Intel 12th-gen and AMD Zen4. But then again this is comparing Apple's to Windows, so a moot point.
  • theMillen - Wednesday, March 30, 2022 - link

    Except, 12th-gen launched in 2021. And 13th-gen will launch in 2022... soooo
