Test Bed and Setup

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, that faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users fall back on JEDEC supported speeds - this includes home users as well as industry, who might want to shave a cent or two off the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the time of the review or at a later date.

Test Setup
CPU          Intel Core i9-10980XE (Cascade Lake)
Motherboard  ASRock X299 OC Formula (BIOS P1.80)
CPU Cooler   TRUE Copper + Silverstone Fan
DRAM         Corsair Vengeance RGB 4x8 GB DDR4-2933
GPU          Sapphire RX 460 2GB (CPU Tests)
             MSI GTX 1080 Gaming 8G (Gaming Tests)
PSU          Corsair AX860i
SSD          Crucial MX500 2TB
OS           Windows 10 1909

For our motherboard, we are using the latest firmware available. I do not believe that ASRock has yet updated its BIOSes with fixes for the latest Intel security vulnerabilities, as these updates take time.

The latest AMD TR3 benchmarks were run by Gavin Bonshor while I attended Supercomputing in Denver last week. Unfortunately, both Intel and AMD decided to sample processors just before the annual trade show, with launches only a couple of days after the show finished. As a result, our testing has been split between Gavin and myself, and we have endeavoured to ensure parity through my automated testing suite.

Also, our compile test seems to have broken when we moved to Windows 10 1909, and due to travel we have not had time to debug why it is no longer working. We hope to get this test up and running in the new year, along with an updated test suite.

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Hardware Providers
Sapphire      RX 460 Nitro
MSI           GTX 1080 Gaming X OC
Crucial       MX200 + MX500 SSDs
Corsair       AX860i + AX1200i PSUs
G.Skill       RipjawsV, SniperX, FlareX
Crucial       Ballistix DDR4
Silverstone   Coolers
Silverstone   Fans
Comments

  • Thanny - Wednesday, November 27, 2019 - link

    Zen does not support AVX-512 instructions. At all.

    AVX-512 is not simply AVX-256 (AKA AVX2) scaled up.

    Something to consider is that AVX-512 forces Intel chips to run at much slower clock speeds, so if you're mixing workloads, using AVX-512 instructions could easily cause overall performance to drop. It's only in an artificial benchmark situation where it has such a huge advantage.
  • Everett F Sargent - Monday, November 25, 2019 - link

    Obviously, AMD just caught up with Intel's 256-bit AVX2; prior to Ryzen 3000, AMD only had 128-bit AVX2 execution AFAIK. It was the only reason I bought into a cheap Ryzen 3700X desktop (under $600 US, complete and prebuilt) - to get the same level of AVX support, bitwise.

    I've been using Intel's Fortran compiler since 1983 (back then it was on a DEC VAX).

    So I've only done math modeling at 64 bits, like, forever (going back to 1975), and I am very excited that AVX-512 is now under $1K US. An immediate 2X speed boost over AVX2 (at least for the stuff I'm doing now).
  • rahvin - Monday, November 25, 2019 - link

    I'd be curious how much AVX-512 is used by people. It seems to be highly tailored for big math operations, which kinda limits its practical usage to science/engineering. In addition, the power draw of the AVX-512 units was massive in the last article I read, to the point that the CPU throttled when AVX-512 was engaged for more than a few seconds.

    I'd be really curious what percentage of people buying HEDT are using it, or if it's just a niche feature for science/engineering.
  • TEAMSWITCHER - Tuesday, November 26, 2019 - link

    If you don't need AVX512 you probably don't need or even want a desktop computer. Not when you can get an 8-core/16-thread MacBook Pro. Desktops are mostly built for show and playing games. Most real work is getting done on laptops.
  • Everett F Sargent - Tuesday, November 26, 2019 - link

    LOL, that's so 2019.
    Where I am from it's smartwatches all the way down.
    Cue Four Yorkshiremen.
  • AIV - Tuesday, November 26, 2019 - link

    Video processing and image processing can also benefit from AVX-512. Many AI algorithms can benefit from AVX-512. The problem for Intel is that in many cases where AVX-512 gives a good speedup, a GPU would be an even better choice. Also, software support for AVX-512 is lacking.
  • Everett F Sargent - Tuesday, November 26, 2019 - link

    Not so!
    https://software.intel.com/en-us/parallel-studio-x...
    It compiles and runs on both Intel and AMD. Full AVX-512 support on AVX-512 hardware.
    You have to go full Volta to get true FP64; otherwise, desktop GPUs are real FP64 dogs!
  • AIV - Wednesday, November 27, 2019 - link

    There are tools and compilers for software developers, but not much end-user software actually uses them. FP64 is mostly required only in the science/engineering category. Image/video/AI processing is usually just fine with lower precision. I'd also add that GPUs only have a small amount of RAM (<=32 GB), while Intel/AMD CPUs can have hundreds of GB or more. Some datasets do not fit into a GPU. AVX-512 still has its niche, but it's getting smaller.
  • thetrashcanisfull - Monday, November 25, 2019 - link

    I asked about this a couple of months ago. Apparently the 3DPM2 code uses a lot of 64-bit integer multiplies; the AVX2 instruction set doesn't include packed 64-bit integer multiply instructions - those were added with AVX-512, along with some other integer and bit manipulation instructions. This means that any CPU without AVX-512 is stuck using scalar 64-bit multiplies, which on modern microarchitectures only have a throughput of one per clock. IIRC the Skylake-X core and its derivatives have two pipes capable of packed 64-bit multiplies, for a total throughput of 16 per clock. (A minimal sketch of the difference is included after the comments.)

    I do wish AnandTech would make this a little clearer in their articles though; it is not at all obvious that 3DPM2 is more of a mixed FP/integer workload, which is not something I would normally expect from a scientific simulation.

    I also think that the testing methodology on this benchmark is a little odd - each algorithm is run for 20 seconds, with a 10 second pause in between? I would expect simulations to run quite a bit longer than that, and the nature of turbo on CPUs means that steady-state and burst performance might diverge significantly.
  • Dolda2000 - Monday, November 25, 2019 - link

    Thanks a lot, that does explain much.
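
Below is a minimal sketch of the packed versus scalar 64-bit multiply difference discussed in the comments above. It is illustrative only and not the 3DPM2 code itself; the function names and tiny harness are invented for the example. The scalar loop is essentially what an AVX2-only CPU falls back to, while the AVX-512DQ path uses _mm512_mullo_epi64 (VPMULLQ) to produce eight 64-bit products per instruction, assuming compilation with -mavx512dq on hardware that supports it.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <immintrin.h>

/* Scalar path: one 64-bit multiply per element, roughly what a CPU
   without AVX-512 ends up doing for this kind of work. */
static void mul64_scalar(const uint64_t *a, const uint64_t *b,
                         uint64_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] * b[i];
}

#if defined(__AVX512DQ__)
/* AVX-512DQ path: VPMULLQ (_mm512_mullo_epi64) produces eight 64-bit
   products per instruction. n is assumed to be a multiple of 8 to keep
   the sketch short. Requires building with -mavx512dq. */
static void mul64_avx512(const uint64_t *a, const uint64_t *b,
                         uint64_t *out, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m512i va = _mm512_loadu_si512(&a[i]);
        __m512i vb = _mm512_loadu_si512(&b[i]);
        _mm512_storeu_si512(&out[i], _mm512_mullo_epi64(va, vb));
    }
}
#endif

int main(void)
{
    uint64_t a[8], b[8], out[8];
    for (int i = 0; i < 8; i++) { a[i] = i + 1; b[i] = 1000 + i; }

    mul64_scalar(a, b, out, 8);
#if defined(__AVX512DQ__)
    mul64_avx512(a, b, out, 8);  /* same results, one VPMULLQ instead of eight IMULs */
#endif
    printf("out[7] = %llu\n", (unsigned long long)out[7]);
    return 0;
}

In a production build one would typically dispatch between the two paths at runtime based on CPU feature detection, but the compile-time guard keeps this example self-contained.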
