H11DSi Conclusions

The ability to provide a suitable recommendation on a product is stifled when it turns out to be the only one available for a given market. I was surprised when I started researching dual socket EPYC solutions at just how few motherboards were available. The fact that the number was one was even more dumbfounding – is there really next to no market for custom-built, off-the-shelf dual socket EPYC systems? Over the years we have seen plenty of dual socket Xeon offerings for each generation, but perhaps the options are limited because the market is still getting used to EPYC. Along similar lines, only four single socket options exist as well.

So despite the Supermicro H11DSi and its derivatives being the only option available, if you need a dual socket EPYC board then there is no other choice. But if the decision is between this and two single socket systems, it still raises the question of whether it is a good motherboard to invest in. On paper, ~$680 for a dual socket EPYC board seems like a reasonable price.

First, let us start with some of the failings. Personally I felt that the lack of fan control support was a big let-down, especially for a motherboard that might find its way into a desktop-class chassis for workstation use. The only way to access fan control is through the IPMI web interface, and the options there are extremely basic.
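For those who would rather not click through the web interface, the same BMC can typically be driven from the command line. Below is a minimal sketch that shells out to ipmitool from Python; the BMC address, username, and password are placeholders, and the raw 0x30 0x45 fan-mode command with its mode values is a Supermicro OEM command widely reported for their BMCs rather than anything documented for the H11DSi specifically.

    # Minimal sketch: read the fan sensors and switch the BMC fan profile over
    # IPMI by shelling out to ipmitool. Host, username, and password below are
    # placeholders; the raw 0x30 0x45 command and its mode values are the
    # Supermicro OEM fan-mode commands commonly reported for their BMCs, and
    # are an assumption for the H11DSi specifically.
    import subprocess

    BMC = ["-I", "lanplus", "-H", "192.168.1.100",
           "-U", "ADMIN", "-P", "board-specific-password"]

    def ipmi(*args: str) -> str:
        """Run one ipmitool command against the BMC and return its stdout."""
        result = subprocess.run(["ipmitool", *BMC, *args],
                                capture_output=True, text=True, check=True)
        return result.stdout

    # List the fan sensors the BMC exposes (standard IPMI, not OEM-specific).
    print(ipmi("sdr", "type", "Fan"))

    # Read the current fan mode (commonly reported values: 0x00 standard,
    # 0x01 full speed, 0x02 optimal, 0x04 heavy IO).
    print(ipmi("raw", "0x30", "0x45", "0x00"))

    # Set the fan mode to "optimal"; change the final byte for other profiles.
    ipmi("raw", "0x30", "0x45", "0x01", "0x02")

This only works if IPMI over LAN is enabled on the BMC, and it does not add per-fan curves; it simply switches between preset profiles, much like the web interface does.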

The other issue with the board is simply one of layout – due to the physical limitations of the board there isn’t much that can be done here, but ultimately the second CPU is underutilized. Of the 64 PCIe lanes it offers (the chip has 128, but 64 are used for the CPU-to-CPU links), only eight go to an external PCIe device. It is almost as if this motherboard needs a separate add-in device to make use of more of what the hardware can offer. This brings me back to the conclusion I made on the first page – this motherboard is aimed more at CPU compute use cases than at anything that needs to fully use the IO of the hardware.


On the positives, the power cable layout is good for most systems, as I have experienced some bad power connector placement on similar boards for older platforms. Having eight fan headers, despite what I said above, is a good thing as well. We successfully booted from both SATA and NVMe, and a board-specific IPMI password is something we expect all server products to adopt going forward. We had no issues with the latest high-end processors, the 7Fx2 series, nor with some of the high core count parts running very high density memory. The price isn’t too bad either, especially if this is going to be a compute-focused system with high-value CPUs inside.

It is difficult not to sound downbeat when you have to write 'it gets recommended by default, because it's the only option'. Users looking at EPYC systems may find that a single socket deployment works in their favor - the ASRock Rack EPYCD8-2T that we reviewed previously costs about the same as this 2P Supermicro board, but makes much better use of the IO per socket. The Supermicro H11DSi has density in its favor, and will cater to that crowd, but there are a number of decent single socket offerings that should be explored as well.


36 Comments


  • 1_rick - Wednesday, May 13, 2020 - link

    Yeah, "numerous" was the correct word here.
  • peevee - Thursday, May 14, 2020 - link

    Nope. 1 is not numerous.
  • heavysoil - Friday, May 15, 2020 - link

    He's talking about the options for single socket, and lists three - numerous compared to the single available option for dual socket.
  • Guspaz - Wednesday, May 13, 2020 - link

$600 enterprise board supporting up to 256 threads, and it's still just using one-gigabit NICs?
  • Sivar - Wednesday, May 13, 2020 - link

    "Don't worry, widespread 10-gigabit is just around the corner." --2006
  • Holliday75 - Wednesday, May 13, 2020 - link

1gb is pennies. 10gb costs a bit more. If you plan on using a different solution you have the option to get the cheaper board and install it. Save the 1gb for management duties or not at all.
  • DigitalFreak - Wednesday, May 13, 2020 - link

    Why waste the money on onboard 10 gig NICs when most buyers are going to throw in their own NIC anyway?
  • AdditionalPylons - Thursday, May 14, 2020 - link

Exactly. This way the user is free to choose from 10/25/100 GbE or even Infiniband or something more exotic if they wish. I would personally go for a 25 GbE card (about $100 used).
  • heavysoil - Friday, May 15, 2020 - link

There's one model with gigabit NICs, and one with 10 gigabit NICs. That covers what most people would want, and PCIe NICs for SFP+, and/or 25/40/100 gigabit covers most everyone else.

    I can see this with the 1 gigabit NICs for monitoring/management and a 25 gigabit PCIe card for the VMs to use, for example.
  • eek2121 - Wednesday, May 13, 2020 - link

    I wish AMD would restructure their lineup a bit next gen.

    - Their HEDT offerings are decently priced, but the boards are not.
    - All of the HEDT boards I’ve seen are gimmicky, not supporting features like ECC, and are focused on gaming and the like.
    - HEDT does not support a dual socket config, so you would naturally want to step up to EPYC. However, EPYC is honestly complete overkill, and the boards are typically cut down server variants.
    - For those that don’t need HEDT, but need more IO, they don’t have an offering at all.

    I would love to see future iterations of Zen support an optional quad channel mode or higher, ECC standardized across the board (though if people realized how little ECC matters in modern systems...), and more PCIE lanes for everything.
