ASRock X99 WS-E/10G Conclusion

One could invoke the saying 'the mind wants what the body can't have'. For a number of months and launches, I have wondered why there was a lack of 10GBase-T on consumer motherboards. The simple answer is that the X540 chips, or the Broadcom variants, are not only expensive but also power hungry enough to require their own cooling, and they need PCIe 2.0 x8 as a recommended minimum. It still baffles me why, despite these issues, it took so long to arrive on a product.

Our testing, however, shows the reality of the situation. In a single-user point-to-point transfer, we achieved just over 2.0 Gbps, only 20% of the rated speed. This was with a 1 GB transfer, with larger sizes increasing the speed up to a point. In order to get beyond 2.0 Gbps, we needed to instigate multiple access streams to simulate more than one transfer request. This pushed results into the 6-8 Gbps range with 4-10 streams, and above 8 Gbps with 10 or more.
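To make the multi-stream idea concrete, the sketch below (illustrative only; our actual testing used dedicated network tooling, and all names and sizes here are assumptions) opens several parallel TCP connections over loopback in Python and confirms the aggregate bytes delivered. On real hardware, a tool such as iperf does the same thing across the 10G link.

```python
import socket
import threading

def _drain(conn, counter, lock):
    """Read one connection to exhaustion, adding the byte count to a shared total."""
    total = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    conn.close()
    with lock:
        counter[0] += total

def parallel_transfer(n_streams=4, mb_per_stream=16):
    """Push mb_per_stream MB down each of n_streams parallel loopback TCP
    connections and return the total bytes the receiver saw."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))       # ephemeral port on loopback
    listener.listen(n_streams)
    port = listener.getsockname()[1]
    counter, lock = [0], threading.Lock()

    def serve():
        workers = []
        for _ in range(n_streams):
            conn, _ = listener.accept()
            t = threading.Thread(target=_drain, args=(conn, counter, lock))
            t.start()
            workers.append(t)
        for t in workers:
            t.join()

    server = threading.Thread(target=serve)
    server.start()

    payload = b"\x00" * (1024 * 1024)     # 1 MB chunk per send
    def send():
        c = socket.create_connection(("127.0.0.1", port))
        for _ in range(mb_per_stream):
            c.sendall(payload)
        c.close()

    senders = [threading.Thread(target=send) for _ in range(n_streams)]
    for t in senders:
        t.start()
    for t in senders:
        t.join()
    server.join()
    listener.close()
    return counter[0]

if __name__ == "__main__":
    print(parallel_transfer())            # 4 streams x 16 MB = 67108864 bytes
```

Over loopback this only demonstrates the mechanics; pointing the client at a second machine's address turns it into a crude multi-stream throughput probe.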

This puts a limit on its usefulness for a single user. It means the 10GBase-T network interface benefits individuals who can generate multiple access streams through software (bulk information transfer such as MPI or rendering data), or an SOHO/SMB environment where many users might be situated on VMs hosted on the machine. The motherboard is aimed at the workstation market, with support for Xeons and RDIMMs as well as 1U height clearance for servers. I can imagine a server or workstation environment using this with several PCIe co-processors attached, each one assigned to a VM and accessed via the 10G ports.

Due to the price of the controller, ASRock put the 10GBase-T on its highest-end motherboard model under the premise that only extreme users will need it. As a result the board is equipped with two PLX 8747 chips to allow x16/x16/x16/x16 operation in a four-way GPU arrangement, or x8/x8/x8/x8/x8/x8/x16 when single-slot cards are in play. This opens up the market to PCIe coprocessors, RAID card arrangements and FPGA workstations. Elsewhere on the board are twelve SATA ports in total, eight USB 3.0 ports, two Intel I210 gigabit ports alongside the two Intel X540-BT2 10GBit ports, TPM, SATA DOM, an M.2 x4 slot and an enhanced Realtek ALC1150 audio codec solution.

Benchmark results were pretty much in the ballpark for X99 at stock settings, with multi-core turbo putting the CPU results nearer the top. One disappointing note was the DPC latency, which was reasonable only when the X540 10GBit ports were disabled, suggesting that the combination of 10G drivers and BIOS is not yet optimized for this sort of scenario. On the plus side, POST times were not affected by the X540 controller.

In the end, despite not yet knowing the price of the ASRock X99 WS-E/10G, we can say that it will be expensive, possibly in the $700-900 range, due to all the high-end connectivity in play. For that reason alone, this board will only sell to those who need 10G as well as multi-GPU bandwidth. With all that said, I'm still glad ASRock has shown that 10G is possible in the consumer space.

45 Comments

  • AngelosC - Wednesday, January 7, 2015 - link

    They could have tested it on Linux KVM with SR-IOV or just run iperf on Linux between the 2 interfaces.

    They ruined the test.
  • eanazag - Monday, December 15, 2014 - link

    Okay, so the use case for a board like this is network-attached storage using iSCSI or SMB3. That network storage has to be able to perform above 1GbE bandwidth for a single stream. 1 GbE = ~1024 Mbps = ~128 MBps, not counting overhead. Any single SSD these days can outperform a 1GbE connection.

    If you're considering this board, there is an article by Johan on AnandTech, a couple of years old, about 10GbE performance. It covers why it is worth it. I did the leg work and found them.

    http://www.anandtech.com/show/4014/10g-more-than-a...
    http://www.anandtech.com/show/2956/10gbit-ethernet...
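The arithmetic in the comment above rounds 1 GbE up to ~1024 Mbps; using the strict line rate of 1000 Mbit per gigabit, the conversion can be sketched as follows (the function name and the overhead figure are illustrative assumptions, not measured values):

```python
def link_mbytes_per_sec(gigabits: float, efficiency: float = 1.0) -> float:
    """Convert a link rate in Gbit/s to MB/s (1 Gbit/s = 1000 Mbit/s = 125 MB/s),
    optionally scaled by a protocol-efficiency factor."""
    return gigabits * 1000.0 / 8.0 * efficiency

print(link_mbytes_per_sec(1))                    # 125.0 MB/s raw for 1GbE
print(link_mbytes_per_sec(10))                   # 1250.0 MB/s raw for 10GbE
print(link_mbytes_per_sec(10, efficiency=0.94))  # 1175.0 MB/s assuming ~6% overhead
```

Either way the commenter's point stands: a single modern SSD can saturate a 1GbE link.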
  • extide - Monday, December 15, 2014 - link

    At the end of the day, I still think I'd rather have the X99 Extreme 11.
  • tuxRoller - Monday, December 15, 2014 - link

    How is the DPC measurement made? Average (which?), worst case, or just once?
  • Ian Cutress - Tuesday, November 1, 2016 - link

    Peak (worst value) during our testing period, which is usually a minute at 'idle'
  • TAC-2 - Tuesday, December 16, 2014 - link

    Either there's something wrong with your test of the NICs or there is a problem with this board. I've been using 10GBase-T for years now; even with default settings I can push 500-1000 MB/s using Intel NICs.
  • AngelosC - Wednesday, January 7, 2015 - link

    I reckon they were not testing this board's most important feature properly.

    The reviewer makes it sound like they don't know how to test…
  • jamescox - Tuesday, December 16, 2014 - link

    This seems more like a marketing thing; who will actually buy this? Given the current technology, it seems much better to buy a discrete card if you actually need 10G.

    The feature I would like to see come down to the consumer market is ECC memory. I have had memory start to develop errors after installation. I always run exhaustive memory tests when building a system (memtest86 or another hardware-specific test). I did not have any stability issues; I only noticed that something was wrong when I found that recently written files were corrupted. Almost everything passes through system memory at some point. Why is it okay for this not to be ECC protected? Given how far system memory is from the CPU (with L3 cache, and soon L4 with stacked memory), the speed is actually less important. Everything should be ECC protected.

    There may be some argument that GPU memory doesn't need to be ECC, since, if it is just being used for display, errors will only result in display artifacts. I am not sure this is actually the case anymore, though, with what GPUs are being used for. Can a single bit error in GPU memory cause a system crash? I may have to start running GPU memory tests also.
  • petar_b - Thursday, December 18, 2014 - link

    ASRock solely targets users who need a 10G network. If the network card were a discrete option, the price would be lower and they would target a wider audience. I like the two PLXes, as I can attach all kinds of network, SAS and GPU cards. PLX and ASRock quality is the reason I use their mobos.

    Regarding ECC memory for GPUs, I don't agree there. If the GPU is used to do math with OpenCL, then avoiding memory errors is very important.
  • akula2 - Thursday, December 18, 2014 - link

    Avoiding memory errors is beyond extremely important in my case when I churn tons of Science and Engineering things out of those Nvidia Titan Black, Quadro and Tesla cards. AMD did an amazing job with FirePro W9100 cards too.
