Ashes of the Singularity Revisited: A Beta Look at DirectX 12 & Asynchronous Shading
by Daniel Williams & Ryan Smith on February 24, 2016 1:00 PM EST

The Performance Impact of Asynchronous Shading
Finally, let’s take a look at Ashes’ latest addition to its stable of headlining DX12 features: asynchronous shading/compute. While earlier betas of the game implemented a very limited form of async shading, this latest beta contains a newer, more complex implementation of the technology, inspired in part by Oxide’s experiences with multi-GPU. As a result, async shading will potentially have a greater impact on performance than in earlier betas.
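To make the mechanism concrete, here is a minimal sketch (in C++, and emphatically not Oxide's actual code) of the D3D12 plumbing that asynchronous shading builds on: alongside the usual "direct" queue for graphics, an application creates a separate compute queue, and work submitted to the two queues may execute concurrently on capable hardware. The device pointer and function name below are hypothetical.

```cpp
// Minimal sketch (hypothetical names, not Oxide's code): creating the
// two D3D12 command queues that asynchronous shading builds on.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Assumes 'device' is an already-initialized ID3D12Device.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    // Direct queue: accepts graphics, compute, and copy commands.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    // Compute queue: compute/copy work only. Submissions here are what
    // may run concurrently with graphics, if the hardware supports it.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}
```

Note that D3D12 only requires that submissions to the two queues be independent; whether they actually execute concurrently is left to the driver and hardware, which is exactly where the vendors diverge below.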
Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.
Since async shading is turned on by default in Ashes, what we’re essentially doing here is measuring the penalty for turning it off. Not unlike the DirectX 12 vs. DirectX 11 situation – and possibly even contributing to it – what we find depends heavily on the GPU vendor.
All NVIDIA cards suffer a minor performance regression with async shading turned on. At a maximum of 4%, the loss is small enough that it’s hardly worth going out of the way to disable async shading, but at the same time it means that async shading is not providing NVIDIA with any benefit. With RTG cards, on the other hand, async shading is almost always beneficial, with the gain increasing along with the overall performance of the card. In the case of the Fury X this means a 10% gain at 1440p and, though not plotted here, a similar gain at 4K.
These findings go hand-in-hand with one of the basic performance goals of async shading: improving GPU utilization. At 4096 stream processors the Fury X has the most ALUs of any card on these charts, and given its performance in other games, the numbers we see here lend credence to the theory that RTG isn’t always able to reach full utilization of those ALUs, particularly in Ashes, in which case async shading could be a significant benefit going forward.
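If that utilization theory holds, the gain comes from overlap: the frame submits graphics work on the direct queue and independent compute work on the compute queue, with a fence ensuring that any pass consuming the compute results waits for them. The sketch below, continuing the hypothetical example above (names again our own, not the game's actual submission code), illustrates the pattern.

```cpp
// Hypothetical frame submission (continuing the sketch above) showing
// the overlap that async shading exploits.
void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12GraphicsCommandList* gfxList,
                 ID3D12GraphicsCommandList* computeList,
                 ID3D12Fence* fence, UINT64& fenceValue)
{
    ID3D12CommandList* gfx[] = { gfxList };
    ID3D12CommandList* cmp[] = { computeList };

    // Kick off both queues in the same frame. On hardware that can
    // execute them concurrently, the compute work fills shader ALUs the
    // graphics work leaves idle; otherwise the driver serializes them.
    graphicsQueue->ExecuteCommandLists(1, gfx);
    computeQueue->ExecuteCommandLists(1, cmp);

    // GPU-side sync: a later graphics pass that reads the compute
    // results must wait for the compute queue to signal the fence.
    const UINT64 done = ++fenceValue;
    computeQueue->Signal(fence, done);
    graphicsQueue->Wait(fence, done);
}
```

The fence itself is not free; synchronizing across queues adds scheduling work that the concurrency gain has to pay back.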
As for the NVIDIA cards, that’s a harder read. Is it that NVIDIA already has good ALU utilization? Or is it that their architectures can’t do enough with asynchronous execution to offset the scheduling penalty for using it? Either way, when it comes to Ashes NVIDIA isn’t gaining anything from async shading at this time.
Meanwhile pushing our fastest GPUs to their limit at Extreme quality only widens the gap. At 4K the Fury X picks up nearly 20% from async shading – though a much smaller 6% at 1440p – while the GTX 980 Ti continues to lose a couple of percent from enabling it. This outcome is somewhat surprising since at 4K we’d already expect the Fury X to be rather taxed, but clearly there’s quite a bit of shader headroom left unused.
153 Comments
itchypoot - Wednesday, February 24, 2016
Continuing the trend of NVIDIA's very bad DX12 performance.

Sttm - Wednesday, February 24, 2016
Wouldn't you need multiple data points to have a trend? And as this is really the only DX12 game, you do not have that, do you?

No, what we have here is one game where one side has an advantage, and a fanboy for that side shouting how it means everything. As if we haven't seen that 1000 times before.
itchypoot - Wednesday, February 24, 2016
Nothing of the sort, but you resort to insults because you have no substance. Likely you fit that description and see everyone else as being the same.

There are other DX12 metrics available, and NVIDIA continues to do poorly in them. Make yourself aware of them and return with data rather than insults.
Nvidia+DX12 = unfortunate state of affairs
willis936 - Wednesday, February 24, 2016
"Make yourself aware of them so I don't have to make my own arguments"flashbacck - Wednesday, February 24, 2016 - link
Lol. This is pretty fantastic.

close - Thursday, February 25, 2016
Given that we only have (almost) one DX12 game available, I wouldn't worry too much about the performance of either of the two players. By the time enough games are available to actually care about DX12, I assume both will be more than ready to deliver.

HalloweenJack - Thursday, February 25, 2016
So by the summer then - oh wait, Tomb Raider IS DX12, on console - but NV threw enough money at the dev to make it DX11 on the PC...

close - Thursday, February 25, 2016
Complaining (or worrying) about DX12 performance at this point is pointless. The whole ecosystem is very much in beta, starting with the only version of Windows that supports DX12, Windows 10. The OS, the drivers, the games - they are all in a phase where they are subject to pretty big changes. Even the hardware will start supporting different subsets of DX12 in the future. And the title sums it up pretty well: "a beta look".

But some people just need a reason to complain, to lament, to try on some sarcasm, etc. Only time will tell which platform will be "the best" and for how long once all the development is done. But what I can tell you right now is that both players will be "good enough".
P.S. Regardless of which side you're on, being a fanboy only works when you have the very top end product. So unless you have a FuryX or a 980Ti/Titan X pointing fingers at the performance of the competition is like driving a Fiesta and thinking it's a sort of Mustang.
silverblue - Thursday, February 25, 2016
What about a Fiesta ST? (yes, I'm trolling, albeit mildly)

MattKa - Thursday, February 25, 2016
What a load of shit. Nvidia threw money at them to make it DX11? It's not DX12 on Xbox, you uninformed baboon. In fact, Crystal Dynamics is going to be patching DX12 support into the game.
You joker.