Comparing Percentile Numbers Between the GTX 980 Ti and Fury X

As the two top-end cards from both graphics silicon manufacturers were released this year, there was a big buzz about which is best for what. Ryan's extensive review of the Fury X put the two cards head to head in a variety of contests. For DirectX 12, the situation is a little less clear cut for a number of reasons – games have yet to mature, drivers are still in development, and both sides competing here are having to rethink their strategies when it comes to game engine integration and the benefits it might provide. Up until this point, DX12 contests have either been synthetic or dogged by controversy. So for Fable Legends, we did some extra percentile-based analysis for NVIDIA vs. AMD at the top end.

For this set of benchmarks we ran our 1080p Ultra test without any adaptive frame rate technology enabled and recorded the results.

For these tests, the usual rules apply – GTX 980 Ti and Fury X, in our Core i7/i5/i3 configurations at all three resolution/setting combinations (3840x2160 Ultra, 1920x1080 Ultra and 1280x720 Low). Data is given in the form of frame rate profile graphs, similar to those on the previous page.
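
As a rough illustration of how a frame rate profile is built – this is a sketch of the general method from logged frame times, not the exact test harness used here – the per-frame render times are converted to instantaneous frame rates and sorted from fastest to slowest:

```python
import numpy as np

def frame_rate_profile(frame_times_ms):
    """Turn per-frame render times (ms) into a percentile frame rate profile.

    At percentile p on the resulting curve, the fastest p% of frames in
    the run hit this frame rate or better.
    """
    fps = np.sort(1000.0 / np.asarray(frame_times_ms))[::-1]  # fastest first
    percentiles = np.linspace(0, 100, num=fps.size)
    return percentiles, fps

# Hypothetical frame times (ms), standing in for a real benchmark log
pct, fps = frame_rate_profile([14.2, 15.0, 15.8, 16.4, 17.9, 33.1])
```

Reading a value off such a curve is how figures like a "99th percentile frame rate" are derived.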

As always, Fable Legends is still in early access preview mode and these results may not be indicative of the final version, but at this point they still provide an interesting comparison.


At 3840x2160, the frame rate profile from each card looks the same no matter which processor is used (one could argue that the Fury X is mildly ahead on the i3 at low frame rates), but the GTX 980 Ti holds a consistent gap across most of the profile range.


At 1920x1080, the Core i7 configuration gives the GTX 980 Ti a healthy boost in high frame rate scenarios, though this seems to be accompanied by an extended drop-off region at the high frame rate end. It is also interesting that in the Core i3 configuration, the Fury X results jump up and match the GTX 980 Ti across almost the entire range. This again points to some of the data we saw on the previous page – at 1080p, having fewer cores somehow gave the results a boost in the lighting scenarios.


At 1280x720, as we saw on the initial GPU comparison page for average frame rates, the Fury X has the upper hand in all system configurations. Two other points are noticeable here – moving from the Core i5 to the Core i7, especially on the GTX 980 Ti, makes the easy frames go quicker and the harder frames take longer, while moving to the Core i3 drops performance across the board like a stone, indicating a CPU-limited environment. That said, with these cards, 1280x720 at low settings is unlikely to be used anyway.

Comments

  • Gotpaidmuch - Thursday, September 24, 2015 - link

    Sad day for all of us when even the small wins that AMD gets are omitted from the benchmarks.
  • Oxford Guy - Thursday, September 24, 2015 - link

    "we are waiting for a better time to test the Ashes of the Singularity benchmark"
  • ZipSpeed - Thursday, September 24, 2015 - link

    The 7970 sure has legs. Turn the quality down one notch from ultra to high, and the card is still viable for 1080p gaming.
  • looncraz - Thursday, September 24, 2015 - link

    As a long-time multi-CPU/threaded software developer, I can say AMD's results show one thing quite clearly: they have some unwanted lock contention in their current driver (see the sketch after this comment).

    As soon as that is resolved, we should see a decent improvement for AMD.

    On another note, am I the only one that noticed how much the 290X jumped compared to the rest of the lineup?!

    Does that put the 390X on par with the 980 for Direct X 12? That would be an interesting development.
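
For readers unfamiliar with the term, below is a toy Python sketch of the kind of lock contention described above – an illustration of the general problem, not AMD's actual driver code. Four worker threads submit work either through one shared lock or through per-queue locks:

```python
import threading
import time

GLOBAL_LOCK = threading.Lock()                      # one driver-wide lock
QUEUE_LOCKS = [threading.Lock() for _ in range(4)]  # finer-grained alternative

def submit_work(queue_id, use_global_lock, n=50):
    # Each 'submission' stands in for building and queueing a command list.
    lock = GLOBAL_LOCK if use_global_lock else QUEUE_LOCKS[queue_id]
    for _ in range(n):
        with lock:
            time.sleep(0.001)  # simulated work done while holding the lock

def run(use_global_lock):
    threads = [threading.Thread(target=submit_work, args=(q, use_global_lock))
               for q in range(4)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

print(f"one shared lock: {run(True):.2f}s")   # all threads serialize on the lock
print(f"per-queue locks: {run(False):.2f}s")  # contention gone, roughly 4x faster
```

If a driver funnels multi-threaded command submission through a single lock like this, removing that contention is exactly the kind of fix that shows up as a broad performance improvement.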
  • mr_tawan - Thursday, September 24, 2015 - link

    Well, even if UE4 uses DX12, it would probably be just a straight port from DX11 (rather than from the XBONE or another console). The approach it uses may not favour AMD as much as Nvidia, who knows?

    Also, I think the Nvidia people would have been involved with the engine development more than AMD (due to the size of its developer relations team, I guess). Oxide Games also mentioned that they got this kind of involvement as well (even though the game is an AMD title).
  • tipoo - Thursday, September 24, 2015 - link

    Nice article. Looks like i3s are only going to get *more* feasible for gaming rigs under DX12. There's still the odd title that suffers without a quad core, but most console ports at least should do fine.
  • ThomasS31 - Thursday, September 24, 2015 - link

    Still not a game performance test... nor a CPU one.

    There is no AI... and I guess a lot more is missing that would make a difference on the CPU side as well.

    Though yeah... kinda funny that an i3 is "faster" than an i5/i7 here. :)
  • Traciatim - Thursday, September 24, 2015 - link

    This is what I was thinking too. I thought DX12 might shake up the old rule of thumb of 'i5 for gaming and i7 for working', but it seems this still holds true. In some cases it might even make more sense budget-wise to go for a high-end i3 and sink as much as possible into your video card rather than go for an i5, depending on your budget and expected configuration.

    More CPU benchmarking and DX12 benchmarks are needed of course, but it still looks like the design of machines isn't going to change all that much.
  • Margalus - Friday, September 25, 2015 - link

    this test shows absolutely nothing about "gaming". It is strictly rendering. When it comes to "gaming", I believe your i3 is going to drop like a rock once it has to start dealing with AI and other "gaming" features. Try playing something like StarCraft or Civilization on your i3. I don't think it's going to cut the mustard in the real world.
  • joex4444 - Thursday, September 24, 2015 - link

    As far as using X79 as the test platform here goes, I'm mildly curious what sort of effect the quad channel RAM had. In particular, most people pair a Core i3 with 2x4GB of cheap DDR3 and won't be getting even half the memory bandwidth your test platform had available (see the quick numbers after this comment).

    Also fun would be to switch to X99 and test the Core i7-5960X, or to drop an E5-2687W into the X79 platform (hey, it *is* supported after all).
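
As a quick back-of-the-envelope on that bandwidth point – standard DDR3 figures, not measurements from the test bed:

```python
def ddr3_peak_gbs(channels, transfer_rate_mts=1600):
    # Theoretical peak: channels x 8 bytes per 64-bit channel x transfer rate
    return channels * 8 * transfer_rate_mts / 1000

print(f"dual channel DDR3-1600 (typical i3 build): {ddr3_peak_gbs(2):.1f} GB/s")  # 25.6
print(f"quad channel DDR3-1600 (X79 test bed):     {ddr3_peak_gbs(4):.1f} GB/s")  # 51.2
```

Dual channel at the same speed is exactly half of quad; a budget build on cheaper DDR3-1333 would land lower still, at about 21.3 GB/s.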
