High Dynamic Range: Setting the Stage For The Next Generation

The final element of RTG’s visual technologies presentation focused on high dynamic range (HDR). In the PC gaming space, HDR rendering has been present in some form or another for almost 10 years. However, it’s only recently that the larger consumer electronics industry has begun to focus on HDR, in large part due to recent technical and manufacturing scale achievements.

Though HDR is most traditionally defined with respect to contrast ratio and the range of brightness within an image – and how the human eye can see a much wider range of brightness than current displays can reproduce – RTG’s focus on HDR spans several technologies. Bringing HDR to the PC requires not only a display that can cover a wider range of brightness than today’s panels, which top out at around 300 nits, but also changes to how color information is stored and transmitted to the display, and ultimately to the colorspace used. As a result, RTG’s HDR effort is an umbrella effort covering the multiple display-related technologies that need to come together for HDR to work on the PC.

The first element of this – and the element least in RTG’s control – is the displays themselves. Front-to-back HDR requires displays capable not only of a high contrast ratio, but also of some form of local lighting control, so that one part of the display can be exceptionally bright while another is exceptionally dark. The two technologies used to accomplish this are LCDs with local dimming (as opposed to a single backlight) and OLEDs, which are self-emissive; until recently both carried a significant price premium. The price of these displays is finally coming down, and there is hope that displays capable of hitting the necessary brightness, contrast, and dimming levels for solid HDR reproduction will become available within the next year.

As far as RTG’s own technology is concerned, even once HDR displays are on the market, RTG needs to make changes to support them. The traditional sRGB color space is not suitable for true HDR – it simply isn’t large enough to correctly represent colors at the extreme ends of the brightness curve – and as a result RTG is laying the groundwork for improved support for larger color spaces. The company already supports Adobe RGB for professional graphics work, but the long-term goal is to support the BT.2020 color space, which is what the consumer electronics industry has settled upon for HDR content. BT.2020 is what Ultra HD Blu-rays will be mastered in, and in time it is likely that other content will follow.
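
To put the size difference in concrete terms, here is a minimal sketch (my own illustration, not anything from RTG's presentation) that derives the matrix converting linear sRGB/BT.709 values into linear BT.2020 values from the two standards' published primaries. A fully saturated sRGB red only reaches partway across the BT.2020 gamut, which is exactly why the larger space can describe colors sRGB cannot.

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Build an RGB -> CIE XYZ matrix from xy chromaticities of the primaries and white point."""
    def xy_to_xyz(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([xy_to_xyz(*p) for p in primaries])   # unscaled primary columns
    scale = np.linalg.solve(P, xy_to_xyz(*white))              # scale primaries so they sum to white
    return P * scale

# Published xy chromaticities; both color spaces share the D65 white point
SRGB_PRIMARIES   = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
BT2020_PRIMARIES = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
D65 = (0.3127, 0.3290)

srgb_to_bt2020 = np.linalg.inv(rgb_to_xyz_matrix(BT2020_PRIMARIES, D65)) @ \
                 rgb_to_xyz_matrix(SRGB_PRIMARIES, D65)

# A fully saturated sRGB red sits well inside BT.2020 rather than at its edge:
print(srgb_to_bt2020 @ np.array([1.0, 0.0, 0.0]))  # roughly [0.627, 0.069, 0.016]
```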

Going hand-in-hand with the BT.2020 color space is how it’s represented. While it’s technically possible to display the color space using today’s 8 bit per color (24bpp) encoding schemes, the larger color space would expose and exacerbate the banding that results from having only 256 shades of any given primary color to work with. As a result BT.2020 also calls for increasing the bit depth of images from 8bpc to a minimum of 10bpc (30bpp), which increases the number of shades of each primary color to 1024. Only by increasing the color space and, at the same time, the precision within that space can the display rendering chain accurately describe an HDR image and ultimately feed it to an HDR-capable display.
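
As a quick illustration of the banding problem (a toy example, not from the presentation), quantizing a smooth gradient across a 4K-wide scanline at 8bpc versus 10bpc shows how many more steps the higher bit depth provides, and therefore how much narrower each potential band becomes:

```python
import numpy as np

def quantize(signal, bits):
    """Quantize a [0, 1] signal to integer codes of the given bit depth."""
    levels = 2 ** bits                       # 256 codes at 8bpc, 1024 at 10bpc
    return np.round(signal * (levels - 1)).astype(int)

# A smooth ramp across a 3840-pixel-wide scanline
ramp = np.linspace(0.0, 1.0, 3840)

for bits in (8, 10):
    codes = quantize(ramp, bits)
    print(f"{bits}bpc: {len(np.unique(codes))} distinct steps across the ramp")
# 8bpc -> 256 steps (each band ~15 pixels wide); 10bpc -> 1024 steps (~4 pixels wide)
```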

The good news here for RTG (and the PC industry as a whole) is that 10 bit per color rendering is already done on the PC, albeit traditionally limited to professional-grade applications and video cards. BT.2020 and the overall goals of the consumer electronics industry mean that 10 bit per color rendering and BT.2020’s specific curve will need to become consumer features, and this is where RTG’s HDR presentation lays out their capabilities and goals.
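
For context, the consumer HDR formats being lined up at the time pair BT.2020 primaries with the SMPTE ST 2084 “PQ” transfer curve rather than a conventional gamma, and the 10-bit code values are spread along that curve. A short sketch of the PQ encode (the constants come straight from the spec; the usage around them is my own illustration):

```python
# Sketch of the SMPTE ST 2084 (PQ) encode used by consumer HDR; constants are from the spec.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Map absolute luminance (0..10000 nits) to a PQ signal value in [0, 1]."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# A 10-bit code value is then simply round(1023 * pq_encode(nits)).
# SDR-level 100 nits lands near code 520 of 1023, leaving roughly half the codes for highlights.
for nits in (0.1, 100, 1000, 10000):
    print(nits, round(1023 * pq_encode(nits)))
```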

The Radeon 300 series is already capable of 10bpc rendering, so even these older cards, if paired with a suitable monitor, will be capable of driving HDR content over HDMI 1.4b and DisplayPort 1.2. The higher bit depth does require more bandwidth, and as a result it’s not possible to combine HDR, 4K, and 60Hz on any 300 series card due to the limitations of DisplayPort 1.2 (though lower resolutions with higher refresh rates are possible). It will take the 2016 Radeon GPUs and their DisplayPort 1.3 ports to support HDR at 4K@60Hz.
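
A back-of-the-envelope sketch (my own numbers, ignoring audio and other protocol overhead, and assuming uncompressed 4:4:4 RGB) shows why the jump to 10bpc is what pushes 4K60 past DisplayPort 1.2:

```python
# Rough link bandwidth check; no blanking or protocol overhead is included.
GBIT = 1e9

def stream_gbps(width, height, refresh_hz, bits_per_pixel):
    """Raw pixel payload of an uncompressed video stream."""
    return width * height * refresh_hz * bits_per_pixel / GBIT

DP12_PAYLOAD = 21.6 * 8 / 10   # 4 lanes x 5.4 Gbps (HBR2), minus 8b/10b coding -> 17.28 Gbps
DP13_PAYLOAD = 32.4 * 8 / 10   # 4 lanes x 8.1 Gbps (HBR3)                      -> 25.92 Gbps

for bpc in (8, 10):
    gbps = stream_gbps(3840, 2160, 60, bpc * 3)
    print(f"4K60 @ {bpc}bpc: {gbps:.1f} Gbps raw "
          f"(DP 1.2 payload {DP12_PAYLOAD:.2f} Gbps, DP 1.3 payload {DP13_PAYLOAD:.2f} Gbps)")
# Add blanking (the standard 4K60 timing uses a 594 MHz pixel clock) and the 10bpc stream
# needs ~17.8 Gbps, tipping it past DisplayPort 1.2 while fitting easily within DisplayPort 1.3.
```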

And indeed it’s likely on the 2016 GPUs that HDR will really take off. Although RTG can support all of the basic technical aspects of HDR on the Radeon 300 series, there’s one thing none of these cards will ever be able to do, and that’s directly support the HDCP 2.2 standard, which is being required for all 4K/HDR content. As a result only the 2016 GPUs will be able to play back HDR movies, while earlier GPUs will be limited to HDR gaming and photos.

Meanwhile RTG is also working on the software side of matters in conjunction with Microsoft. At this time it’s possible for RTG to render to HDR, but only in an exclusive fullscreen context that bypasses the OS’s color management. Windows itself isn’t capable of HDR rendering, and this is something that Microsoft and its partners are coming together to solve. It will ultimately be a solved issue, but it may take some time. Not unlike high-DPI rendering, edge cases such as properly handling mixed HDR/SDR use are important considerations that must be accounted for. And for that matter, the OS needs a means of reliably telling (or being told) when it has HDR content.
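
To make the mixed HDR/SDR problem concrete: when ordinary SDR windows and HDR content share one desktop, something has to decide where SDR “white” sits on the HDR brightness scale. The sketch below is purely illustrative (the 200 nit reference white is an assumed value, not anything Microsoft or RTG has specified):

```python
def srgb_to_linear(c):
    """Undo the sRGB gamma for a channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def sdr_pixel_to_nits(channel, sdr_white_nits=200.0):
    """Place an SDR pixel on the HDR luminance scale by pinning SDR white to a chosen level."""
    return srgb_to_linear(channel) * sdr_white_nits

# Full-white SDR content composites at 200 nits instead of the panel's 1000+ nit peak,
# so HDR highlights still stand out next to ordinary desktop windows.
print(sdr_pixel_to_nits(1.0), sdr_pixel_to_nits(0.5))
```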

Finally, at the other end of the spectrum are software developers. While the movie/TV industries have already laid the groundwork for HDR production, software and game developers will be in a period of catching up, as most current engines implicitly assume that they’ll be rendering for an SDR display. At a minimum this means reducing or removing the step in the rendering process where a scene is tonemapped for an SDR display, but there will also be cases where rendering algorithms need to be changed entirely to make the best use of the larger color space and greater dynamic range. RTG for their part seems eager to work with developers through their dev relations program to give them the tools they need (such as HDR tonemapping) to do just that.
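
As a generic example of that rendering change (not RTG’s tooling), a typical SDR pipeline squeezes unbounded scene radiance into [0, 1] with an operator such as extended Reinhard, while an HDR output path can preserve far more of the range and simply map it onto the display’s luminance. The paper_white and max_nits values below are assumptions for illustration:

```python
def reinhard_tonemap(radiance, white_point=4.0):
    """Classic SDR tonemap: compress unbounded scene radiance into [0, 1]."""
    mapped = radiance * (1 + radiance / (white_point ** 2)) / (1 + radiance)
    return min(1.0, mapped)                  # values at or above the white point clip to 1

def hdr_output(radiance, exposure=1.0, max_nits=1000.0, paper_white=200.0):
    """HDR path: keep the wide range, just scale to the display's luminance and clamp at its peak."""
    return min(radiance * exposure * paper_white, max_nits)

for r in (0.05, 0.5, 2.0, 8.0):
    print(f"scene {r:4.2f} -> SDR {reinhard_tonemap(r):.3f}, HDR {hdr_output(r):7.1f} nits")
```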

Wrapping things up, RTG expects that we’ll start to see HDR-capable displays in the mass market in 2016. At this point there is some doubt over whether this will include PC displays right away, in which case there may be a transition period of “EDR” displays that offer 10bpc and better contrast ratios than traditional LCDs, but can’t hit the 1000+ nit brightness that HDR really calls for. Regardless of the display situation, however, AMD expects to roll out their formal support for HDR in 2016.

Comments
  • BurntMyBacon - Thursday, December 10, 2015 - link

    @Samus: "GCN scales well, but not for performance. Fury is their future."

    Fury is GCN. Their issue isn't GCN as GCN is actually a relatively loose specification that allows for plenty of architectural leeway in its implementation. Also note that GCN 1.0, GCN 1.1, and GCN 1.2 are significantly different from each other and should not be considered a single architecture as you seem to take it.

    ATi's current issue is the fact that they are putting out a third generation of products on the same manufacturing node. My guess is that many of the architectural improvements they were working on for the 20nm chips can't effectively be brought to the 28nm node. You see a bunch of rebadges because they decided they would rather wait for the next node than spend cash that they probably didn't have on new top to bottom architecture updates to a node that they can't wait to get off of and probably won't recoup the expense for. They opted to update the high end where the expenses could be better covered and they needed a test vehicle for HBM anyways.

    On the other hand, nVidia, with deeper pockets and greater marketshare decided that it was worth the cost. Though, even they took their sweet time in bringing the maxwell 2.0 chips down to the lower end.
  • slickr - Friday, December 11, 2015 - link

    Nvidia's products are based on pretty much slight improvements over their 600 series graphics architecture. They haven't had any significant architectural improvements since basically their 500 series. This is because both companies have been stuck on 28nm for the pat 5 years!

    Maxwell is pretty much a small update in the same technology that Nvidia has already been using before since the 600 series.
  • Budburnicus - Wednesday, November 16, 2016 - link

    That is TOTALLY INCORRECT! Maxwell is a MASSIVE departure from Kepler! Not only does it achieve FAR higher clock speeds, but it does more with less!

    A GTX 780 Ti is effectively SLOWER than a GTX 970, even at 1080p where the extra memory makes no difference, and where the 780 Ti has 2880 CUDA cores, the 970 has just 1664!

    There are FAR too many differences to list, and that is WHY Kepler has not been seeing ANY performance gains with newer drivers! Because the programming for Kepler is totally different from Maxwell or Pascal!

    Also, now that Polaris and Pascal are released: LMFAO! The RX 480 cannot even GET CLOSE to the 1503 MHZ I have my 980 Ti running on air! And if you DO get it to 1400 it throws insane amounts of heat!

    GCN is largely THE SAME ARCHITECTURE IT HAS ALWAYS BEEN! It has seen incremental updates such as memory compression, better branch prediction, and stuff like the Primitive Discard Accelerator - but otherwise is TOTALLY unchanged on a functional level!

    Kind of like how Pascal is an incremental update to Maxwell, adding further memory compression, Simultaneous Multi Projection, better branch prediction and so on. Simultaneous Multi Projection adds an extra 40% to 60% performance for VR and surround monitor setups, when Maxwell - particularly the GTX 980 and 980 Ti are already FAR better at VR than even the Fury X! Don't take my word for it, go check the Steam Benchmark results on LTT forums! https://linustechtips.com/main/topic/558807-post-y...

    See UNLIKE Kepler to Maxwell, Pascal is BASICALLY just Maxwell on Speed, a higher clocked Maxwell chip! And it sucks FAR less power, creates FAR less heat and provides FAR more performance, as the RX 480 is basically tied with a GTX 970 running 1400 core! And FAR behind a 980 at the same or higher!

    Meanwhile the GTX 1060 beats it with ease, while the GTX 1070 (which at even 2100 MHZ is just a LITTLE less powerful than the 980 Ti at 1500 MHZ) 1080, and Pascal Titan SHIT ALL OVER THE FURY X!

    Hell the GTX 980 regular at 1500 MHZ kicks the ASS of the Fury X in almost every game at almost every resolution!

    Oh and Maxwell as well as Pascal are both HDR capable.
  • Furzeydown - Tuesday, December 8, 2015 - link

    Both companies have been rather limited by the same manufacturing node for the past four years as well though. It limits things to tweaks, efficiency improvements, and minor features. As far as performance goes, both companies are neck and neck with monster ~600mm² dies.
  • ImSpartacus - Tuesday, December 8, 2015 - link

    But Nvidia's monster die is generally considered superior to amd's monster die despite using older memory tech. Furthermore, amd's monster die only maintains efficiency because it's being kept very chilly with a special water cooler.

    It's not neck and neck.
  • Asomething - Wednesday, December 9, 2015 - link

    That is down to transistor density; AMD are putting more into the same space, which drives up the minimum requirements for the cooler.
  • Dirk_Funk - Wednesday, December 9, 2015 - link

    Neck and neck as in there's hardly a difference in how many frames are rendered per second. It's not like either one has any big advantages over the other, and they are made almost exclusively for gaming so if fps is the same then yes it is neck and neck as far as most people are concerned.
  • OrphanageExplosion - Thursday, December 10, 2015 - link

    Not at 1080p and 1440p they aren't...
  • RussianSensation - Wednesday, December 23, 2015 - link

    The reference cards are very close.

    1080p - 980Ti leads by 6.5%
    1440p - 980Ti leads by just 1.1%
    4k - Fury X leads by 4.5%

    Neither card is fast enough for 4K, while both are a waste of money for 1080p without using VSR/DSR/Super-sampling. That leaves 1440p resolution where they are practically tied.
    http://www.techpowerup.com/reviews/ASUS/R9_380X_St...

    The only reason 980Ti is better is due to its overclocking headroom. As far as reference performance goes, they are practically neck and neck as users above noted.
  • zodiacsoulmate - Tuesday, December 8, 2015 - link

    what do u mean nvidia is moving to a gcn-like architecture?
