Second Generation GDDR5X: More Memory Bandwidth

One of the more unusual aspects of the Pascal architecture is the number of different memory technologies NVIDIA can support. At the datacenter level, NVIDIA has a full HBM2 memory controller, which they use for the GP100 GPU. Meanwhile for consumer and workstation cards, NVIDIA equips GP102/104 with a more traditional memory controller that supports both GDDR5 and its more exotic cousin: GDDR5X.

A half-generation upgrade of sorts, GDDR5X was developed by Micron to further improve memory bandwidth over GDDR5. It does so through a combination of a faster memory bus and wider memory operations, reading and writing more data from DRAM per clock. And though it’s not without its own costs, such as designing new memory controllers and boards that can accommodate the tighter signaling requirements of the GDDR5X memory bus, GDDR5X offers a middle step in performance between the relatively cheap but slow GDDR5 and the relatively fast but expensive HBM2.
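To put a number on those “wider memory operations”: GDDR5X doubles GDDR5’s 8n prefetch to 16n, so each access moves twice as much data across a chip’s 32-bit interface, which is what lets the external bus run faster without a correspondingly faster DRAM core. A minimal sketch of that arithmetic, with the prefetch figures taken from Micron’s GDDR5X literature rather than this article:

```python
# Per-access granularity = prefetch depth x per-chip interface width.
# GDDR5X doubles the prefetch (8n -> 16n), so each access moves twice
# the data over the same 32-bit chip interface.

CHIP_INTERFACE_BITS = 32  # per-chip interface width for both GDDR5 and GDDR5X

for name, prefetch in (("GDDR5", 8), ("GDDR5X", 16)):
    access_bytes = prefetch * CHIP_INTERFACE_BITS // 8
    print(f"{name}: {prefetch}n prefetch -> {access_bytes} bytes per access")
# GDDR5: 8n prefetch -> 32 bytes per access
# GDDR5X: 16n prefetch -> 64 bytes per access
```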

With rival AMD opting to focus on HBM2 and GDDR5 for Vega and Polaris respectively, NVIDIA has ended up being the only PC GPU vendor to adopt GDDR5X. The payoff for NVIDIA, besides the immediate benefits of GDDR5X, is that they can ship with memory configurations that AMD cannot. Meanwhile for Micron, NVIDIA is a very reliable and consistent customer for their GDDR5X chips.

When Micron initially announced GDDR5X, they laid out a plan to start at 10Gbps and ramp to 12Gbps (and beyond). Now just under a year after the launch of the GTX 1080 and the first generation of GDDR5X memory, Micron is back with their second generation of memory, which of course is being used to feed the GTX 1080 Ti. And NVIDIA, for their part, is very eager to talk about what this means for them.

With Micron’s second generation GDDR5X, NVIDIA is now able to equip their cards with 11Gbps memory. This is a 10% year-over-year improvement, and a not-insignificant change given that memory speeds historically increase at only a fraction of the rate of GPU throughput. Coupled with GP102’s wider memory bus – which sees 11 of its 12 32-bit channels enabled for the GTX 1080 Ti – NVIDIA is able to offer just over 480GB/sec of memory bandwidth with this card, a 50% improvement over the GTX 1080.
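That figure is easy to sanity check. A quick sketch in Python, using the numbers quoted above (the 484GB/sec result is the “just over 480GB/sec” in question):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
# Figures below are the ones quoted in this article.

def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

gtx_1080    = memory_bandwidth_gbs(256, 10.0)  # 256-bit bus, 10Gbps GDDR5X
gtx_1080_ti = memory_bandwidth_gbs(352, 11.0)  # 352-bit bus (11 of 12 channels), 11Gbps

print(f"GTX 1080:    {gtx_1080:.0f} GB/s")              # 320 GB/s
print(f"GTX 1080 Ti: {gtx_1080_ti:.0f} GB/s")           # 484 GB/s
print(f"Improvement: {gtx_1080_ti / gtx_1080 - 1:.0%}") # 51%
```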

For NVIDIA, this is something they’ve been eagerly awaiting. Pascal’s memory controller was designed for higher GDDR5X memory speeds from the start, but the memory itself needed to catch up. As one NVIDIA engineer put it to me: “We [NVIDIA] have it easy, we only have to design the memory controller. It’s Micron that has it hard, they have to actually make memory that can run at those speeds!”

Micron, for their part, has continued to work on GDDR5X since its launch. And even though last year’s launch was, from what I’ve been hearing, more challenging than anticipated, both Micron and NVIDIA seem to be very happy with what Micron has been able to accomplish with their second generation GDDR5X memory.

As demonstrated in eye diagrams provided by NVIDIA, Micron’s second generation memory coupled with NVIDIA’s memory controller produces a very clean eye at 11Gbps, whereas the first generation memory (which was admittedly never spec’d for 11Gbps) would produce a very noisy eye. Consequently, NVIDIA and their partners can finally push past 10Gbps for the GTX 1080 Ti and the forthcoming factory overclocked GTX 1080 and GTX 1060 cards.

Under the hood, the big developments here were largely on Micron’s side. The company continued to optimize their metal layers for GDDR5X, and, combined with improved test coverage, was able to make a lot of progress over the first generation of memory. This in turn is coupled with improvements in equalization and noise reduction, resulting in the clean eye seen in those diagrams.

Longer-term, GDDR6 is on the horizon. But before then, Micron is still working on further improvements to GDDR5X. Micron’s original goal was to hit 12Gbps with this memory technology, and while they’re not quite there yet, I wouldn’t be too surprised to be having this conversation once again for 12Gbps memory within the next year.

Finally, speaking of memory, it’s worth noting that NVIDIA also dedicated a portion of their GTX 1080 Ti presentation to discussing memory capacity. To be honest, I get the impression that NVIDIA feels like they need to rationalize equipping the GTX 1080 Ti with 11GB of memory, beyond the obvious reasons that it is cheaper than equipping the card with 12GB and that it better differentiates the GTX 1080 Ti from the Titan X Pascal.

In any case, NVIDIA believes that based on historical trends, 11GB will be sufficient for 5K gaming in 2018 and possibly beyond. Traditionally NVIDIA has not been especially generous on memory – cards like the 3GB GTX 780 Ti and 2GB GTX 770 felt the pinch a bit early – so going with a less-than-full memory bus doesn’t improve things there. On the other hand, with the prevalence of multiplatform games these days, one of the biggest drivers of memory consumption has been the consoles’ 8GB of RAM each; and with 11GB, the GTX 1080 Ti is well ahead of the consoles in this regard.
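As an aside, capacity and bandwidth move in lockstep here: each 32-bit channel is fed by one 8Gb (1GB) GDDR5X chip, so disabling a channel sheds both at once. A quick sketch assuming that one-chip-per-channel layout (the full 12-channel row at 11Gbps is hypothetical, shown only for comparison):

```python
# Capacity and bandwidth both scale with the number of enabled 32-bit channels,
# assuming one 8Gb (1GB) GDDR5X chip per channel.

CHANNEL_WIDTH_BITS = 32
CHIP_CAPACITY_GB = 1     # one 8Gb GDDR5X chip per channel
DATA_RATE_GBPS = 11.0

for channels in (12, 11):  # hypothetical full bus vs. the GTX 1080 Ti's 11 channels
    bus_bits = channels * CHANNEL_WIDTH_BITS
    bandwidth = bus_bits * DATA_RATE_GBPS / 8
    print(f"{channels} channels: {bus_bits}-bit bus, "
          f"{channels * CHIP_CAPACITY_GB}GB, {bandwidth:.0f} GB/s")
# 12 channels: 384-bit bus, 12GB, 528 GB/s
# 11 channels: 352-bit bus, 11GB, 484 GB/s
```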

Comments

  • eddman - Friday, March 10, 2017 - link

    So we moved from "nvidia pays devs to deliberately not optimize for AMD" to "nvidia works with devs to optimize the games for their own hardware, which might spoil them and result in them not optimizing for AMD properly".

How is that bribery, or illegal? If they did not prevent the devs from optimizing for AMD, then nothing illegal happened. It was the devs' own doing.
  • ddriver - Friday, March 10, 2017 - link

Nope, there is an implicit, unspoken condition to receiving support from nvidia. For lazy slobs, that's welcome, and most devs are lazy slobs. Their line of reasoning is quite simple:

"Working to optimize for amd is hard, I am a lazy and possibly lousy developer, so if they don't do that for me like nvidia does, I won't do that either, besides that would anger nvidia, since they only assist me in order to make their hardware look better. If I do my job and optimize for amd and their hardware ends up beating nvidia's, I risk losing nvidia's support, since why would they put money into helping me if they don't get the upper hand in performance. Besides, most people use nvidia anyway, so why even bother. I'd rather be taken to watch strippers again than optimize my software."

Manipulation, bribery and extortion. nvidia uses its position to create a situation in which game developers have a lot to profit from NOT optimizing for amd, and a lot to lose if they do. Much like intel did with its exclusive discounts. OEMs weren't exactly forced to take those discounts in exchange for not selling amd, they did what they knew would please intel to get rewarded for it. Literally the same thing nvidia does. Game developers know nvidia will be pleased to see their hardware getting an unfair performance advantage, and they know amd doesn't have the money to pamper them, so they do what is necessary to please nvidia and ensure they keep getting support.
  • akdj - Monday, March 13, 2017 - link

    Where to start?
    Best not to start, as you are completely, 100% insane and I've spent two and a half 'reads' of your replies... trying to grasp WTH you're talking about and I'm lost
    Totally, completely lost in your conspiracy theories about two major GPU silicon builders while being apparently and completely clueless about ANY of it!
    Lol - Wow, I'm truly astounded that you were able to make up that much BS ...
  • cocochanel - Friday, March 10, 2017 - link

You forgot to mention one thing: Nvidia tweaking the drivers to force users into hardware upgrades. Say there is a bunch of games coming up this Christmas. If you have a card that's 3-4 years old, they release a new driver which performs poorly on your card (on those games) and another driver which performs way better on the newest cards. Then, if you start crying, they say: it's an old card, pal, why don't you buy a new one!
    With DX11 they could do that a lot. With DX12 and Vulkan it's a lot harder. Most if not all optimizations have to be done by the game programmers. Very little is left to the driver.
  • eddman - Friday, March 10, 2017 - link

That's how the ENTIRE industry is. Do you really expect developers to optimize for old architectures? Everyone does it, nvidia, AMD, intel, etc.

    It is not deliberate. Companies are not going to spend time and money on old hardware with little market share. That's how it's been forever.

    Before you say that's not the case with radeons, it's because their GCN architecture hasn't changed dramatically since its first iteration. As a result, any optimization done for the latest GCN, affects the older ones to some extent too.
  • cocochanel - Friday, March 10, 2017 - link

There is good news for the future. As DX12 and Vulkan become mainstream APIs, game developers will have to roll up their sleeves and sweat it hard. Architecturally, these APIs are totally different from the ground up, and both trace their origin to Mantle. And Mantle was the biggest advance in graphics APIs in a generation. The good days for lazy game developers are coming to an end, since these new APIs put just about everything back into their hands whether they like it or not. Tweaking the driver won't make much of a difference. Read the APIs' documentation.
  • cmdrdredd - Monday, March 13, 2017 - link

Yes, hopefully this will be the future, where games are the responsibility of the developer. Just like on consoles. I know people hate consoles sometimes, but the closed platform shows which developers have their stuff together and which are lazy bums, because Sony and Microsoft don't optimize anything for the games.
  • Nfarce - Friday, March 10, 2017 - link

Always amusing watching the tin foil hat Nvidia conspiracy nuts talk. Here's my example: working on Project Cars as an "early investor." Slightly Mad Studios gave both Nvidia and AMD 12 copies each of the beta release to work with, the same copy I bought. Nvidia was in constant communication with SMS developers while AMD was all but never heard from. After about six months, Nvidia had a demo of the racing game ready for a promotion of their hardware. Since AMD didn't take Project Cars seriously with SMS, Nvidia was able to get the game tweaked better for Nvidia. And SMS hat-tipped Nvidia by having billboards in the game showing Nvidia logos.

    Of course all the AMD fanboys claimed unfair competition and the usual whining when their GPUs do not perform as well in some games as Nvidia (they amazingly stayed silent when DiRT Rally, another development I was involved with, ran better on AMD GPUs and had AMD billboards).
  • ddriver - Friday, March 10, 2017 - link

    So was there anything preventing the actual developers from optimizing the game? They didn't have nvidia and amd hardware, so they sent betas to the companies to profile things and see how it runs?

How silly one must be to expect that nvidia - a company that rakes in billions every year - and amd - a company that is in the red most of the time and has lost billions - will have the same capacity to do game developers' jobs for them?

It is the game developer's job to optimize. Alas, as it seems, nvidia has bred a new breed of developers - those who do their job half-assedly and then wait on nvidia to optimize, conveniently creating an unfair advantage for their hardware.
  • ddriver - Friday, March 10, 2017 - link

Also, talking about fanboys - I am not that. Yes, I am running dozens of amd gpus, and I don't see myself buying any nvidia product any time soon, but that's only because they offer superior value for what I need them for.

    I don't give amd extra credit for offering a better value. I know this is not what they want. It is what they are being forced into.

I am in a way grateful to nvidia for sandbagging amd, because this way I can get much better value products. If things were square between the two, and all games were equally optimized, then both companies would offer products with approximately identical value.

Which I would hate, because I'd lose the current 2-3x better value for the money I get with amd. I benefit and profit from nvidia being crooks, and I am happy that I can do that.

So nvidia, keep doing what you are doing. I am not really objecting, I am simply stating the facts. Of course, nvidia fanboys would have a problem understanding that, and a problem with anyone tarnishing the good name of that helpful, awesome, stripper-funding company.
