For much of the last month we have been discussing bits and pieces of AMD’s GPU plans for 2016. As part of the Radeon Technologies Group’s formation last year, the group’s leader and chief architect, Raja Koduri, has set about making his mark on AMD’s graphics technology. Along with consolidating all graphics matters under the RTG, Raja and the rest of the RTG have also been changing how they interact with the public, with developers, and with their customers.

One of those changes – and the impetus for these recent articles – has been that the RTG wants to be more forthcoming about future product developments. Traditionally AMD held their cards close to their chest, keeping new architectures secret until the first products based on them launched (and even then sometimes not discussing matters in detail). With the RTG this is changing: similar to competitors Intel and NVIDIA, the RTG wants to prepare developers and partners for new architectures sooner. As a result the RTG has been giving us a limited, high-level overview of their GPU plans for 2016.

Back in December we started things off by talking about RTG’s plans for display technologies – DisplayPort, HDMI, FreeSync, and HDR – and how the company would be laying the necessary groundwork in future architectures to support their goals for higher resolution displays, more ubiquitous FreeSync-over-HDMI, and the wider color spaces and higher contrast of HDR. The second of RTG’s presentations that we covered focused on their software development plans, including Linux driver improvements and the consolidation of all of RTG’s various GPU libraries and SDKs under the GPUOpen banner, which will see these resources released on GitHub as open source projects.

Last but not least among RTG’s presentations is, without a doubt, the most eagerly anticipated subject: the hardware. As RTG (and AMD before them) has mentioned over the past couple of years, a new architecture is being developed for future RTG GPUs. Dubbed Polaris (the North Star), RTG’s new architecture will be at the heart of their 2016 GPUs, and is designed for what can now be called the current generation of FinFET processes. Polaris incorporates a number of new technologies, including a 4th generation Graphics Core Next design at the heart of the GPU, and of course the new display technologies that RTG revealed last month. Finally, the first Polaris GPUs should be available in mid-2016, roughly 6 months from now.

First Polaris GPU Is Up and Running

But before we dive into Polaris and RTG’s goals for the new architecture, let’s talk about the first Polaris GPUs. With the first products expected to launch in the middle of this year, it should come as no surprise that RTG already has their first GPUs back from the fab and up & running. To that end – and I am sure this is what many of you are eager to hear about – as part of their presentation RTG showed off the first Polaris GPU in action, however briefly.

As a quick preface, while RTG demonstrated a Polaris based card in action, we the press were not allowed to see the physical card or take pictures of the demonstration. Similarly, while Raja Koduri held up an unsoldered version of the GPU used in the demonstration, again we were not allowed to take any pictures. So while we can talk about what we saw, at this time that’s all we can do. I don’t think it’s unfair to say that RTG has had issues with leaks in the past, and while they wanted to confirm to the press that the GPU and the demonstration were real, they don’t want the public (or the competition) seeing the GPU before they’re ready to show it off. That said, I do know that RTG is planning to recap Polaris at CES 2016 as part of AMD’s overall presence, so we may yet see the GPU at CES after the embargo on this information has expired.

In any case, the GPU RTG showed off was a small one. And while Raja’s hand is hardly a scientifically accurate basis for size comparisons, if I had to guess I would wager it’s a bit smaller than RTG’s 28nm Cape Verde GPU or NVIDIA’s GK107 GPU, which is to say that it’s likely smaller than 120mm². This is clearly meant to be RTG’s low-end GPU, and given the evolving state of FinFET yields, I wouldn’t be surprised if this was the very first GPU design they got back from GlobalFoundries, as its size makes it comparable to current high-end FinFET-based SoCs. In that case it could very well be the first Polaris GPU we see in mid-2016, though that’s just supposition on my part.

For their brief demonstration, RTG set up a pair of otherwise identical Core i7 systems running Star Wars Battlefront. The first system contained an early engineering sample Polaris card, while the other system had a GeForce GTX 950 installed (specific model unknown). Both systems were running at 1080p Medium settings – about right for a GTX 950 on the X-Wing map RTG used – and generally hitting the 60fps V-sync limit.

The purpose of this demonstration was threefold for RTG: to showcase that a Polaris GPU was up and running, that the small Polaris GPU in question could offer performance comparable to the GTX 950, and finally to show off the energy efficiency advantage of the small Polaris GPU over current 28nm GPUs. To that end RTG also plugged each system into a power meter to measure total system power at the wall. In the live press demonstration we saw the Polaris system average 88.1W while the GTX 950 system averaged 150W; meanwhile in RTG’s own official lab tests (the figures used in the slide above) they measured 86W and 140W respectively.

Keeping in mind that this is wall power – PSU efficiency and the power consumption of other components are in play as well – the message RTG is trying to send is clear: Polaris should be a very power efficient GPU family thanks to the combination of architecture and FinFET manufacturing. That RTG is measuring a 54W difference at the wall is definitely a bit surprising, as the GTX 950 averages under 100W to begin with, so even after accounting for PSU efficiency this implies that the power consumption of the Polaris video card is about half that of the GTX 950. But as this is clearly a carefully arranged demo with a framerate cap and a chip still in early development, I wouldn’t read too much into it at this time.
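
As a quick illustration of that back-of-the-envelope math, here’s a minimal sketch in Python, assuming a typical ~85% PSU efficiency and a ~90W average gaming draw for the GTX 950 – both assumptions on my part, as RTG only provided the wall readings:

    # A rough sanity check of the paragraph above. The wall readings are
    # RTG's lab figures; the PSU efficiency and the GTX 950's typical board
    # power are illustrative assumptions, not numbers RTG provided.

    PSU_EFFICIENCY = 0.85        # assumed typical efficiency at these loads
    GTX950_BOARD_POWER = 90.0    # assumed average gaming draw, in watts

    polaris_wall = 86.0          # RTG lab measurement, total system power
    gtx950_wall = 140.0          # RTG lab measurement, total system power

    # The two systems are otherwise identical, so the wall-power delta,
    # scaled by PSU efficiency, approximates the difference in card power.
    dc_delta = (gtx950_wall - polaris_wall) * PSU_EFFICIENCY

    polaris_card = GTX950_BOARD_POWER - dc_delta
    print(f"Card-level delta:          ~{dc_delta:.0f} W")
    print(f"Implied Polaris card draw: ~{polaris_card:.0f} W "
          f"({polaris_card / GTX950_BOARD_POWER:.0%} of the GTX 950)")

Under those assumptions the Polaris card works out to roughly 44W, or about half the GTX 950’s draw – but again, with a V-sync cap and early silicon, treat this as illustrative rather than definitive.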

Comments

  • smartthanyou - Tuesday, January 5, 2016

    So AMD announces their next GPU architecture that will eventually be delayed and then will disappoint when it is finally released. Neat!
  • EdInk - Tuesday, January 5, 2016

    I'm surprised that RTG are comparing their upcoming tech with a GTX 950... how bizarre... why not compare it to another AMD product, as we all know how power hungry their cards have been.

    The game is also being tested at 1080p, capped @ 60FPS on medium settings... C'MON!!!
  • mapsthegreat - Wednesday, January 6, 2016

    Good job, AMD! It will be a real gamechanger! Congrats in advance! :D
  • vladx - Wednesday, January 6, 2016

    Since the press called the previous versions GCN 1.0, 1.1, and 1.2, calling the new version GCN 2.0 sounds a lot more natural than GCN 4.
  • zodiacfml - Wednesday, January 6, 2016

    It is not surprising that they are employing TSMC for the latest node, as it is their primary producer for their GPUs. GloFo will continue to produce CPUs/APUs, which could mean they might be producing them soon.
  • melgross - Thursday, January 7, 2016

    I haven't been able to read all the comments, so I don't know if this has been brought up already. We read here that FinFET will be used for generations to come. But the industry has acknowledged that FinFET doesn't look to work at 7nm. I know that's a ways off yet, but it's a problem nevertheless, as the two technologies touted as its replacement at that node haven't been proven to work either.

    So what we're talking about right now is optimization of 14nm, which hasn't yet been done, and then, sometime in 2017 for Intel at least, the beginnings of 10nm. After that, we just don't know yet.
  • vladx - Thursday, January 7, 2016

    It won't be market viable below 5nm anyway, so both Intel and IBM are probably putting most of their R&D into nanotube research.
  • mdriftmeyer - Friday, January 8, 2016

    Moving to a carbon-based solution like nanotubes and other exotic materials will be the future. Silicon is a dead end.
  • none12345 - Friday, January 8, 2016

    'Cept it's really more like the difference between a 300-watt full system and a 250-watt full system. At 8 hours a day, that 50W difference would be 146 kWh per year. Where I live that's about $20.

    And where I live it's actually less than that, because it's a cold climate where most of the year I would need to run the heater to make up that difference anyway. My system currently draws just about 300 watts when running maxed out (that covers the monitor, network router, etc., everything plugged into my UPS as well), and that's not enough heat for that room in the winter. In the summer it would mean AC, if the house had AC... it does not (not common where I live because you would only really use it for a month or two).

    So yeah, at least for me I don't care at all if one card draws 50 more watts than the other. I'll buy whatever is the best performance/$ at the time.
  • fequalma - Thursday, January 14, 2016

    Take everything in an AMD slide deck with a HUGE grain of salt.

    There is no way AMD will deliver a GPU that will consume 86W @ 1080p. AMD doesn't have that kind of technology.

    No way.
