For much of the last month we have been discussing bits and pieces of AMD's GPU plans for 2016. As part of the Radeon Technologies Group's formation last year, the group's leader and chief architect, Raja Koduri, has set out to make his mark on AMD's graphics technology. Along with consolidating all graphics matters under the RTG, Raja and the rest of the group have also set out to change how they interact with the public, with developers, and with their customers.

One of those changes – and the impetus for these recent articles – is that the RTG wants to be more forthcoming about future product developments. Traditionally AMD held their cards close to their chest on architectures, keeping them secret until the first products based on a new architecture launched (and even then sometimes not discussing matters in detail). With the RTG this is changing: similar to competitors Intel and NVIDIA, the RTG wants to prepare developers and partners for new architectures sooner. As a result the RTG has been giving us a limited, high-level overview of their GPU plans for 2016.

Back in December we started things off talking about RTG's plans for display technologies – DisplayPort, HDMI, FreeSync, and HDR – and how the company would be laying the necessary groundwork in future architectures to support their goals for higher resolution displays, more ubiquitous FreeSync-over-HDMI, and the wider color spaces and higher contrast of HDR. The second of RTG's presentations that we covered focused on their software development plans, including Linux driver improvements and the consolidation of all of RTG's various GPU libraries and SDKs under the GPUOpen banner, which will see these resources released on GitHub as open source projects.

Last but not least among RTG's presentations is without a doubt the most eagerly anticipated subject: the hardware. As RTG (and AMD before them) has commented in the past couple of years, a new architecture has been in development for future RTG GPUs. Dubbed Polaris (after the North Star), RTG's new architecture will be at the heart of their 2016 GPUs, and is designed for what can now be called the current generation of FinFET processes. Polaris incorporates a number of new technologies, including a 4th generation Graphics Core Next design at the heart of the GPU, and of course the new display technologies that RTG revealed last month. Finally, the first Polaris GPUs should be available in mid-2016, roughly 6 months from now.

First Polaris GPU Is Up and Running

But before we dive into Polaris and RTG's goals for the new architecture, let's talk about the first Polaris GPUs. With the first products expected to launch in the middle of this year, it comes as no surprise that RTG has their first GPUs back from the fab and up & running. To that end – and as I am sure many of you are eager to hear – as part of their presentation RTG showed off the first Polaris GPU in action, however briefly.

As a quick preface here, while RTG demonstrated a Polaris based card in action, we the press were not allowed to see the physical card or take pictures of the demonstration. Similarly, while Raja Koduri held up an unsoldered version of the GPU used in the demonstration, again we were not allowed to take any pictures. So while we can talk about what we saw, at this time that's all we can do. I don't think it's unfair to say that RTG has had issues with leaks in the past, and while they wanted to confirm to the press that the GPU and the demonstration were real, they don't want the public (or the competition) seeing the GPU before they're ready to show it off. That said, I do know that RTG is planning to recap Polaris at CES 2016 as part of AMD's overall presence, so we may yet see the GPU at CES after the embargo on this information has expired.

In any case, the GPU RTG showed off was a small GPU. And while Raja's hand is hardly a scientifically accurate basis for size comparisons, if I had to guess I would wager it's a bit smaller than RTG's 28nm Cape Verde GPU or NVIDIA's GK107 GPU, which is to say that it's likely smaller than 120mm². This is clearly meant to be RTG's low-end GPU, and given the evolving state of FinFET yields, I wouldn't be surprised if this was the very first GPU design they got back from GlobalFoundries, as its size makes it comparable to current high-end FinFET-based SoCs. In that case it may well also be the first Polaris GPU we see in mid-2016, though that's just supposition on my part.

For their brief demonstration, RTG set up a pair of otherwise identical Core i7 systems running Star Wars Battlefront. The first system contained an early engineering sample Polaris card, while the other system had a GeForce GTX 950 installed (specific model unknown). Both systems were running at 1080p Medium settings – about right for a GTX 950 on the X-Wing map RTG used – and generally hitting the 60fps V-sync limit.

The purpose of this demonstration for RTG was threefold: to showcase that a Polaris GPU was up and running, that the small Polaris GPU in question could offer performance comparable to a GTX 950, and finally to show off the energy efficiency advantage of the small Polaris GPU over current 28nm GPUs. To that end RTG also plugged each system into a power meter to measure total system power at the wall. In the live press demonstration we saw the Polaris system average 88.1W while the GTX 950 system averaged 150W; meanwhile in RTG's own official lab tests (the numbers used in their presentation slide) they measured 86W and 140W respectively.

Keeping in mind that this is wall power – PSU efficiency and the power consumption of other components are in play as well – the message RTG is trying to send is clear: Polaris should be a very power efficient GPU family thanks to the combination of architecture and FinFET manufacturing. That RTG is measuring a 54W difference at the wall is definitely a bit surprising, as the GTX 950 averages under 100W to begin with; even after accounting for PSU efficiency, this implies that the power consumption of the Polaris video card is about half that of the GTX 950. But as this is clearly a carefully arranged demo with a framerate cap and a chip still in early development, I wouldn't read too much into it at this time.
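For those who want to follow the back-of-the-envelope math, here is a minimal sketch of how the wall figures translate into a card-level estimate. The ~85% PSU efficiency and ~90W GTX 950 board power below are our own illustrative assumptions, not figures supplied by RTG or NVIDIA:

```python
# Back-of-the-envelope estimate of card-level power from RTG's wall
# measurements. PSU efficiency and the GTX 950's average board power
# are assumed values for illustration, not vendor-supplied figures.
PSU_EFFICIENCY = 0.85       # assumed: a typical ~85% efficient PSU at this load
GTX950_BOARD_POWER = 90.0   # assumed: ~90W average gaming draw for a GTX 950

wall_polaris = 86.0         # RTG lab figure: total system power at the wall (W)
wall_gtx950 = 140.0         # RTG lab figure: total system power at the wall (W)

# The 54W wall delta shrinks once PSU conversion losses are backed out.
dc_delta = (wall_gtx950 - wall_polaris) * PSU_EFFICIENCY
polaris_board_power = GTX950_BOARD_POWER - dc_delta

print(f"DC-side difference: {dc_delta:.0f} W")                   # ~46 W
print(f"Est. Polaris board power: {polaris_board_power:.0f} W")  # ~44 W
print(f"Ratio vs. GTX 950: {polaris_board_power / GTX950_BOARD_POWER:.2f}")  # ~0.49
```

Under those assumptions the Polaris card lands in the 40-45W range, which is where the "about half" estimate comes from.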

Comments

  • Cinnabuns - Monday, January 4, 2016

    Power:performance translates into $:performance if you're the one footing the utility bill.
  • nikaldro - Tuesday, January 5, 2016

    This is a common myth. Unless your PC drinks A LOT of Watts, your power bill won't change much.
  • HollyDOL - Tuesday, January 5, 2016

    Depends where you live (cost per MWh), how long you run the PC, and how loaded it is... It ain't that hard to calculate... where I live I pay ~62 Eur per MWh (75 with tax)... so running a 600W power hog vs 300W for 8 hours a day (wife@home, so prolly it's even more) puts you at 108 vs 54 Eur a year (plus tax) on the computer alone. It's not tragic, but also not so little that you can just plain neglect it...
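(For what it's worth, that arithmetic checks out; a quick sketch using the comment's own 62 EUR/MWh tariff and 8 h/day figures:)

```python
# Annual electricity cost: watts -> MWh/year -> EUR/year.
# The tariff and usage figures come from the comment above.
def annual_cost_eur(watts, hours_per_day, eur_per_mwh=62.0):
    mwh_per_year = watts * hours_per_day * 365 / 1_000_000
    return mwh_per_year * eur_per_mwh

print(f"{annual_cost_eur(600, 8):.1f} EUR/year")  # 108.6, pre-tax
print(f"{annual_cost_eur(300, 8):.1f} EUR/year")  # 54.3, pre-tax
```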
  • Peter2k - Monday, January 4, 2016

    Because AMD runs too hot to be cooled easily compared to NVidia
    Well high end cards anyway
    Less heat = less noise and/or more frequency (= more performance)
  • Ramon Zarat - Monday, January 4, 2016

    Thanks to the destruction of the middle class, the erosion of purchasing power, and the rising price of all sources of energy over the last 40 years, the watt-per-frame equation is actually more important than ever. Unless you are part of the privileged 1% and wipe your ass with a Benjamin...
  • Spoelie - Tuesday, January 5, 2016

    Actually, no.

    You may want to watch this if you want to improve your world-view beyond that of a chimp.
    https://t.co/kpnCsLDidb
  • anubis44 - Thursday, January 14, 2016

    I watched your TED Talk, and I don't see how it refutes what Ramon Zarat said.

    Obviously, when he refers to the 'destruction of the middle class', he means the one in the United States. And he's right. There has been a net destruction of good-paying, solid, middle class jobs in the United States (and Canada, where I live, for that matter). He's also right that the optimistic goal espoused after WWII to use nuclear power to produce cheap energy for all has been completely co-opted by a corrupt cabal of wealthy manipulators, and for some mysterious reason, we're STILL burning fossil fuels to produce most of our energy (including transportation). Of course, the expensive energy economy these greedy fools have tried to impose on us is unsustainable, and is unravelling at the seams: now that so many countries are dependent on oil revenues and have allowed their manufacturing to flow to China, there is no solidarity among oil producers, and they're ALL pumping the stuff out of the ground as fast as they can because they're all dependent on the oil revenues to survive. Either most of these countries get out of the oil business and realize they need to have a manufacturing-based economy once again while they can, or we'll descend into a desperate, anarchic world, with countries simply invading each other for oil in order to try to maintain their falling oil incomes by controlling more and more of the production. Disgusting.
  • ASEdouardD - Tuesday, January 5, 2016

    Like Spoelie said, kinda, the world as a whole is living much more materially comfortably than 40 years ago, thanks in large part to the rise of Asian middle classes and a significant improvement in South America. Purchasing power and living standards are also higher in Europe and the US than they were 40 years ago, even though the gains from economic growth have been disproportionately concentrated in the hands of the richest, especially in the US.

    Now, there is clearly a stagnation of income for the American middle class, but things are not really worse than they were 40 years ago.

    We can debate the economic fortunes of the US and Europe in the last 10 years, but the world average is much, much better than it was in 1975.
  • boozed - Monday, January 4, 2016

    If your parents are paying for your electricity and air conditioning, sure, go ahead and ignore power consumption.

    Meanwhile, in the real world...
  • looncraz - Tuesday, January 5, 2016

    Meanwhile, in the real world... people who pay their own bills know that a 40W GPU difference is effectively zero.

    I replaced all of my lighting with LEDs and saved an estimated average of 500W, 24/7. The extra 40W, or even 140W, under load for the few hours of gaming a day any normal person has time for will not make any noticeable impact on the electric bill beyond the noise of a two-degree temperature swing outside.
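(Putting rough numbers to that comparison; the 3 h/day of gaming below is an assumed stand-in for the comment's "few hours":)

```python
# Annual energy for the two scenarios in the comment: a 40W GPU delta
# during gaming vs. a 500W lighting cut running around the clock.
# The 3 h/day gaming figure is an assumption, not from the comment.
def kwh_per_year(watts, hours_per_day):
    return watts * hours_per_day * 365 / 1000

print(kwh_per_year(40, 3))    # 43.8 kWh/year from the GPU delta
print(kwh_per_year(500, 24))  # 4380.0 kWh/year from the lighting change
```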
