AMD's Path to Polaris

With the benefit of hindsight, I think the 28nm generation started out better for AMD than it ended. The first Graphics Core Next card, the Radeon HD 7970, had the advantage of launching more than a quarter before NVIDIA’s competing Kepler cards. And while AMD trailed in power efficiency from the start, for a time they could at least compete for the top spot in the market with products such as the Radeon HD 7970 GHz Edition, before NVIDIA rolled out their largest Kepler GPUs.

However, I think where things really went off the rails for AMD was mid-cycle, in 2014, when NVIDIA unveiled the Maxwell architecture. Kepler was good, but Maxwell was great; NVIDIA further improved their architectural and energy efficiency (at times immensely so), and this put AMD on the back foot for the rest of the generation. AMD had performant parts from the R7 360 at the bottom right up to the Fury X at the top, but they were never in a position to catch Maxwell’s efficiency, a quality that proved to resonate with both reviewers and gamers.

The lessons of the 28nm generation were not lost on AMD. Graphics Core Next was a solid architecture and opened doors for AMD in a number of ways, but the Radeon brand does not exist in a vacuum, and it needs to compete with the more successful NVIDIA. At the same time, AMD is nothing if not scrappy, and they can surprise us when we least expect it. But sometimes the only way to learn is the hard way, and I think the latter half of the 28nm generation was the Radeon Technologies Group learning the hard way.

So what lessons did AMD learn for Polaris? First and foremost, power efficiency matters. It matters quite a lot, in fact. Every vendor – be it AMD, Intel, or NVIDIA – will play up their strongest attributes. But power efficiency caught on with consumers more than any other “feature” of the 28nm generation. Though its importance in the desktop market is forum argument fodder to this day, power efficiency and overall performance are two sides of the same coin: there are practical limits on how much power can be dissipated in different card form factors, so the greater the efficiency, the greater the performance within a given form factor. This is even more important in the notebook space, where GPUs are at the mercy of limited cooling and there is a hard ceiling on heat dissipation.
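To put that relationship in concrete terms, here is a minimal sketch in Python; the efficiency figures and power budgets below are hypothetical, illustrative numbers rather than measurements of any real card, and only serve to show how a fixed power ceiling turns efficiency directly into deliverable performance:

```python
# Illustrative sketch only: all numbers are hypothetical, not measured values.

# Rough power ceilings imposed by different form factors (assumed figures).
FORM_FACTOR_BUDGET_W = {
    "dual 8-pin enthusiast card": 300,
    "single 6-pin mainstream card": 150,   # 75 W slot + 75 W 6-pin connector
    "thin gaming notebook": 75,            # limited by chassis cooling
}

def achievable_performance(perf_per_watt: float, power_budget_w: float) -> float:
    """Deliverable performance is capped by efficiency times the power the form factor can dissipate."""
    return perf_per_watt * power_budget_w

# Two hypothetical GPUs that differ only in efficiency (relative perf units per watt).
for gpu_name, perf_per_watt in [("less efficient GPU", 0.4), ("more efficient GPU", 0.6)]:
    for form_factor, budget_w in FORM_FACTOR_BUDGET_W.items():
        perf = achievable_performance(perf_per_watt, budget_w)
        print(f"{gpu_name}: ~{perf:.0f} relative perf units in a {form_factor} ({budget_w} W)")
```

The same proportional relationship holds at every power ceiling, which is why an efficiency gain pays off all the way from notebooks up to flagship cards.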

As a result, a significant amount of the work that has gone into Polaris has been aimed at improving power efficiency. To be blunt, AMD has to be able to compete better with NVIDIA here, but AMD’s position is more nuanced than simply beating NVIDIA. AMD largely missed the boat on notebooks in the last generation, and they don’t want to repeat that mistake. At the same time, starting now with an energy-efficient architecture means that when they scale up and scale out with bigger and faster chips, they have a solid base to work from, and ultimately a better shot at higher performance.

The other lesson AMD learned for Polaris is that market share matters. This is not an end-user problem – AMD’s market share doesn’t change the performance or value of their cards – but we can’t talk about what led to Polaris without addressing it. AMD’s share of the consumer GPU market is about as low as it has ever been; this translates not only into weaker sales, but it also undermines AMD’s position as a whole. Consumers are more likely to buy what’s safe, and OEMs aren’t much different, never mind the psychological aspects of the bandwagon effect.

Consequently, with Polaris AMD made the decision to start with the mainstream market and work up from there, a significant departure from the traditional top-down GPU rollout. This means developing chips like Polaris 10 and 11 first, targeting mainstream desktops and laptops, and letting the larger enthusiast-class GPUs follow. The potential payoff for AMD is that this is the opposite of what NVIDIA has done, which means AMD gets to go after the high-volume mainstream market first while NVIDIA builds down. Should everything go according to plan, this gives AMD the opportunity to grow their market share and ultimately shore up their business.

As we dive into Polaris, its abilities, and its performance, it’s these two lessons we’ll see crop up time and time again, as they were among the guiding principles in Polaris’s design. AMD has taken the lessons of the 28nm generation to heart and has crafted a plan to move forward with the FinFET generation, charting a different, and hopefully more successful, path.

Though with all this talk of energy efficiency and mainstream GPUs, let’s be clear: this isn’t AMD’s small-die strategy reborn. AMD has already announced their Vega architecture, which will follow up on the work done by Polaris. Though not explicitly stated by AMD, it has been strongly hinted that these are the higher-performance chips that in past generations we’d see AMD launch first, offering performance features such as HBM2. AMD will have to live with the fact that, for the near future, they have no shot at the performance crown – and the halo effect that comes with it – but with any luck, this will put AMD in a better position to strike at the high-end market once Vega’s time does come.

Comments

  • basroil - Thursday, June 30, 2016 - link

    "Or both the mobo and the PSU are supplying the same voltage and the power input is combined into a single bus... y'know... preventing the unlikely scenario you describe from ever possibly happening."

    1) The two do NOT have the same voltage. Ideally they do but that's not how things actually work in practice.
    2) The folks at tomshardware did bus-level analysis of power draw and put their results into their review. Their tests of various cards show that power draw can indeed be shifted toward either the PCIe slot or the power cable, and is not 50-50 like you claim.
    3) Even assuming that your point was valid (which it most certainly is NOT), it wouldn't change the fact that a single card already draws more power from the PCIe slot than allowable by ATX specifications, and that two cards will be far more than the specs allow (double the spec for PCIe3.0)
  • schulmaster - Thursday, June 30, 2016 - link

    Lol. The PSU is the source for all board power AND PCIe aux. The board design and PSU will negotiate how much 12V power is reliably sourced from the 24-pin. A 6-pin PCIe aux is rated for an additional 75W, and that limit could be down to the cable itself, let alone the card interface and/or the PSU. Even high-end OC boards have a supplemental molex connector for multi-GPU configs to supplement available bus power, which is the burden of the 24-pin. It is not outlandish to be concerned if a single RX 480 is overdrawing the entire PCIe bus wattage allotted in the spec, especially when the fallback is a PCIe 6-pin already being overdrawn from as well. Tom's Hardware was literally unwilling to do further multi-GPU testing due to the numbers they were physically seeing, not paranoia.
  • pats1111 - Thursday, June 30, 2016 - link

    @binarydissonance: Don't confuse these fanboys with the facts, they're NVIDIA goons, it's a waste of time because they are TROLLS
  • AbbieHoffman - Wednesday, June 29, 2016 - link

    Actually, most motherboards support CrossFire, and many support only CrossFire, because it is cheaper to implement CrossFire support than SLI.
  • Gigaplex - Thursday, June 30, 2016 - link

    But they don't support the excessive power consumption on the PCIe bus, which is a specification violation.
  • jospoortvliet - Monday, July 4, 2016 - link

    Luckily, every motherboard except quite old, cheap ones can easily handle 100+ watts over the PCIe slot, as any overclocking would need that too.
  • beck2050 - Thursday, June 30, 2016 - link

    I just laugh when I see people talking about Crossfire
  • fanofanand - Thursday, June 30, 2016 - link

    "when even 2x1080 wouldn't hit 75W"

    Your post is so full of FUD it should be deleted.
  • basroil - Thursday, June 30, 2016 - link

    "Your post is so full of FUD it should be deleted. "

    I'm not responsible for your ignorance. Check tomshardware /reviews/nvidia-geforce-gtx-1080-pascal,4572-10.html and you'll see I'm right
  • fanofanand - Thursday, June 30, 2016 - link

    I checked, you are wrong. Stop spreading FUD, you Nvidiot.
