GPU Performance

For 3D graphics and games, the Kirin 970 is the first SoC to make use of ARM’s second generation Bifrost GPU architecture, the Heimdall / G72. The new IP is an evolutionary update over last year’s Mali G71, bringing density and efficiency improvements.

The density increase as well as the process node shrink allowed HiSilicon to increase the GPU core count by 50%, from 8 to 12 cores, while still reducing the absolute silicon area of the GPU block.

There is no mincing words about last year’s G71 performance: the GPU unfortunately came nowhere near the efficiency goals projected by ARM in either the Exynos 8895 or the Kirin 960. The Kirin 960 was especially remarkable in how devices powered by it reached until then unheard-of average power figures at peak performance states, hovering around the 9W mark for the Mate 9. I still remember praising HiSilicon two years ago for implementing a GPU conservative enough that it could properly sustain its maximum performance state within the device’s thermal envelope, staying below 4W. Nevertheless, before continuing the power argument with the Kirin 970’s own power figures, let’s go over its peak performance in the most commonly used industry 3D benchmarks.

3DMark Sling Shot 3.1 Extreme Unlimited - Overall

3DMark Sling Shot 3.1 Extreme Unlimited - Graphics

3DMark Sling Shot 3.1 Extreme Unlimited - Physics

In 3DMark Sling Shot 3.1 Extreme Unlimited we see the G72 in the Kirin 970, oddly enough, not improving at all. I ran the benchmark several times and made sure thermals weren’t the cause, but the phone still wasn’t able to improve on the Kirin 960 save for a small increase in the physics score. I’m not yet sure what the cause is here – as I haven’t rooted the device yet I wasn’t able to monitor the GPU frequency, so I can’t rule out some kind of limitation mechanism.

GFXBench Car Chase ES 3.1 / Metal (Off Screen 1080p)

GFXBench Manhattan ES 3.1 / Metal (Off Screen 1080p)

GFXBench T-Rex HD (Offscreen)

Moving on to Kishonti’s GFXBench, we see the Kirin 970 achieve its theoretical gains of 15-20%. As a reminder, while the GPU core count increased by 50% from 8 to 12 cores, the maximum frequency has been vastly reduced from 1033MHz down to 746MHz, so only a more modest performance upgrade was to be expected.
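As a rough back-of-the-envelope check (a sketch using the figures in the paragraph above, assuming perfect scaling across cores and leaving out the G72’s per-core architectural improvements), the raw scaling from core count and clock alone works out to only a few percent:

```python
# Back-of-the-envelope throughput scaling from core count and clock alone.
# Assumes perfect scaling with cores; the G72's per-core architectural
# improvements are not accounted for here.
g71_cores, g71_mhz = 8, 1033    # Kirin 960, Mali G71MP8
g72_cores, g72_mhz = 12, 746    # Kirin 970, Mali G72MP12

raw_scaling = (g72_cores / g71_cores) * (g72_mhz / g71_mhz)
print(f"Raw throughput scaling: {raw_scaling:.2f}x")  # ~1.08x; the rest comes from architecture
```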

The Kirin 970’s G72MP12 ends up slightly below the Exynos 8895’s G71MP20 and the Snapdragon 835’s Adreno 540 in more compute-bound workloads such as Manhattan 3.1 or Car Chase. In T-Rex the GPU has a slight lead over the Exynos 8895, but only when the device is cool, as it quickly starts throttling down from its maximum frequencies at slightly elevated temperatures.

GPU Power Efficiency

 

GFXBench Manhattan 3.1 Offscreen Power Efficiency (System Active Power)

Device (SoC)                       Mfc. Process   FPS     Avg. Power (W)   Perf/W Efficiency
Galaxy S8 (Snapdragon 835)         10LPE          38.90   3.79             10.26 fps/W
LeEco Le Pro3 (Snapdragon 821)     14LPP          33.04   4.18             7.90 fps/W
Galaxy S7 (Snapdragon 820)         14LPP          30.98   3.98             7.78 fps/W
Huawei Mate 10 (Kirin 970)         10FF           37.66   6.33             5.94 fps/W
Galaxy S8 (Exynos 8895)            10LPE          42.49   7.35             5.78 fps/W
Meizu PRO 5 (Exynos 7420)          14LPE          14.45   3.47             4.16 fps/W
Nexus 6P (Snapdragon 810 v2.1)     20SoC          21.94   5.44             4.03 fps/W
Huawei Mate 8 (Kirin 950)          16FF+          10.37   2.75             3.77 fps/W
Huawei Mate 9 (Kirin 960)          16FFC          32.49   8.63             3.77 fps/W
Huawei P9 (Kirin 955)              16FF+          10.59   2.98             3.55 fps/W

In terms of average platform active power consumption, the Mate 10 shows a significant improvement over last year’s Mate 9: in Manhattan we go down from 8.63W to 6.33W. In terms of efficiency at similar peak performance, the Kirin 970 manages to only slightly outpace the Exynos 8895 and its Mali G71. The architectural improvements the G72 promises are counteracted by the fact that the Exynos uses more cores at lower frequencies (and thus more efficient voltages), with both ending up at a similar performance and efficiency point. The same effect applies between the Kirin 960 and 970, but this time in the newer chip’s favour: here the addition of more cores at a lower frequency amplifies the process and architectural efficiency gains over the G71, resulting in an absolute efficiency gain of 57% at peak performance, which comes near to Huawei’s stated claim of a 50% efficiency gain. It should be noted that the true efficiency gain at equal performance points is likely nearer the 100% mark, meaning that at the Kirin 960’s peak performance levels, the Kirin 970’s G72 implementation should be nearly twice as efficient.
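As a minimal sketch of where that figure comes from, the peak-performance efficiency gain falls straight out of the Manhattan 3.1 table above:

```python
# Peak-performance efficiency gain in Manhattan 3.1, values taken from the table above.
def perf_per_watt(fps: float, watts: float) -> float:
    return fps / watts

kirin_960 = perf_per_watt(32.49, 8.63)   # Mate 9  -> ~3.77 fps/W
kirin_970 = perf_per_watt(37.66, 6.33)   # Mate 10 -> ~5.94 fps/W

gain = kirin_970 / kirin_960 - 1
print(f"Efficiency gain at peak performance: {gain:.0%}")  # ~57-58% depending on rounding
```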

Whilst this all might sound optimistic in terms of performance and efficiency gains, it’s all rather meaningless, as at 6.3W the Mate 10’s and Kirin 970’s average power draw is still far above sustainable thermal envelopes.

GFXBench T-Rex Offscreen Power Efficiency (System Active Power)

Device (SoC)                       Mfc. Process   FPS      Avg. Power (W)   Perf/W Efficiency
Galaxy S8 (Snapdragon 835)         10LPE          108.20   3.45             31.31 fps/W
LeEco Le Pro3 (Snapdragon 821)     14LPP          94.97    3.91             24.26 fps/W
Galaxy S7 (Snapdragon 820)         14LPP          90.59    4.18             21.67 fps/W
Galaxy S8 (Exynos 8895)            10LPE          121.00   5.86             20.65 fps/W
Galaxy S7 (Exynos 8890)            14LPP          87.00    4.70             18.51 fps/W
Huawei Mate 10 (Kirin 970)         10FF           127.25   7.93             16.04 fps/W
Meizu PRO 5 (Exynos 7420)          14LPE          55.67    3.83             14.54 fps/W
Nexus 6P (Snapdragon 810 v2.1)     20SoC          58.97    4.70             12.54 fps/W
Huawei Mate 8 (Kirin 950)          16FF+          41.69    3.58             11.64 fps/W
Huawei P9 (Kirin 955)              16FF+          40.42    3.68             10.98 fps/W
Huawei Mate 9 (Kirin 960)          16FFC          99.16    9.51             10.42 fps/W

Again in T-Rex, which is less ALU heavy and more texture, fill-rate and triangle-rate bound, we see the Kirin 970 reach impressive performance levels at impressively bad power figures. At 7.93W the phone doesn’t seem able to sustain the peak frequencies for long, as even on a second consecutive run we see performance drop as thermal throttling kicks in. So while the Kirin 970 slightly outpaces the Exynos 8895 in performance, it does so at 25% lower efficiency.

As dire as the previous paragraph might sound, against the Kirin 960 this is a vast improvement. So disastrous was the peak power of the Mate 9 that even at 28% higher peak performance, the Mate 10 still manages to be 53% more efficient, again validating Huawei’s marketing claims. At iso-performance I again estimate that the Kirin 970 is likely nearly twice as efficient as the Kirin 960.
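The same kind of sketch applied to the T-Rex table reproduces both numbers quoted above:

```python
# T-Rex offscreen: Mate 10 (Kirin 970) vs Mate 9 (Kirin 960), values from the table above.
fps_970, w_970 = 127.25, 7.93
fps_960, w_960 = 99.16, 9.51

perf_gain = fps_970 / fps_960 - 1                      # ~28% higher peak performance
eff_gain = (fps_970 / w_970) / (fps_960 / w_960) - 1   # ~53-54% better perf/W
print(f"Performance: +{perf_gain:.0%}, efficiency: +{eff_gain:.0%}")
```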

In all of this you’ll probably have noticed Qualcomm consistently at the top of the charts. Indeed, over the last few generations Qualcomm seems to be the only company that has managed to increase performance through architectural and process node improvements without the power budget exploding. On the contrary, Qualcomm has steadily been able to lower average power generation after generation, reaching an extremely impressive 3.5-3.8W on the Snapdragon 835. It’s widely quoted that a mobile GPU’s power budget is 1.5-2W, but over the last few years the only high-end GPU able to achieve that seems to be Adreno, and the gap seems to be widening generation after generation.

In my review of the Mate 8 there were a lot of users in the comments section who still deemed the performance of the Kirin 950’s T880MP4 unsatisfactory and uncompetitive. Unfortunately this is the widespread notion among most users and most media, and it was one of the main complaints about Huawei devices in the past. Today Huawei is able to compete at the top of the benchmarks, but at a rather ghastly hidden cost of efficiency and unsustainable power that is, to be perfectly honest, a lot harder to test and to communicate to users.

AnandTech is also partly guilty here; you only have to look at the top of the page: I really shouldn’t have published those performance benchmarks, as they’re outright misleading and reward the misplaced design decisions made by the silicon vendors. I’m still not sure what to do here, or on whom the onus falls. As long as vendors keep away from configuring devices with unreachable and unsustainable performance states in 3D workloads and stay within reasonable levels, the whole topic becomes a non-issue. If things don’t improve, we’ll have to take a hard look at how to handle these situations; I’m considering simply no longer posting GPU peak performance figures in device reviews and keeping them to separate, more technical SoC pieces such as this one.

Overall I think we’re at a critical point in time for the mobile GPU landscape. Qualcomm currently holds such an enormous lead in performance, density and efficiency that other silicon vendors who rely on IP vendors for their GPUs are in a tight and precarious situation in terms of their ability to offer competitive products. I see this as a key catalyst in why Apple has stated it plans to abandon Imagination as its GPU IP provider in upcoming SoCs, and why Samsung has accelerated efforts to replace Mali and introduce its in-house S-GPU, maybe as early as 2019. Over the course of the next two years we’ll be seeing some exciting shake-ups of the SoC GPU space, that’s for sure.

116 Comments

  • HStewart - Monday, January 22, 2018 - link

    One thing: I would not mind Windows for ARM if it had the following

    1. Cheaper than current products - 300-400 range
    2. No need for x86 emulation - not needed on such a product - it would be good as a Microsoft Office, email and internet machine, but not for PC apps
  • StormyParis - Monday, January 22, 2018 - link

    But then why do you need Windows to do that? Android, iOS and Chrome already do it, with a lot more other apps.
  • PeachNCream - Monday, January 22, 2018 - link

    It's too early in the Win10 on ARM product life cycle to call the entire thing a failure. I agree that it's possible we'll be calling it failed eventually, but the problems aren't solely limited to the CPU of choice. Right now, Win10 ARM platforms are priced too high (personal opinion) and _might_ be too slow doing the behind-the-scenes magic necessary to run x86 applications. Offering a lot more battery life, which Win10 on ARM does, isn't enough of a selling point to entirely offset the pricing and limitations. While I'd like to get 22 hours of battery life doing useful work with wireless active out of my laptops, it's more off mains time than I can realistically use in a day so I'm okay with a lower priced system with shorter life (~5 hours) since I use my phone for multi-day, super light computing tasks already. That doesn't mean everyone feels that way so let's wait and see before getting out the hammer and nails for that coffin.
  • jjj - Monday, January 22, 2018 - link

    The CPU is the reason for the high price, SD835 comes at a high premium and LTE adds to it.
    That's why those machines are not competitive in price with Atom based machines.
    Use a 25$ SoC and no LTE and Windows on ARM becomes viable with an even longer battery life.
  • PeachNCream - Monday, January 22, 2018 - link

    I didn't realize the 835 accounted for so much of the BOM on those ARM laptops. Since Intel's tray pricing for their low end chips isn't exactly cheap (not factoring in OEM/volume discounts), it didn't strike me as a significant hurdle. I'd thought most of the price was due to low production volume and attempts to make the first generation's build quality attractive enough to have a ripple effect on subsequently cheaper models.
  • tuxRoller - Monday, January 22, 2018 - link

    I'm not sure they do.
    A search indicated that in 2014 the average price of a Qualcomm solution for a platform was $24. The speculation was that the high-end SoCs were sold in the high $30s to low $40s.

    https://www.google.com/amp/s/www.fool.com/amp/inve...
  • jjj - Monday, January 22, 2018 - link

    It's likely more like 50-60$ for the hardware and 15$ for licensing for a 700$ laptop - although that includes only licenses to Qualcomm, and they are not the only ones getting paid.
    Even a very optimistic estimate can't go lower than 70$ total and that's a large premium vs my suggestion of a 25$ SoC with no LTE.
    An 8-core A53 might go below 10$, something like the Helio X20 was around 20$ in its time, and one would assume that the SD670 will be 25-35$, depending on how competitive Mediatek is with the P70.
  • jjj - Monday, January 22, 2018 - link

    Some estimates will go much higher though (look at LTE-enabling components too, not just the SoC, for the S8). http://www.techinsights.com/about-techinsights/ove...
    Don't think costs are quite that high but they are supposed to know better.
  • tuxRoller - Monday, January 22, 2018 - link

    That's way higher than I've seen.

    http://mms.businesswire.com/media/20170420006675/e...

    Now, that's for the Exynos 8895, but I imagine prices are similar for Snapdragon.
    Regardless, these are all estimates. I'm not aware of anyone who actually knows the real prices of these (including licenses) who has come out and told us.
  • jjj - Monday, January 22, 2018 - link

    On licensing you can take a look at the newest 2 pdfs here https://www.qualcomm.com/invention/licensing.
    Those are in line with the China agreement they have at 3.5% and 5% out of 65% of the retail value. There would likely be discounts for exclusivity and so on. So, assuming multimode, licensing would be 22.75$ for a 700$ laptop, before any discounts (if any), BUT that's only to Qualcomm and not others like Nokia, Huawei, Samsung, Ericsson and whoever else might try to milk this.

    As for SoC, here's IHS for a SD835 phone https://technology.ihs.com/584911/google-pixel-xl-...
