Artificial Intelligence

For readers who are not too familiar with the new wave of neural networks and artificial intelligence, there are essentially two main avenues to consider: training and inference.

Training involves putting a neural network in front of a lot of data and letting it improve its decision-making with a helpful hand now and again. This process is often very computationally expensive, and is typically done in data centers.

Inference is actually using the network to do something once it has been trained. For a network trained to recognize pictures of flowers and determine their species, for example, the ‘inference’ part is showing the network a new picture and having it calculate what that picture is most likely to be. The accuracy of a neural network is its ability to succeed at inference, and the typical way of making a network better at inference is to train it more.
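
To make the two halves concrete, here is a deliberately tiny sketch (in Kotlin, purely for illustration, not anyone's production code): ‘training’ a one-weight model by gradient descent, then performing ‘inference’ with the learned parameters. A real network has millions of weights, but the shape of the process is the same.

```kotlin
fun main() {
    val xs = doubleArrayOf(0.0, 1.0, 2.0, 3.0, 4.0)
    val ys = doubleArrayOf(1.1, 2.9, 5.2, 6.8, 9.1) // noisy samples of y ≈ 2x + 1

    var w = 0.0
    var b = 0.0
    val learningRate = 0.01

    // Training: repeatedly nudge w and b to reduce the average squared error.
    repeat(5000) {
        var gradW = 0.0
        var gradB = 0.0
        for (i in xs.indices) {
            val err = (w * xs[i] + b) - ys[i]
            gradW += 2 * err * xs[i] / xs.size
            gradB += 2 * err / xs.size
        }
        w -= learningRate * gradW
        b -= learningRate * gradB
    }

    // Inference: one cheap, forward-only pass with the learned parameters.
    // This is the step an NPU accelerates at scale.
    println("learned w=%.2f, b=%.2f; prediction for x=10: %.2f".format(w, b, w * 10 + b))
}
```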

The mathematics behind training and inference is largely identical, just at different scales. There are methods and tricks, such as reducing the precision of the numbers flowing through the calculations, that trade memory consumption, power, and speed against accuracy.
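
A rough sketch of what that precision trade-off looks like in practice: the toy example below (illustrative only, not Huawei's actual scheme) squeezes 32-bit float weights into 8-bit integers, using a quarter of the memory and suiting integer hardware, at the cost of small rounding errors.

```kotlin
import kotlin.math.abs
import kotlin.math.roundToInt

fun main() {
    val weights = floatArrayOf(0.127f, -0.954f, 0.333f, 0.008f, -0.512f)

    // Symmetric linear quantization: map [-max, +max] onto [-127, 127].
    val scale = weights.maxOf { abs(it) } / 127f
    val quantized = ByteArray(weights.size) { i ->
        (weights[i] / scale).roundToInt().coerceIn(-127, 127).toByte()
    }

    // Dequantize to see how much accuracy was given up.
    for (i in weights.indices) {
        val restored = quantized[i] * scale
        println("%.4f -> %4d -> %.4f (error %.5f)"
            .format(weights[i], quantized[i], restored, abs(weights[i] - restored)))
    }
}
```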

Huawei’s NPU is an engine designed for inference. The idea is that software developers, using either Android’s Neural Network APIs or Huawei’s own Kirin AI APIs, can run their own pre-trained networks on the NPU and then use them in their software. This is much the same as how we run video games on a smartphone: developers use a common API (OpenGL, Vulkan) that leverages the hardware underneath.
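
As a sketch of that developer flow – assuming the TensorFlow Lite runtime, which is one common layer that can sit on top of Android's NN APIs, and a pre-trained model file bundled with the app – the hardware handoff amounts to a single option flag:

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Load a pre-trained .tflite model and ask the runtime to dispatch it through
// Android's Neural Networks API; the OS routes it to an NPU/DSP/GPU if a
// vendor driver is available. The model path and tensor shapes are placeholders.
fun loadModel(path: String): MappedByteBuffer =
    FileInputStream(path).channel.use { ch ->
        ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size())
    }

fun classify(modelPath: String, input: FloatArray): FloatArray {
    val options = Interpreter.Options().setUseNNAPI(true)
    Interpreter(loadModel(modelPath), options).use { interpreter ->
        val output = Array(1) { FloatArray(1000) } // e.g. 1000 class scores
        interpreter.run(arrayOf(input), output)    // one forward pass (inference)
        return output[0]
    }
}
```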

The fact that Huawei is stating that the NPU supports Android NN APIs is going to be a plus. If Huawei had locked it down to its own API (which it will likely use for first-party apps), then, as many analysts had predicted, it would have died with Huawei. By opening it up to a global platform, we are more likely to see NPU-accelerated apps come to the Play Store, possibly even as ubiquitously as video games appear now. Much like video games, however, we are likely to see different levels of AI performance with different hardware, so some features may require substantial hardware to come on board.

What Huawei will have a problem with regarding the AI feature set is marketing. Saying that a smartphone supports artificial intelligence, or that it is an ‘AI-enabled’ smartphone, is not going to be the primary reason for buying the device. Most users will not understand (or care) whether a device is AI capable, much in the same way that people barely discuss the CPU or GPU in a smartphone. This is AnandTech, so of course we will discuss it, but the reality is that most buyers do not care.

The only way that Huawei will be able to mass market such a feature is through the different user experiences it enables.

The First AI Applications for the Mate 10

Out of the gate, Huawei is supporting two primary applications that use AI – one of its own, and a major collaboration with Microsoft. I’ll start with the latter, as it is a pretty big deal.

With the Mate 10 and Mate 10 Pro, Huawei has collaborated with Microsoft to enable offline language translation using neural networks. This will come via the Microsoft Translator app, and consists of two main portions: word detection and then the translation itself. Normally both of these features would happen in the cloud and require a data connection, so this is the next evolution of the idea. It also leans on the broader push to move functionality that commonly lives in the cloud (better compute, but ‘higher’ costs and a data requirement) down to the device or ‘edge’ (power and compute limited, but ‘free’). Theoretically this could have been done offline many years ago, but Huawei is citing the NPU as allowing it to be done more quickly and with less power consumed.
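
The plumbing has not been made public, but conceptually the offline feature is just two chained stages of local inference; every name in the sketch below is hypothetical rather than Microsoft's actual SDK.

```kotlin
// Hypothetical two-stage offline pipeline: stage one finds the words (say,
// via OCR on a camera frame), stage two translates the detected text. Both
// run as on-device model inference, so no data connection is ever touched.
class OfflineTranslator(
    private val detectText: (ByteArray) -> String, // stage 1: word detection model
    private val translate: (String) -> String      // stage 2: translation model
) {
    fun translateImage(cameraFrame: ByteArray): String =
        translate(detectText(cameraFrame))
}
```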

The tie-in with Microsoft has potential, especially if it works well. I have personally used tools like Google Translate to converse in the past, and it kind of worked. Having something like that which works offline is a plus; the only questions are how much storage space is required and how accurate it will be. Both might be answered during today’s presentations announcing the device, although it will be interesting to hear what metrics they use.

The second application to get the AI treatment is photography. This uses image and scene detection to apply one of fourteen presets to get ‘the best’ photo. When this feature was originally described, it sounded as if the AI was going to be the ultimate pro photographer, changing all of the pro-mode settings based on what it thought was right – instead, it distils the scene down to one of fourteen potential scenes and runs a predefined script for the settings to use.
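
In other words, the ‘intelligence’ is confined to the classifier; everything downstream is a lookup table. A hypothetical reconstruction of that flow:

```kotlin
// The NPU only picks one of fourteen scene labels; the rest is a plain lookup
// of pre-baked settings, not per-shot 'pro photographer' tuning. All values
// here are invented for illustration.
enum class Scene { PORTRAIT, FOOD, NIGHT, SNOW, SUNSET, TEXT /* ...14 in total */ }

data class CameraPreset(val iso: Int, val shutterMs: Double, val saturation: Float)

val presets = mapOf(
    Scene.PORTRAIT to CameraPreset(iso = 100, shutterMs = 8.0, saturation = 1.0f),
    Scene.NIGHT to CameraPreset(iso = 800, shutterMs = 66.0, saturation = 0.9f),
    Scene.FOOD to CameraPreset(iso = 200, shutterMs = 16.0, saturation = 1.2f),
    // ...one fixed entry per scene
)

fun applyAiMode(sceneScores: FloatArray): CameraPreset {
    // Argmax over the classifier's output, then a table lookup.
    val best = sceneScores.indices.maxByOrNull { sceneScores[it] } ?: 0
    return presets[Scene.values()[best]]
        ?: CameraPreset(iso = 100, shutterMs = 16.0, saturation = 1.0f) // plain auto
}
```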

Nominally this isn’t a major ‘wow’ use-case for AI. It becomes a marketable feature for scene detection that, if enabled automatically, could significantly help the quality of auto-mode photography. But that is a feature users will experience without knowing AI is behind it: the minute Huawei starts to advertise it with the AI moniker, it is likely to get overcomplicated fast for the general public.

An additional side note: Huawei states that it is using the AI engine for two other parts of the device under the hood. The first is in battery power management – recognizing which parts of the day typically need more power, and responding through the DVFS curve to match. The idea is that the device can work out what power can be expended, and when, in order to provide a full day on one charge. Personally, I’m not too hopeful about this, given the light-touch explanation, but the results will be interesting to see.
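
Huawei offered no implementation details, but the behavior it describes amounts to a usage-aware cap on the DVFS curve. The heuristic below is entirely speculative, purely to pin the concept down:

```kotlin
// Speculative sketch: learn a per-hour usage profile, then bias the DVFS
// ceiling so predicted busy hours get headroom and quiet hours save power
// toward a full day on one charge. Thresholds and clocks are invented.
fun dvfsCeilingMHz(hourlyLoadHistory: DoubleArray, hourOfDay: Int): Int {
    val average = hourlyLoadHistory.average()
    val expected = hourlyLoadHistory[hourOfDay]
    return when {
        expected > average * 1.5 -> 2400 // predicted busy hour: allow boost clocks
        expected < average * 0.5 -> 1200 // predicted quiet hour: cap frequency
        else -> 1800
    }
}
```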

The second under-the-hood addition is in general performance characteristics. In the last generation, Huawei promoted that it had tested its hardware and software to provide 18 months of consistent performance. The details were (annoyingly) light, but they related to memory fragmentation (which shouldn’t be an issue with DRAM), storage fragmentation (which shouldn’t be an issue with NAND), and other functionality. When we pressed Huawei for more details, none were forthcoming. What the AI hardware inside the chip should do, according to Huawei, is enable the second generation of this feature, leading to better performance retention.

Killer Applications for AI, and Application Lag

One of the problems Huawei has is that while these use cases are, in general, good, none of them is a killer application for AI. The translate feature is impressive; however, as we move into a better-connected environment, it might be better to offload that sort of compute to servers if they are more accurate. The problem AI has on smartphones is that it is a new concept: with both Huawei and Apple announcing dedicated hardware for AI neural networks, and Samsung not far behind, there is going to be some form of application lag between implementing the hardware and getting the software right. Ultimately it is a big gamble for semiconductor designers to dedicate so much silicon to it.

When we consider how app developers will approach AI, there are two main directions. First, applications that already exist will add AI to their software. They have a hammer and are looking for a nail: the first out of the gate publicly are likely to be the social media apps, though I would not expect professional apps to be too far behind. The second segment of developers will be those creating new apps with AI as a requirement – their application would not work otherwise. Part of the issue here is having an application idea that is AI-limited in the first place, and then having a system that defaults back down to the GPU (or CPU) if dedicated neural network hardware is not present.
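
That fallback is straightforward to express if we again assume a TensorFlow Lite-style runtime: request NNAPI first, and back off to the plain CPU path if no accelerator driver honors it.

```kotlin
import org.tensorflow.lite.Interpreter
import java.nio.MappedByteBuffer

// Graceful degradation sketch: the app runs everywhere, it is just slower
// without dedicated neural network hardware behind NNAPI.
fun createInterpreter(model: MappedByteBuffer): Interpreter =
    try {
        Interpreter(model, Interpreter.Options().setUseNNAPI(true))
    } catch (e: Exception) {
        // No NNAPI driver, or delegation failed: default back down to the CPU,
        // with a few threads to soften the blow.
        Interpreter(model, Interpreter.Options().setNumThreads(4))
    }
```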

Then comes the performance discussion. Huawei was keen to point out that its solution is capable of running an image recognition network at 2000 images per minute, around double that of the nearest competition. While that is an interesting metric, it is ultimately a synthetic one – no one needs 2000 images identified every minute, every minute. Perhaps this could extend to video, e.g. real-time processing and image recognition combined with audio transcription for later searching, but the application that does that is not currently on smartphones (or if one exists, it is not using the new AI hardware).
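
For what it is worth, a figure like that is simple to reproduce in principle. A hypothetical harness would loop the same model and convert wall-clock time into a rate, with classify() standing in for whatever inference call the hardware exposes:

```kotlin
// Time a batch of identical inference calls and express it as images/minute.
fun imagesPerMinute(runs: Int, classify: () -> Unit): Double {
    repeat(10) { classify() } // warm up caches, drivers, and clocks first
    val start = System.nanoTime()
    repeat(runs) { classify() }
    val seconds = (System.nanoTime() - start) / 1e9
    return runs / seconds * 60.0
}
```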

One of the questions I put to Huawei and HiSilicon was about performance: if Huawei is advertising up to 2x the raw performance in FLOPS compared to, say, Apple, how is that going to affect my day-to-day use of the hardware? How is that extra horsepower going to generate new experiences that Apple’s hardware cannot? Not only did Huawei not have a good answer for this, it didn’t have an answer at all. The only answer I can think of that might be appropriate is that the ideas required haven’t been thought of yet. There’s that Henry Ford quote – ‘if you ask the customer, all they want is faster horses’ – meaning that sometimes a paradigm shift is needed to generate new experiences; a new technology needs its killer application. Then comes the issue of app development lagging behind these new features.

The second question to Huawei was about benchmarking. We already extensively benchmark the CPU and the GPU, and now we are going to have to test the NPU. Currently no real examples exist, and the applications using the AI hardware are not sufficient to get an accurate comparison of the hardware available, because a feature either works or it does not. Again, Huawei didn’t have a good answer for this, outside of its 2000 images/minute metric. To a certain extent, it doesn’t need an answer right now – the raw appeal of dedicated AI hardware is the fact that it is new. The newness is the wow factor; the analysis of that factor is something that typically occurs in the second generation. I made it quite clear that, as technical reviewers, we would be looking at how to benchmark the hardware (if not this generation, then perhaps the next), and I actively encouraged Huawei to work with the common industry-standard benchmark tools to do so. Again, Huawei has given itself a step up by supporting Android’s Neural Network APIs, which should open the hardware up to those developers.

On a final thought: last week at GTC Europe, NVIDIA’s keynote mentioned an understated yet interesting case where AI could help graphics. Ray tracing, which provides realistic scene rendering over polygon modeling, is usually a very computationally intensive task, but the benefit is an extreme payoff in visual fidelity. What NVIDIA showed was AI-assisted ray tracing: predicting the colors of nearby pixels based on the information already computed, then updating as more computation is performed. While true ray tracing for interactive video (and video games) might still be a far-away wish, AI-assisted ray tracing looks like an obvious way to accelerate the problem. Could this be applied to smartphones? With dedicated AI hardware such as the NPU, it could be a good fit to enable better user experiences.
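
To pin the concept down, the sketch below is a crude, non-learned stand-in: trace one pixel in four, then fill in the rest from already-computed neighbors. NVIDIA's demonstration replaces that simple averaging with a trained network – exactly the kind of inference pass an NPU runs – and traceRay() here is hypothetical.

```kotlin
// Trace 25% of the pixels, then "predict" the missing ones from traced (or
// previously filled) neighbors. A learned model would do this prediction far
// better; the averaging only illustrates where the network would slot in.
fun renderSparse(w: Int, h: Int, traceRay: (Int, Int) -> Float): Array<FloatArray> {
    val img = Array(h) { FloatArray(w) { Float.NaN } }
    for (y in 0 until h step 2)
        for (x in 0 until w step 2)
            img[y][x] = traceRay(x, y) // only one pixel in four is actually traced

    for (y in 0 until h) for (x in 0 until w) {
        if (!img[y][x].isNaN()) continue
        var sum = 0f
        var count = 0
        for (dy in -1..1) for (dx in -1..1) {
            val ny = y + dy
            val nx = x + dx
            if (ny in 0 until h && nx in 0 until w && !img[ny][nx].isNaN()) {
                sum += img[ny][nx]
                count++
            }
        }
        img[y][x] = if (count > 0) sum / count else 0f
    }
    return img
}
```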


103 Comments


  • melgross - Monday, October 16, 2017 - link

    Saturation is affected.
  • Aberamati - Monday, October 16, 2017 - link

    RGBW does affect the grayscale accuracy of the display, since one white pixel is responsible for 4 pixels
  • Valantar - Tuesday, October 17, 2017 - link

    While it might just be a poor implementation, look up any review of the Lenovo Yoga 2 Pro - one of the most prominent RGBW LCD implementations. The white subpixel kills contrast, makes for poor grayscale accuracy, and in general hurts color saturation.
  • Trixanity - Monday, October 16, 2017 - link

    Any word on the camera sensors? Is it the same sensors they've used for a while now?

    Also, interesting that they flipped the fingerprint sensors on the 10 series compared to the 9. The Pro had it on the front then and regular on the back. I wonder what the thought process is.
  • jjj - Monday, October 16, 2017 - link

    AI adjusting the camera settings is a big deal, as users are terrible at it. The sensors are pretty great, but users can't use them to their full potential, and fixing that is a huge deal - not saying that Huawei is there yet, as I've got no clue how well it works right now.
    Other small things like battery life, keyboard, maybe accidental touch detection can matter quite a bit.

    You do take a weird position on AI here: you practically ask what AI is good for, and then somewhat dismiss it. If you look at the GPU with the same mentality, it's even less of an asset.
    And it's not only about new experiences, it's also about better experiences and using less power for a given task.

    At the end of the day, it is true that the functionality of a smartphone doesn't have much room to evolve; we'll still do a few core things and not much else. That's why it's all about design and display now, that's the low-hanging fruit. First smaller bezels, and then foldable.
  • Valantar - Tuesday, October 17, 2017 - link

    I understand the article's approach to be more of "what's the point of an AI applying pre-defined, dumb filters" rather than a clear dismissal - a question I find key to this whole thing. Also: what's the difference between this "AI" implementation (that isn't really anything more than a fancy algorithm generated by training a neural network) and the algorithms that generally do this job? Okay, so they can adapt to the subject of your photos, rather than measurements like light and focus distance. But it won't adapt to the user's input over time - like, say, how I prefer noisier flash-free indoor photos to flash-lit ones - which makes this just as dumb and non-intelligent as any other algorithm.
  • FreidoNumeroUno - Saturday, October 21, 2017 - link

    If it deserves to be called AI, it needs to adapt to user patterns and experience. To do this, systematic user feedback is necessary.
    Until that point, it's a placebo. We don't know the difference. We can only assume there could be some difference.
    To successfully implement AI, the language and logic should change. Today's binary logic should be replaced with multi-valent logic like ternary logic, and the semiconductor blocks should be replaced with appropriate methods.
    Companies are on the advertisement-profit path. This should change.
  • Aberamati - Monday, October 16, 2017 - link

    No headphone jack, No expandable storage, smaller, lower resolution PenTile display. Why is that called a Pro? It's worse than the standard model!
    And the cat. 18 LTE is only on the Pro, so you've got it wrong.
  • halcyon - Tuesday, October 17, 2017 - link

    You forgot that only the non-Pro RGBW display goes to a bright 730 nits. So the 'Pro' display is also dimmer. Huawei tried to handwave through this too, but it is clearly there in the presentation and the specs.

    The only thing missing is the LPDDR/eMMC/UFS lottery, which I am sure Huawei can pull off.
  • Cliff34 - Monday, October 16, 2017 - link

    When I think about AI on the phone, I imagine something like an assistant who can remind me when something is due, or tell me I got an SMS from my wife.

    The problem with AI right now is that it is in its infancy. Who really cares that my phone can find cats in my photos?

    Until someone develops an AI that can help you do things (instead of you checking the phone), AI is still more of a marketing fad than anything.

    I remember that ten years ago AR (augmented reality) was all the hype. People were predicting that one day we would have AR everywhere. Ten years later, the only place AR has made a big dent is the mobile Pokémon game.

    Until AI can make our lives easier, I won't be getting hyped about having it on my phone.
