This article is still fresh off the presses - I'm sure there are still typos/grammar to be fixed in the coming hours.
Just a general note: this is my first iPhone review, and hopefully it makes up for AnandTech not having a review of the iPhone 8/X last year. Unfortunately that happened at a time when the site no longer had a mobile editor, and I only rejoined a few months after that.
As always, feel free to reach out in the comments or via email.
One thing was not clear in the display section ... the XS display (OLED) uses more power on a black screen compared to the 8's LCD at the same brightness?! This doesn't sound right! Or is this power draw attributable to something else (face detection sensors, etc.)?
You asked about the display's matrix-controlling logic. Patent research found that Apple has developed LTPO TFT technology for that; however, so far it is confirmed to be used only in the displays LG Display makes for the Apple Watch. For smartphone OLEDs Apple uses older, simpler technology for the transistor layer, as these big OLEDs are already super pricey anyway. And additional manufacturers like LG Display, where Apple has started ramping up production of iPhone XS/Max displays, have only very few Canon Tokki equipment sets, so they cannot really bring manufacturing prices down. (And Apple cannot move to different equipment, as it has designed the iPhone X/XS/Max displays specifically for Canon Tokki equipment.)
I understand the regression in the battery test you run: it is basically browsing the web, which is by and large white backgrounds, generally above the 60-65% threshold where the efficiency crossover between OLEDs and LCDs happens. It actually makes sense for OLEDs to be less efficient in web-browsing-based battery tests. However, I was surprised that you measured less efficiency on a pure black screen!
Common misconception - OLEDs don't just turn off fully for pure blacks; it takes active power to create a black. When they're off they're a murky grey.
OLEDs are simply much, much less efficient than the crystalline LEDs used as LCD backlights. They only have an advantage when the image is mostly black, which just isn't the case almost anywhere on the web or elsewhere.
And the OLED displays in the X/XS are PenTile, so red and blue only have half the nominal resolution. The indicated resolution actually applies to green only. But the GPU will probably need to work harder for the PenTile compensation algorithm.
It still looks smudged to me, especially at character edges (they're lined with tiny brownish/blueish pustules due to PenTile), and scrolling looks horribly janky, as if it were an old-time interlaced display, which is apparently due to the necessary PWM re-scanning.
Neither of these problems exists with the excellent LCDs Apple has been using since the iPhone 6, which still have proper full-resolution RGB pixels and, thanks to LCD inertia, scroll buttery smooth.
So if it's any of these devices, it'll be the XR for me, or none.
I thought the buttery-smooth-looking scrolling on LCD was due to more ghosting happening on the screen, no? When you're scrolling, text and images look clearer on OLEDs thanks to the lower amount of ghosting, but on LCDs pixels take more time to switch colors and that creates that smooth scrolling effect on the screen.
Yeah, I would also expect that the display controller should be able to go into low-power mode if it has all black pixels and it doesn't even need to scan in that state.
Maybe there was some mistake in the measurement or the display wasn't actually completely black but just relatively dark with still some pixels on at lower brightness.
It could also be that the controller needs some startup time so it might not be able to shut down unless it can really know for sure there won't suddenly be some bright pixels again.
Fantastic review! But you missed the biggest item: the 8-core Neural Engine. These cores are used by Apple for magic and huge acceleration of several tasks, including real-time photo processing, Face ID, etc. They can also be used in apps. The A12 has 18 cores: 2 large CPU, 4 small CPU, 4 GPU, and 8 NPU cores.
If you're going to count like that, you need to throw in at least 7 Chinook cores (small 64-bit Apple-designed cores that act as controllers for various large blocks like the GPU or NPU). [A Chinook is a type of non-Vortex wind, just like a Zephyr, Tempest, or Mistral...]
There's nothing that screams their existence on the die shots, but what little we know about them has been established by looking at the OS binaries for the new iPhones. Presumably if they really are minimal and require little to no L2 and smaller L1s (ie regular memory structures that are more easily visible), they could look like pretty much any of that vast sea of unexplained territory on the die.
It's unclear what these do today apart from the obvious tasks like initialization and power tracking. (On the GPU it handles thread scheduling.) Even more interesting: what is Apple's long-term game here? To move as much of the OS as possible off the main cores onto these auxiliary cores (and so the wheel of reincarnation spins back to System/360 channel controllers)? For example (in principle...) you could run the file system stack on the flash controller, or the network stack on a controller living near the radio basebands, and have the OS communicate with them at a much more abstract level.
Does this make sense for power, performance, or security reasons? Not a clue! But in a world where transistors are cheap, I'm glad to see Apple slowly rethinking OS and system design decisions that were basically made in the early 1970s, and that we've stuck with ever since then regardless of tech changes.
Great job as always Andrei! I would only have hoped for a more thorough exploration of the limits of portrait mode, to see if Apple really makes proper use of the depth map - taking a photo in portrait mode in a tunnel to see whether the amount of blur is applied according to distance, for example.
Thanks for this comment - now I know why there was nothing last year. A great AnandTech-standard review! Always above my level of understanding, with an info overload that teaches me a lot about the product.
Thank you for this great detailed review. I have been coming back to this review before making a purchase. Please make a comparison article of iPhone XS gaming vs other smartphones on the market. How much difference do thermals make over longer periods before the phone starts to throttle or heat up? Would you be able to give an approximate time before you noticed heat while gaming on the XS? I don't mind investing in an expensive phone as long as thermals don't limit the performance. There are phones like the Razer Phone 2 or the ROG Phone out, and people compare them with an iPhone as if it doesn't require much cooling. I wonder if gaming for 20+ minutes makes the iPhone heat up enough that you should be worried?
And the Exynos M3 had 12 execution ports, right? Can you elaborate on the major differences between the design of the Vortex core in the A12 and the Meerkat core in the M3? I would deeply appreciate it if you could.
And the Exynos M3 has 12 execution ports, right? Can you elaborate on the main differences between the design of the Exynos M3 and the Vortex core? That'll be really helpful and informative as well. Also, are you planning on writing a piece about the A12X SoC? It'll be really interesting to hear how far Apple has come with the SoC in the 2018 iPad Pro.
"Now what is very interesting here is that this essentially looks identical to Apple’s Swift microarchitecture from Apple's A6 SoC." This comparison doesn't make sense and it seems like you took the same execution ports to determine whether the chips are identical, when the ports could be arbitrary for each release. Rather I took the specifications (feature size, revision) and dates of each from these pages: https://en.wikipedia.org/wiki/Comparison_of_ARMv7-... and https://en.wikipedia.org/wiki/Comparison_of_ARMv8-... to come up with these matches: Cortex-A15-A9 A6, Cortex-A15-A9 A6X, Cortex-A57-A53 A7, Cortex-A57-A53 A8, Cortex-A57-A53 A8X, Cortex-A57-A53 A9, Cortex-A57-A53 A9X, Cortex-A73 A10, Cortex-A73 A10X, Cortex-A75 A11, Cortex-A76 A12, Cortex-A76 A12X. For exemplum 5 execution ports could be gotten (I'm no computer engineer so this is a SWAG.) from the 3 in Cortex-A9 subtracted from the 8 in Cortex-A15 but the later big.LITTLEs with 9 and 5 ports could be split from 7 or 8 as (7+2)+(7−2) or (8+1)+(8−3). You need to correct the Anandtech and Wikipedia pages.
faster -> swifter, swiftlier ISO -> iso -> idem 's !-> they; 1 != 2 great:small::big:lite::mickel:littel
It is weird because it is warmer in bright light and bleaker in dim light. Why can't they just even it out and make the photos less yellowish in bright light and less bleak in dim light?
Great review. I loved the SoC analysis. There's definitely something spooky going on in an SoC with three caches that are scattered throughout the die. You do mention that there are two more fixed-point ALUs, but when analyzing a SPEC test result that relies on execution units you said that the A12 didn't have any execution improvements. Don't the extra ALUs count as more execution resources?
It's clearly a nice device and there are areas that saw massive improvements and other areas that are more of the same. I really appreciate that your conclusion isn't "it's a great device so buy it" but "it's a great device but really expensive".
More than half the die shot was unlabeled. I found it strange that over 50% of the die wasn't worth discussing... what does it do? Are these fixed-function units, modems, ISPs, etc.?
It's really amazing how the CPU and GPU are taking less and less space on a SoC.
I returned my Max because of antenna signal issues. I upgraded from the 8 Plus, and while it was super fast it definitely has issues. I drive the same route to work every day, and in the few days I had the phone I had 4 dropped calls in the middle of conversations; when I looked at the screen it said call failed. On one call to my wife I had 2 call failures within 5 minutes. From my research the dropped calls are related to the new antenna system that Apple is using. Unless you are in a strong signal area you will get a lot of dropped calls. From what I'm reading this has nothing to do with switching from Qualcomm to Intel; it is directly related to the choice of antennas. Had Qualcomm had a chip ready to go with the same specs, it too would have similar signal issues because of the antennas. The other issue I was having was network connectivity. I would be connected to my wireless at home or at work and often get "page cannot be displayed" errors telling me to check my network, even though I was clearly connected to my wireless network. I would turn the wireless off and on and it would start working. Speeds were crazy too: one minute you'd get really fast throughput and the next it would be crazy slow. I'd hold off on purchasing the new phones until Apple sorts out the bugs.
ehh?? everybody knows that if you want to use a telephone, you get a landline. mobile phones ceased being about phone calls at least a decade ago. what?? for a Grand$ you want to talk to someone??? how Neanderthal.
On a serious note though, I do wonder how long we'll even have a phone network or carriers that treat voice, text, and data as individual entities. We can and have already been able to do VoIP for what feels like ever and calling over WiFi is a thing. It'd make sense to just buy data and wrap up voice and text inside 4G or whatever.
"It'd make sense to just buy data and wrap up voice and text inside 4G or whatever."
I suspect that some scientists/engineers have run the numbers to see what that does to data capacity. which may be why it hasn't happened. or, it could also be that the data scientists at the carriers (and phone vendors) have found that smartphone users really don't make enough phone calls to warrant supporting decent quality.
Exactly. In India we have a new carrier called Jio doing exactly that. They built their network from the ground up on VoIP and only sell data. Calls and SMS are totally free. And man, can you imagine, I am using their plan of 2GB of data per DAY for 90 days at just USD 8!!!
What, your mobile data speed has been only 11Mbps? Where do you live, and how on earth do you use your phone like that?! I regularly get over 100Mbps where I live in London, with the highest I have recorded being 134Mbps.
Little early to jump ship? Do you remember antennagate? Steve Jobs' answer to a design issue: "you are holding the phone wrong". Sorry, not forking out 1500 for a phone that has cellular as well as network related issues. How this ever passed inspection in testing is beyond me. I'm an Apple fan and have owned every iPhone since they started making them. This is without a doubt the worst performing iPhone I ever purchased. Yes, it was snappy and apps flew open. But if I can't use the phone aspect or wireless networking at home, I may as well hold a paperweight to my ear. I'm heavily entrenched in the Apple ecosystem. Watches, TV, iPads and a MacBook Pro. Not to mention all the movies, music and apps that I've purchased. This pisses me off. 1500 for a piece of shit phone that was not properly tested before being released.
The thing that jumps out to me is the power usage and performance gains at the same time. TSMC's 7nm process looks really good. I wonder if this will also play out on CPUs/GPUs on the same process coming soon.
It’s hard to say how much of what Apple gets out of these processes others will get. Apple has bought a lot of equipment for TSMC, as they’ve done for other companies, including Samsung, over the years. What they get out of it is early access to new processes, as well as some more specialized features that they have often developed themselves, as well as extra discounts.
I still think the star of the show may end up being the neural processor, up from 660 billion ops/sec to 5 trillion (9x the speed at a tenth the energy usage) - now available to developers via Core ML 2, and now pipelined through the ISP.
Probably nowhere near as good as it is projected here. What is the bloody point of these supposedly super fast SoCs with massive caches and execution units when the OS running on them is a highly limited, crippled and locked-down OS like iOS? No emulators, no support for USB peripherals like gamepads, mice or keyboards, a poor excuse for file management, no access to outside apps, no ability to run non-WebKit-based browsers, no ability to seamlessly transfer data to and from Windows desktops, and multiple other limitations.
All well and good getting excited over this SoC, but it's like having a 450 BHP Ferrari running on an 80mph speed limiter with an automatic transmission. All that power is useless and will remain unutilised.
Until they allow other OSes to run on their hardware, the comparisons are meaningless. I can do a hell of a lot more on a puny 835 than I will ever be able to do on an A11 or A12.
It'll be interesting to see what comes of that effort - so much of the Android market is feature phone replacements - the actual percentage which represents high end flagship phones is pretty slim.
If a bigger percentage of the Android market is willing to pay for silicon with that increased compute capacity, there may be hope.
But will they? Microprocessor Report says that each new SoC from Apple costs, in the year of introduction, between $34 and $38. Top Android SoCs cost between $24 and $28. That's a big difference. Those phones compete with each other on price, while Apple mostly competes against itself.
The reason, or I should say, one of the reasons why other manufacturers are copying Apple's notch is that it allows a full-screen phone. Notice that Samsung has edge-to-edge, but still retains a shorter screen on both the top and bottom. The notch removes 2-3% of the screen area, depending on screen size, which is very little. Now take the Galaxy S9+, and you'll notice that when the top and bottom cutoffs are added together, they amount to about 10% of the possible screen size.
It’s why the new Pixel and others are moving to the notch. In the cases where the notch is just being copied, it’s ridiculous, because those companies are still using a chin on the bottom, and the notch is just there to copy the iPhone.
No. They are copying the iPhone for the look. If they were copying to maximize screen area, they would copy Essential. Apple removed the chin, and the notch is needed for Face ID - they have a good reason. Some copycats (not all) ape the look and keep the chin. A good compromise is the impending OnePlus 6T: it has a teardrop notch and a minimized chin that actually looks good. Asus is on the other side of the spectrum; they even bragged that they wanted to copy the look of the iPhone X.
Not on an S release! It bugs me a little that people want changes in all aspects of the phone when it's an S release - a trend that has long been established to be about a few internal changes.
Also, the X was a pretty big change; it took everyone a bit to calibrate to the new hardware. But now, with the S release, nothing is enough, when an S iPhone was never meant for the n-1, non-S iPhone buyer. Those people, if upgrading, would probably go with a newer phone rather than a year-old phone.
Great review! I have a question regarding the off angle color shift of the screen. Given even small off angle viewing (5-10 degrees), I noticed a considerable blue shift in the color. Is this normal for this screen? It's not a big deal in practice, as I rarely look at my phone off angle when using it, but it'd be nice to know if this is expected behavior or not.
I upgraded from a regular iPhone 7 to the XS Max. The biggest surprise for me is Face ID. It's far more reliable than a fingerprint scanner. I do lots of home improvement and car repair projects, and the fingerprint scanner would always fail due to roughed-up hands. Face ID is so fast you barely notice it. And it works from harsh angles - I can pull the phone from my pocket, look down at it, and it unlocks... every time. The "notch" is not ideal... but it does enable a feature that ACTUALLY works better (for me) than Touch ID ever did.
There are less expensive products for people that don't have or don't want to spend as much, but I do agree in principle. The XS and Max are amazing phones that come with an outrageous price tag. The A12 is an impressive SoC, but it should be, given that the handset it's inside of costs more than the retail price of most desktops and laptops.
Pretty typical with any high end products. The top 10% pave the way for the rest to have these products at a reasonable price a few years later. You can get an iPhone 7 pretty cheap now.
It’s still cheaper than my first PC, a 486SX2 running at 50MHz. RAM and hard drives were still measured in megabytes, and the internet made noise before connecting and tied up your phone line when you used it. There has also been about 20 years of inflation. Flagship smartphones are expensive, but they sure do a lot. That doesn’t mean I’m buying one, but we’ve come a long way in my hopefully-less-than-half-a-lifetime.
OMG! Exactly what I was thinking as I read this review on my $225 T-Mobile REVVL Plus. I may not be able to afford such a technological marvel as the iPhone XS Max, but I bet I get anywhere from 80 to 90% of the overall functionality for one-fifth the price. I've bought many premium smartphones over the years, starting with the HTC EVO 4G LTE many years ago, followed by the Samsung Galaxy S3, then the S4, and even the gigantic Asus Zenfone 3 Ultra. Each phone was better than the one before, and yet each was a major disappointment to me for various reasons which I won't go into here. Suffice to say that the ever increasing cost of each phone raised my expectations about what they should be able to do, and thus contributed to my sense of disappointment when they failed to live up to the hype. So when the Zenfone 3 crapped out on me after less than a year of use and I saw this cheap REVVL Plus, I decided to stop wasting so much money on these overpriced devices and buy something that wouldn't leave me feeling robbed or cheated if it didn't turn out to be the "next best thing". Now, after almost a year of use, I feel like it was a good decision. And if something better comes along in a few months at a similar price point, I can buy it without feeling remorse for having wasted so much money on a phone that didn't last very long. So all you 10-percenters - go ahead and throw away $1,200 on a phone. I'm quite content to have a 2nd-rate phone and save a thousand dollars.
You say you spent $225 on your phone less than (but almost) a year ago and then say that you would be willing to replace it immediately if some other phone interested you. So you are apparently willing to spend around $225 for one year of ownership of a phone.
By this metric you should be willing to spend $1000 on a phone provided you keep it for 4 years or more.
Now it may be the case that you don’t want to keep any phone for four years, and so the iPhone X[S] is not for you. But here I am with a four-year-old iPhone 6+ that still works great (thanks to iOS 12). I similarly expect the iPhone X[S] to be good for four years at least, so, although I am not a “10%er”, I am seriously considering purchasing one.
It’s simply a fallacy to assert that only the wealthy would be interested in the latest iPhone models.
"Now it may the the case that you don’t want to keep any phone for four years, and so the iPhone X[S] is not for you. But here I am with an four year old iPhone 6+, that still works great (thanks to iOS 12). "
ergo, Apple's problem. unfulfilled TAM for iPhones is disappearing faster than kegs at a Georgetown Prep gathering. keeping one longer than a cycle is a real problem for them. they will figure out a way to stop such disloyalty.
Hi Andrei Frumusanu, thanks for the extraordinary review of the iPhone XS!
On page one you said the A12 GPU is a 4-core "G11P" @ >~1.1GHz. I have several questions: 1. How do you estimate that clock speed? 2. If you know the clock speed, can you estimate how many GFLOPS FP32 and FP16 the A12 GPU achieves?
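For reference, the usual back-of-envelope formula is GFLOPS = cores × FP32 lanes per core × 2 (for FMA) × clock. A sketch below, with the lane count being a pure assumption, since Apple doesn't publish the G11P configuration:

```c
/* Rough GFLOPS estimate: FLOPS = cores * FP32 lanes/core * 2 (FMA) * clock.
   The lane count below is a guess, not a confirmed A12 figure. */
#include <stdio.h>

int main(void) {
    const double cores = 4.0;          /* "G11P" core count from the article   */
    const double lanes = 32.0;         /* ASSUMED FP32 ALUs per core, unconfirmed */
    const double flops_per_lane = 2.0; /* fused multiply-add = 2 FLOPs/cycle   */
    const double clock_ghz = 1.1;      /* article's ">~1.1GHz" estimate        */

    double gflops_fp32 = cores * lanes * flops_per_lane * clock_ghz;
    printf("FP32: ~%.0f GFLOPS, FP16 (if 2x rate): ~%.0f GFLOPS\n",
           gflops_fp32, gflops_fp32 * 2.0);
    return 0;
}
```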
Great review of the SoC. Please, when you review the Pixel 3, or (in 2019), updated Snapdragons, hold them to this bar. I get really frustrated when I see your (or other) reviews complimenting the latest Snapdragon even though they're miles behind the Ax. As an Android user, I find it very unfortunate that to get my OS of choice I must get inferior hardware.
So you would expect them to use that powerful SOC to deliver real battery improvements, but somehow they can't. No one I speak to complains that their modern smartphone is slow, but everyone complains about battery life.
Andrei, thanks for the review! Yes, these are outstanding phones at outrageous prices. I appreciate the in-depth testing and detailed background, especially on the A12's CPU and GPU. While I don't own an iPhone and don't like iOS, I also believe that, phone-wise, the XS and XS Max are the new kings of the hill. The A12's performance is certainly in the PC laptop class, and I wonder if (or how) the recent Apple-Qualcomm spat that kept QC's modem tech out of the new iPhones has helped Intel keep its status as CPU provider for Apple's laptops, at least for now.

One final comment, and one question. Andrei, I agree with you 100% that Apple missed an opportunity when they decided on a rather middling battery capacity for the XS Max. If I buy a big phone, I expect a big battery. Give the XS Max a 5000 mAh or larger battery, and it really is "the Max", at least among iPhones. At that size, a few mm of additional thickness are not as important as run time. Maybe Apple kept that upgrade for its mid-cycle refresh next year - look, bigger batteries!

@Andrei: I remember reading somewhere that the iPhone X and 8 used 128-bit wide memory buses. Is that the case here, and how does the memory system and bus compare to Android phones? And, in your estimate, how much of the A12's speed advantage is due to Apple's memory speeds and bus width?
A12 is definitely not in the laptop class unless you're looking at the extreme low power usage tier.
Just because it is quite a bit faster than the equivalent on mobile does not mean it can compete at a different power envelope. If that were true, Intel would already have dominated the SoC market. It requires a completely different CPU design. It's why they can use it for the Touch Bar on the MacBook Pro but not as a main processor.
This review did not compare the A12 with “mobile” Intel chips but rather with server chips. The A12 is on par with Skylake server CPUs on a single threaded basis. Let that sink in.
As to why Intel doesn’t dominate the SoC space: Intel’s designs haven’t been energy-efficient enough, and the x86 instruction set offers no advantage on mobile.
It's already competing with laptop and desktop class chips, not just mobile fare. It's up there per core with Skylake server, and NOT normalized per clock, just core vs core.
It's like people don't read these articles year over year and are still using lines from when A7 arrived...
I know this is almost a side point, but this really goes to show what a mess Android (Google/Qualcomm) is compared to iOS. At the rate Snapdragon is improving, it'll be 2020/2021 before Qualcomm sells a chip as fast as 2017's A11, and Google is shooting itself in the foot by not having APIs available that take advantage of Snapdragon's (relative) GPU strength.
That's on top of other long-term Android issues (like how in 2018, Android phones still can't handle a 1:1 match of finger movement to scrolling, which the iPhone could in 2008). Honestly, if I weren't so invested in Android at this point, I'd seriously consider switching now.
I switched. Long-time Nexus user, and I feel Google is leaving the value-oriented base behind. It took me saying "screw it" and paying more for sure, but it has been worth it. I found the hardware lacking, and the software too, since they went to Pixel. Sure, on paper it sounded great or fine, but issue after issue would creep up. I never had that many problems with iOS, and I feel it's mature enough now, coming from someone who likes to change settings. The X sold it for me hardware-wise too. It was where I saw things going years ago, and I'm glad it's here. (Waiting on the Max now, however, as I want the real estate!)
Android just doesn't have direct CPU optimization for latency with scrolling. That said, look at phones with Android 8+, that "issue" is pretty much fixed.
For me, most of what came after iOS 7 is not really smooth scrolling anymore; it was one of the first things I noticed back then. There were dropped frames. You'd probably blame hardware, though, as it was sorted when upgrading to an iPad Air later. Still not a fan of a lot of the UI changes in 11, tbh.
I complained about frame drops from iOS7-11, often to crowds of people who would just deny it and say I was seeing things, but Apple addressed it in a WWDC talk and it's much much better in iOS12.
I can still notice a frame drop here and there if I'm being picky, but it's vastly improved; I'm guessing 1 frame drop for every 10 on iOS 11.
Since the Pixel 1, it has felt pretty much as tight as my iPhone at keeping up with my finger, imo (though 120Hz touch-sensing iPhones may have pulled ahead again).
So the A12 is basically a Skylake. Also on Anandtech I read that the first ipad pro was almost a Broadwell(like on the 12" MB). Makes sense. A-series powered macbooks surely are in the future.
The color management system is again what puts iOS above Android. Samsung tries with color profiles, but it's not a solution. Google fails its ecosystem yet again. Also, OpenCL is a mess; no wonder Apple dropped it. It's unreasonable to expect Google to throw its weight behind it.
The only thing better than A12 is this review. Absolutely SMASHING review Andrei! Your SoC analysis in particular, off the charts awesome; way more descriptive than lowly Geekbench.
Andrei, Great work, and a welcome surprise to see these thorough AnandTech reviews return. I found the photo and video discussions particularly informative and compelling. Thanks so much for all your hard work and detailed analysis. Barry
Andrei, could you expand on a few camera issues: - Is it correct that the wider video color range is ONLY at 30fps? Why would this be? - I have always videoed at 60fps, finding 30fps very noticeable with much movement. If this is correct, it seems like this 30fps color improvement only works in a limited number of situations, with very little movement - Given the A12 performance, why can’t Apple have 480fps or 960fps SloMo like Samsung? - Finally, with the Neural Engine, will Apple potentially be able to improve the camera system by re-programming this?
> - Is it correct that the wider video color range is ONLY at 30fps?
Probably the sensor only works at a certain speed and the HDR works by processing multiple frames. Btw, it's wider dynamic range, not colour range.
> Given the A12 performance, why can’t Apple have 480fps or 960fps SloMo like Samsung?
Likely the sensor is missing dedicated on-module DRAM - which seems to be a requirement for those high framerates.
> - Finally, with the Neural Engine, will Apple potentially be able to improve the camera system by re-programming this?
They can improve the camera characteristics (choice of exposure, ISO, etc) but I find it unlikely they'll get into things that actively improve image quality - that's something next-gen.
Wider color range only works at 30fps because the camera actually records 2 frames at different exposures (at 60fps) and combines them.
As far as higher-FPS slow-mo goes, I’m sure it boils down to taking in enough light at high frame rates to be usable, or it's just not something enough users care about. Anecdotally, I’ve only used it just to test it and never again.
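Conceptually, the exposure pairing could look like the sketch below - purely illustrative, not Apple's actual pipeline; the merge rule and exposure values are invented:

```c
/* Illustrative only - not Apple's pipeline. Shows why pairing alternating
   short/long exposures captured at 60fps yields a 30fps HDR output. */
#include <stdio.h>

#define PIXELS 4

/* Naive merge: trust the long exposure unless it clipped, then fall back
   to the short exposure scaled up by the exposure ratio. */
void merge_hdr(const float *shortExp, const float *longExp,
               float ratio, float *out) {
    for (int i = 0; i < PIXELS; i++)
        out[i] = (longExp[i] < 1.0f) ? longExp[i] : shortExp[i] * ratio;
}

int main(void) {
    float shortExp[PIXELS] = {0.10f, 0.45f, 0.30f, 0.05f}; /* frame n   */
    float longExp[PIXELS]  = {0.40f, 1.00f, 1.00f, 0.20f}; /* frame n+1 */
    float hdr[PIXELS];

    merge_hdr(shortExp, longExp, 4.0f, hdr); /* 4x exposure ratio assumed */
    for (int i = 0; i < PIXELS; i++)
        printf("pixel %d: %.2f\n", i, hdr[i]);
    return 0;
}
```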
All of my iPhones also have a white point that appears significantly lower than 6500K, judging by eye. I do not have a colorimeter, but they do seem significantly off with certain content.
Unless the review models are cherry picked, I do not see retail units reaching the same quality.
The reference image is a screen shot of the content to be displayed so source content should be preserved.
The color differences in the camera photo do reflect what I see with my eyes to a significant degree.
That is, on my iPhone 7 and all my monitors in the house, the screenshot appears deep red at the top with a greyish red on the listed content.
On all the iPhone XS's in household, the content appears light red at the top, with the listed content becoming a distinct shade of brown instead of a greyish red.
I appreciate you reading my post. The displays on my models do not seem all that accurate to me.
Have you properly calibrated the other phones and monitors in your house with a colorimeter? You'd be surprised at the awful dE on most monitors out of the box.
When the displays were tested for colour accuracy they were marked very highly, so it could very well be that the XS is showing the CORRECT colours.
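For anyone wondering what that dE figure measures, here is the simplest variant (CIE76) as a sketch; display reviews generally use the more elaborate dE2000 weighting, but the idea - a distance between two Lab colours - is the same:

```c
/* CIE76 delta-E: the simplest colour-difference formula. The Lab values
   below are hypothetical, just to show the calculation. */
#include <math.h>
#include <stdio.h>

double deltaE76(double L1, double a1, double b1,
                double L2, double a2, double b2) {
    return sqrt((L1 - L2) * (L1 - L2) +
                (a1 - a2) * (a1 - a2) +
                (b1 - b2) * (b1 - b2));
}

int main(void) {
    /* Hypothetical Lab values for a target red vs. a displayed red */
    printf("dE = %.2f\n", deltaE76(40.0, 60.0, 45.0, 42.0, 52.0, 40.0));
    return 0;
}
```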
So you are saying that all my previous iPhones I have retired, my current MacBook, my spouse's iMac, all the TVs in the household, and my two (non-Apple) desktops are all incorrect?
AMC's brand color is red. The XS screen shows something similar to dried blood or a brown-red.
Did the app designers also have incorrectly calibrated monitors? The XS calibration is the problem, not the other way around.
And you've got night shift and true tone off? It could just be you have a dodgy OLED panel, check it against the ones in a store and exchange it if so.
But you'd be surprised how awful most consumer electronics are for colour calibration, people like "pop" and "vivid" even when it's totally oversaturated and nowhere near accurate.
The calibration charts in this article should point to the XS having exceedingly accurate colours.
I compared against the 4 other iPhone XS units in my household, and all of them failed the red reproduction test.
I also noticed all the displays have a warm white point (looks noticeably lower than 6500K). When I was next to a T-Mobile store I stepped in, turned off True Tone, and observed the same.
I will have to AirDrop the screenshot to an Apple Store display, but with so many witnessed off-spec displays (7 out of 7 out of spec), I am convinced anything in spec is cherry-picked.
Of course True Tone is off.
Maybe this reviewer can take a photo of the AMC red next to a calibrated display.
The website can choose to send different content to different models of iPhone. Or, if the color is in the wide color range, the phones would show the same color differently if one has P3 and one doesn’t.
My iPhone XS and my iMac 5K show essentially identical images. It's in between your 7 and XS, but closer to the 7. I think your XS might have a defective display, unless True Tone is doing something funky due to your lighting.
Are you trying to insinuate that only certain colors, e.g. reference shades, are supposed to match, and any colors in between are allowed to be a free-for-all?
What difference is it if I am comparing an app with many colors on a gradient, or a few shades from the gradient itself?
If you really want to be that pedantic, I can pick 10 shades from the AMC app and rectangles of pure color to act as a better reference.
With a discrepancy so large, I fail to see how this would produce different results.
Given the problems being discussed on other sites, I was also hoping to see a deep dive on the WiFi and LTE capabilities/issues. Is that something that you are looking into?
- the detailed Vortex and GPU die shots seem to bear no resemblance to the full SoC die shot. I cannot figure out the relationship no matter how I try to twist and reflect...
Because I can't place them, I can't see the physical relationship of the "new A10 cache" to the rest of the SoC. If it's TIGHTLY coupled to one core, one possibility is value prediction? Another suggested idea that requires a fair bit of storage is instruction criticality tracking.
If it's loosely coupled to both cores, one possibility is it's a central repository for prefetch? Some sort of total prefetching engine that knows the usage history of the L1s AND L2s and is performing not just fancy prefetch (at both L1s and L2s) but additional related services like dead block prediction or drowsiness prediction?
The Vortex and GPU are just crops of the die shot at the top of the page. The Vortex shot is the bottom core rotated 90° counter-clockwise, and the GPU core is either top left or bottom right core, again rotated 90° ccw so that I could have them laid out horizontally.
The "A10 cache" has no relationship with the SoC, it's part of the front-end.
OK, I got ya. Thanks for the clarification. I agree, no obvious connection to L2 and the rest of the SoC. So value prediction or instruction criticality? VP mostly makes sense for loads, so we'd expect it near LS, but criticality makes sense near the front end. It's certainly something I'm partial to, though it's been mostly ignored in the literature compared to other topics. I agree it's a long shot, but, like you said, what else is that block for?
"The benchmark is characterised by being instruction store limited – again part of the Vortex µarch that I saw a great improvement in."
Can you clarify this? There are multiple possible improvements. - You state that the A12 supports 2-wide store. The impression I got was that as of the A11, Apple supported the fairly traditional 2-wide load/1-wide store per cycle. Is your contention that even as of the A11, 2 stores/cycle were possible? Is there perhaps an improvement here along the lines of: previously the CPU could sustain 3 LS ops/cycle (pick a combination from up to 2 loads and up to 2 stores) and now it can sustain 4 LS ops/cycle?
- Alternatively, are the stores (and loads) wider? As of the A11, the width of one path to the L1 cache was 128 bits. It was for this reason that bulk loads and stores could run as fast using integer load/store-pair as using vector load/stores (and there was no improvement in using the multi-vector load/stores). When I spoke to some Apple folks about this, the impression I got was that they were doing fancy gathering in the load/store buffers before the cache, so there was no "instruction" advantage to using vector load/stores: whatever instruction sequence you ran, it would gather as aggressively and as widely as possible before hitting the cache. So if the LS queue is now gathering to 256 bits wide, that's going to give you double the LS bandwidth (of course for appropriately written, very dense back-to-back loads/stores; see the quick arithmetic after these alternatives).
- Alternatively, do you simply mean that non-aligned load/stores are handled better (e.g. load/stores that crossed cache lines were formerly expensive and now are not)? You'd hope that code doesn't do many of these, but nothing about C-code horrors surprises me any more...
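To put a number on the wider-path alternative above: doubling the per-port width from 128 to 256 bits doubles peak L1 load bandwidth. Trivial arithmetic, with the port count and clock being the figures under discussion rather than confirmed specs:

```c
/* Peak L1 load bandwidth = ports * bytes per port * clock.
   Port count and clock are assumptions from the discussion above. */
#include <stdio.h>

int main(void) {
    const double clock_ghz = 2.5;   /* A12 big-core peak clock           */
    const int load_ports   = 2;     /* assumed loads sustained per cycle */

    for (int bits = 128; bits <= 256; bits *= 2)
        printf("%d-bit paths: %.0f GB/s peak L1 load bandwidth\n",
               bits, load_ports * (bits / 8.0) * clock_ghz);
    return 0;
}
```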
BTW, it's hard to find exactly comparable numbers, but https://www.anandtech.com/show/11544/intel-skylake... shows the performance of a range of different server class CPUs on SPEC2006 INT, compiled under much the same conditions. A12 is, ballpark, about at the level of Skylake-SP at 3.8 to 4GHz... (Presumably Skylake would do a lot better in *some* FP bcs of AVX512, but FP results aren't available.) This gives insight across a wider range of x86 servers than the link Andrei provided. The ideal would be to have SPEC2006 compiled using XCode for say the newest iMac and iMac Pro, and (for mobile space) MacBook Pro...
But it is unclear that the benchmark is especially useful. In particular if it's just generic C code (as opposed to making special use of the Apple NN APIs) then it is just testing the CPU, not the NPU or even NN running on GPU.
You scored 2162. The iPhone 6S scores 642 (according to the picture). That sort of 3.5x difference looks to me like a lot less than the boost I'd expect from an NPU, and may just reflect a basically 2x better CPU plus the availability of the small cores (not present on the iPhone 6S).
"Apple promised a significant performance improvement in iOS12, thanks to the way their new scheduler is accounting for the loads from individual tasks. The operating system’s kernel scheduler tracks execution time of threads, and aggregates this into an utilisation metric which is then used by for example the DVFS mechanism."
These are not the only changes in the newest Darwin. There are also changes in GCD scheduling. There was a lot of cruft surrounding that in earlier Darwins (issues with lock implementations, how priority inversion was handled, the heuristics of when a task is so short it's cheaper to just complete it than to give up the CPU "for fairness" - but then everyone pays the switching cost). These are even more difficult to tease out (and certainly won't show up in single-threaded benchmarking) but are considered significant. There's also been a lot of thinking inside Apple about best practices for GCD (and the generic problem of "how to use multiple cores"), and this has likely been translated into new designs within at least some frameworks and Apple's tier-1 apps. You can see this discussed here: https://gist.github.com/tclementdev/6af616354912b0...
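For readers unfamiliar with utilisation-driven DVFS, here is a toy sketch of the general mechanism the article describes - emphatically not Apple's actual scheduler; the decay weight and frequency table are invented:

```c
/* Toy sketch: a scheduler tracks per-thread runtime, decays it into a
   utilisation estimate, and a DVFS governor picks the lowest frequency
   that still covers the demand with some headroom. */
#include <stdio.h>

double update_util(double util, double ran_us, double period_us) {
    const double decay = 0.8;                 /* assumed EWMA weight */
    return decay * util + (1.0 - decay) * (ran_us / period_us);
}

int pick_freq_mhz(double util) {
    const int steps[] = {600, 1200, 1800, 2500};  /* made-up OPP table */
    for (int i = 0; i < 4; i++)
        if (util * 2500.0 <= steps[i] * 0.8)      /* keep 20% headroom */
            return steps[i];
    return steps[3];
}

int main(void) {
    double util = 0.0;
    /* a thread runs 8ms of every 10ms window -> demand ramps up */
    for (int tick = 0; tick < 5; tick++) {
        util = update_util(util, 8000.0, 10000.0);
        printf("tick %d: util %.2f -> %d MHz\n", tick, util, pick_freq_mhz(util));
    }
    return 0;
}
```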
I just can't wait for Apple to FINALLY flesh out their in-house Mac chips. Not because I love Apple, but simply because I think the end result will be spectacular and an outright shock for Intel... and I do hate Intel. They are disgustingly overrated.
"I do hope Samsung and Apple alike would be able to focus more on optimising this, as like we’re about to see, it will have an impact on battery life."
Presumably Apple's longer term plan (next year?) is to move to the LTPO technology in the aWatch4 display?
This appears to be an Apple specific (ie non-Samsung) tech, though who knows exactly what the web of patents and manufacturing agreements ultimately specifies...
Similar to how mobile processors are surpassing some desktop ones in performance, mobile devices' image quality is also surpassing 5-7 year old DSLR tech. Your photo test suite would be great if it added the Still Life, Resolution and Portrait tests from pro photo testing sites like Imaging-resource dot com. I'd just send the phones to those folks and let them do their tests, so we would see the progress in comparison with all cameras of all eras and manufacturers. Right now AnandTech's field tests alone look poor and are not enough to tell what is good and what is missing in these cameras. Given the great progress, the time for such deep testing of mobile devices has come.
No way my XS Max is touching my 10-year-old D700 with a cheap $100 50mm f/1.8 lens at anything over base ISO. The iPhone starts applying some serious noise reduction at higher ISOs, making it watercolor-like. Even at base ISO, the D700 has much better fine detail.
Not to dispute your fundamental point (bigger lenses and bigger sensors can't really be replaced), but there are several low-light photo apps for the iPhone which support much longer exposure than the standard one and this can of course help at least with static images when you've got a fixed position for the iPhone.
I was surprised you said excellent viewing angles; I disagree! If you tilt the screen just a little up or down, left or right, the color changes a lot (it becomes more blue as you tilt more), and I couldn't get used to this, so I returned the phone. My iPhone 7 Plus had no such behavior.
I can't tell if this is a serious comment. Apple makes something like 85% of the profit in the mobile phone market. They have absolutely no reason to even consider putting out an Android phone.
Oh, that rubbish stat again. That is only in the narrowly defined mobile market. Apple makes its own OS, designs many of its own chips and controllers, sells its own insurance, runs its own app store and its own retail shops. It owns many more parts of the overall value chain. For an accurate comparison, we would need to include what Qualcomm and Samsung make from SoCs, what Samsung and LG make from screens, what Samsung makes from memory, etc.
I am a little confused here by your rather bold assertion of desktop-class performance. Correct me if I am wrong, but aren't the test results here for SPECint2006 inclusive of all cores, whereas the Intel processor's score you linked to is for a single thread? If so, then comparing six cores versus a single thread of a multithreaded desktop CPU is not very fair or valid. Again, I could be totally wrong, but I just wanted to point this out.
Thank you for making that clear. See my comment below with links; I put your data into a chart to compare. Is it fair to say the A12's integer IPC is up to 64 percent higher?
Awesome! Thanks for the clarification. For my own education, is SPECint2006 a single-threaded test by its very nature? Can it be configured for multithreading as well? If so, do you have multithreaded results forthcoming? I would love to see where Apple falls in that area as well.
SPECspeed is the score of running a single instance, SPECrate is running as many as there are cores on a system.
It's not possible to run SPECrate on iOS anyhow without some very major hacky modifications of the benchmark because of the way it's ported without running in its own process. For Android devices it's possible, but I never really saw much interest or value for it.
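To make the speed/rate distinction concrete, here is roughly how a score falls out of runtimes. The times are invented, and the real metrics involve further normalisation against a reference machine, so treat this as a sketch of the idea only:

```c
/* Each benchmark's ratio is roughly reference_time / measured_time;
   SPECspeed runs one copy, SPECrate simply runs N copies at once. */
#include <stdio.h>

int main(void) {
    const double ref_seconds = 8050.0;   /* hypothetical reference time   */
    const double measured    = 190.0;    /* hypothetical single-copy run  */
    printf("SPECspeed ratio: %.1f\n", ref_seconds / measured);

    /* SPECrate: 4 copies, each a bit slower from shared-resource contention */
    const double per_copy = 230.0;
    printf("SPECrate ratio:  %.1f\n", 4.0 * ref_seconds / per_copy);
    return 0;
}
```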
What about joules and watts? This is what is confusing me now. Something that seems a bit suspect to me is that the average power and total joules are not consistently lower together. That is, I'd expect a device that has lower average power usage over the duration of the test to draw fewer total joules of energy. Mathematically speaking, I would naturally expect this, especially in the case of higher benchmark scores, since the lower power would be drawn over a shorter space of time, meaning less total energy.
The amount of Joules drawn depends on the average power and performance (time) of the benchmark.
If a CPU uses 4 watts and gives 20 performance (say 5 minute to complete), it will use less energy than a CPU using 2 watts and giving 8 performance (12.5 minutes to complete), for example.
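Putting those exact numbers through energy = power × time:

```c
/* The point above in numbers: energy (J) = power (W) x time (s), so a
   higher-power but much faster chip can finish with less total energy. */
#include <stdio.h>

int main(void) {
    double fast_watts = 4.0, fast_secs = 5.0 * 60.0;    /* 5 minutes    */
    double slow_watts = 2.0, slow_secs = 12.5 * 60.0;   /* 12.5 minutes */
    printf("fast chip: %.0f J\n", fast_watts * fast_secs);  /* 1200 J */
    printf("slow chip: %.0f J\n", slow_watts * slow_secs);  /* 1500 J */
    return 0;
}
```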
I live in France and have had to deal with Intel radios for a while. I'm totally baffled to read all the comments about poor reception, dropped calls and WiFi going crazy as if this were news.
This problem has been there since Apple began using Intel chips! It's just that now that Americans have to use them too, it might finally be addressed (please keep complaining, raise awareness, file a class action, whatever it takes), but you don't realize the pain these past years have been using an iPhone elsewhere in the world...
For the record, I just gave my 7 (Intel chip) to my daughter, who broke her phone, and reverted back to an old 6 Plus I kept for such cases (it uses Qualcomm). In the family I'm now the only one getting rock solid WiFi, no dropped calls and consistent signal strength, on a 4+ year old phone. And all this time I thought my carrier was messing around, my Orbis were crap, or I had just fumbled with my router settings.
Shame on you Apple, shame on you.
Oh, and don't get me started on the camera: is this really what people want? I think it's getting worse and worse, not better. Completely artificial images, looking more like cartoons than photographs. Same for Samsung's and Google's phones. Have a look at the P20 Pro for a reality check. This is what photos should look like (contrast, colors - not really dependent on the sensor size).
The Halide app devs showed off an interesting technique they're using (or working on) for the XS, which they call Smart RAW, that may offer more control over the oversaturation / lower contrast issues, etc.
"Hav a look at the P20 pro for a reality check."? Your reality is the smeared textureless crap with oversharpening contours along object edges? Unbelievable.
Could these amazing CPU gains be translated to a midrange chip? Technically Android could run on Apple SoCs, because they're all ARM-licensed cores, but porcines would gain flying abilities before that ever happened.
It's too bad Android users are stuck with the best that ARM and Qualcomm can do, which isn't much compared to Apple's semi design team.
Apple's strength (supremacy) in the performance of their SoCs really lies in the fine-tuned match of apps, and especially low-level software, to excellent hardware. What happens when that doesn't happen was outlined in detail by Andrei in his reviews of Samsung's Mongoose M3 - to use a famous line from a movie, it "could've been a contender", but really isn't. Apple's tight integration is the key factor that a more open ecosystem (Android) has a hard time matching; moreover, Google and (especially) Qualcomm leave a lot of possible performance improvements on the table through really poor collaboration. For example, GPU-assisted computing is AWOL on Android - not a smart move when you're trying to compete against Apple.
The Xeon Platinum 8176 is a 28 core, $9000 Intel server CPU, based on Skylake. In single threaded performance, the iPhone XS outperforms it by 12 percent for integers, despite its lower clock speed. If the iPhone were to run at 3.8ghz, the Apple A12 would outperform Intel's CPU by 64 percent on average for integer tests.
Think about that: the iPhone's CPU IPC (performance per clock) is already higher in integer performance now. Those tests include: spam filter, compression, compiling, vehicle scheduling, game AI, protein sequence analysis, chess, quantum simulation, video encoding, network simulation, pathfinding, and XML processing. The test takes hours to run.
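The clock-normalisation arithmetic behind such estimates looks like the sketch below. Linear scaling with clock is an optimistic assumption, and the exact percentage moves with whichever clocks you plug in (which is presumably why figures like 64% and 70% both float around):

```c
/* Clock-normalising the comparison above. Both clocks are assumptions:
   the A12's peak per the review, and one plausible Skylake-SP turbo. */
#include <stdio.h>

int main(void) {
    double a12_clock  = 2.5;   /* GHz, A12 big-core peak                 */
    double xeon_clock = 3.8;   /* GHz, assumed single-core turbo         */
    double a12_lead   = 1.12;  /* A12 12% faster in absolute ST integer  */

    /* If the A12 ran at the Xeon's clock and scaled linearly: */
    double scaled = a12_lead * (xeon_clock / a12_clock);
    printf("same-clock integer lead: ~%.0f%%\n", (scaled - 1.0) * 100.0);
    return 0;
}
```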
It might be faster in single thread, but in MT it gets toasted by the Xeon. The Xeon is $9000 for a few reasons:
- it is an enterprise chip;
- it supports ECC;
- it supports up to 8 CPUs on a board;
- it supports tons of RAM and a LOT of memory channels;
- it has almost 40MB of L3 cache, compared to 8MB in the A12;
- it has a ring bus architecture, meaning all those cores have very low latency between them and to memory;
- it has CISC instructions, meaning that when you get out of basic phone apps and you start doing scientific/database/HPC stuff, you will see a lot of benefits and performance improvements from executing a single instruction for a specific operation, compared to the RISC nature of the A12;
- it supports AVX512, needed for high performance computing. In this, the A12 would get smashed;
- and many more.
So the Xeon 8180 is still a mighty impressive chip, and Intel has invested some real thought and experience into making it. Things that Apple doesn't have. I get it, it is nice to see Apple having a chip with this much compute power at such a low TDP, and it is due to the fact that x86 chips have a lot of extra stuff added in for legacy. But don't get carried away with this; what Apple is doing now from a uArch point of view is not new. Desktop chips had this stuff 15 years ago. The difference is that Apple works on the latest fabrication process and doesn't care about x86 legacy.
"It might be faster in single thread, but in MT it gets toasted by the Xeon"
That is totally irrelevant. Obviously Apple could easily make a chip with more cores, just like Cavium's ThunderX. 8 A12 Vortex cores would beat an 8-core Xeon in integer calculations easily enough.
Agree on your points re. the Xeon. However, I'd still like to see Apple launch CPUs/iGPUs based on their designs, especially in the laptop space, where Intel still rules and charges premium prices. If nothing else, Apple getting into that game would fan the flames under Intel's chair that AMD is trying to kindle (it has started to work for desktop CPUs). In the end, we all benefit if Chipzilla either gets off its enormous bottom(line) and innovates more, or gets pushed to the side by superior tech. So, even as a non-Apple user: go Apple, go!
- it has CISC instructions, meaning that when you get out of basic phone apps and you start doing scientific/database/HPC stuff, you will see a lot of benefits and performance improvements from executing a single instruction for a specific operation, compared to the RISC nature of A12;
CISC instructions generally don't do much more than RISC ones - they just have more addressing modes, while RISC is almost always register-to-register with separate load and store.
That just doesn't make any difference anymore, because the bottleneck is not instruction fetching (as it was in the old days) but execution unit pipeline congestion, including of the load and store units.
- it supports AVX512, needed for high performance computing. In this, the A12 would get smashed;
There's already a scalable vector extension for ARM which Apple could adopt if that were actually a bottleneck. And even the existing vector units aren't anything to scoff at - the issue is more that Intel CPUs are forced to drop down to half their nominal clock once you actually use AVX512; it could actually be more efficient to optimize the regular vector units for full-speed operation to make up for it.
So the Xeon 8180 is still a mighty impressive chip and Intel has invested some real thought and experience into making it. Things that Apple doesn't have.
We actually have no clue what Apple is investing in behind closed doors until they slam it on the table as a finished product ready for sale!
Just to add: ARM RISC instructions actually do more than Intel CISC instructions in one respect. ARM is a 3-address machine (dst = src1 + src2) while Intel is a 2-address machine (dst += src). And, of course, due to the much larger logical register file, there is much less fiddling with the stack for local variable storage.
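A concrete illustration of the 2- vs 3-address point, with typical codegen shown as comments (note that x86 compilers can sometimes dodge the extra mov with lea, so treat this as the general case rather than a universal rule):

```c
/* The same source line compiles to one 3-address ARM64 instruction, but
   in the general case two x86-64 instructions, since x86 overwrites one
   of its sources. Actual output varies by compiler and flags. */
int add(int a, int b) {
    return a + b;
    /* ARM64:   add  w0, w0, w1      ; dst, src1, src2 in one op      */
    /* x86-64:  mov  eax, edi        ; copy src1 so it isn't clobbered */
    /*          add  eax, esi        ; then dst += src2               */
}
```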
Thanks. This kind of info really should have been in the article, rather than just the unsubstantiated claim, "we’re just margins off the best desktop CPUs".
Those AnandTech results for the Xeon are well below the results on the SPEC2006 website for a number of tests. The results for the 462 test on the website seem strange for all the Xeons as a multiple of the A12, but even excluding this, the Xeon 8176 has an 18% performance advantage, rather than the other way round.
What is the state of Vulkan support on various Android handsets? I'm asking because GFXBench makes use of a Vulkan/Metal workload. Apple is all-in on Metal, but last time I read, Vulkan was not fully supported on all Android handsets. In this respect, isn't the comparison of GPU numbers lopsided, as the API is not equal? Don't some of the smartphones revert to OpenGL?
Thanks Andrei. Wonderful review and great to see in depth phone reviews again. Also you must have worked extremely hard to get such a comprehensive and in-depth review out so quickly. Thanks again.
Am I correct in understanding these figures as base "system" power, not including the display? There could be a case for more memory power versus the iPhone 8, since the X had 3GB and the XS has 4GB.
It's the power of the whole phone, meaning idle SoC, power delivery, and the screen and its controllers at an equivalent 0 nits brightness. Out of that figure some part will be the display, but we can't really separate it without tearing down the phone.
The base power for the X and XS is near identical - the big difference is to the LCD iPhones.
Ah, that makes things clear. But why would OLED consume more power than LCD when it is completely off? This goes against the common wisdom that OLEDs are supposed to be extremely efficient on an all-black display.
"But why would OLED consume more power compared to LCD when it is completely off."
This is addressed obliquely in the article - because the OLED display has high colour depth and extremely high DPI, it requires extra power to control all the levels of brightness for each and every one of those pixels.
A few years ago, I loaded a "black background" app on my trusty old Samsung S3; a black background always made a lot of sense to me for OLED-type screens. I might have missed it, but does Apple provide a black background setting for the XS and XS Max? Also, I haven't seen much discussion about possible burn-in for OLED screens here or in other reviews of phones with OLED screens. Is this now a non-issue, maybe as many replace their phones before burn-in shows?
No dark mode yet except for an incomplete one in accessibility settings. You end up with some graphics turned negative image. Hopefully, we’ll get a proper dark mode in iOS 13. The Mac finally got one.
If you activate VoiceOver in the Accessibility settings, you can triple-tap the display with three fingers to activate the "Screen Curtain", which makes the whole screen dark. I believe this turned off the backlight on LCD iPhones. Maybe it deactivates more than just displaying a black screen and can help lower base power usage. Of course, VoiceOver being on may affect other behavior/benchmarks.
Thanks for the information (I don't own an iPhone). The dark/black background I mean (and used to use on my S3 and later on a BlackBerry Priv) didn't turn off the entire screen, so the icons were still bright, and appeared even brighter against the black background. I really wonder how much power one can save on phones like the iPhone X, XS and XS Max by using an unlit background. In theory, every lit pixel adds a little power drain, and every LED not lit should save that.
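As a toy model of why an unlit background helps an emissive panel - every coefficient here is invented, and real panels differ per colour (blue subpixels cost the most):

```c
/* Toy emissive-display power model: a fixed base (controller, scan logic)
   plus a per-pixel term that scales with average brightness. */
#include <stdio.h>

int main(void) {
    const double base_mw     = 180.0;  /* assumed always-on overhead          */
    const double mw_per_unit = 300.0;  /* assumed full-screen full-white cost */

    double avg_luminance_white = 1.0;  /* all pixels fully lit                */
    double avg_luminance_icons = 0.08; /* black wallpaper, bright icons only  */

    printf("white UI: ~%.0f mW\n", base_mw + mw_per_unit * avg_luminance_white);
    printf("dark UI:  ~%.0f mW\n", base_mw + mw_per_unit * avg_luminance_icons);
    return 0;
}
```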
For me the most interesting part of the review is the SPECint results. Looking at the official SPEC report of the system with the highest base result (Cisco UCS B200 M5 (Intel Xeon Gold 6146, 3.20 GHz), 4.2GHz Turbo), the A12 is really close; e.g., 44.56 (A12) vs. 48.3 (Xeon) for 403.gcc. An exception is 456.hmmer, but there Cisco (and many others) are using some compiler trick that you apparently are not using.
And the A12 is using quite a bit less power for these results than the Xeon. Very impressive of Apple. Maybe they should go into the server CPU business and produce the ARM-based server CPU that others have aimed for these last years.
Well, it would largely reuse the R&D they already do for iOS chips; making the high-performance cores is the hardest part, and scaling them up to more cores would be a fraction of the work.
What is the rest of the die area used for? The labels only cover about half of the die. I could add image signal processing and video encode/decode, if that is not included in the GPU, plus some FPGA we know Apple has included in their SoCs. But all of that accounted for is likely still less than 25% of the die space. What about the rest?
Hardware accelerators for anything and everything that can be hardware accelerated.
Plus the "secure enclave" is also on there somewhere - a fenced off, cut down SOC within the SOC for handling logins/unlocking and other security stuff.
Andrei - This is an awesome review. Do you think Apple could roll out a low-end laptop with 6 Vortex cores, or are there SoC design areas that Apple still needs to address?
I'm not Andrei, but my speculation on this would be:
• It would make no sense to start with the weakest Macs because that would put the transition to Apple's own CPUs in a bad light from the start. As in the Intel transition 12 years ago they would need to start with the middle of their lineup (with iMacs and MacBook Pros) in order to demonstrate the strength of the new CPU platform and to motivate software developers to jump on board, including actually working on the new machines full time if possible.
• They would need to have an emulation infrastructure for Intel legacy code in place like they did with Rosetta back then (also for Windows/Linux VMs!). And even in emulation that legacy code cannot be much slower than natively on then-current Intel machines, so their own CPUs already need to be a good bit faster than the corresponding Intel ones at the time in order to compensate for most of the emulation cost.
• As in 2006, this would have a significant impact on macOS, so at announcement they would need to push at least developer versions of the new macOS to developers. Back in 2006 they had Intel-based developer systems ready before the actual Intel Macs came out – this time they could actually provide a macOS developer version for the then top-of-the-line iPads until the first ARM-based Macs were available (which already support Bluetooth keyboards now and could then just support Bluetooth mice and trackpads as well). But this also means that, as back then, they would need to announce the transition at WWDC to explain it all and to get the developers on board.
• Of course Apple would need to build desktop/notebook capable versions of their CPUs with all the necessary infrastructure (PCIe, multiple USB, Thunderbolt) but on the other hand they'd have more power and active cooling to work with, so they could go to more big cores and to higher clock speeds.
Again: This is sheer speculation, but the signs are accumulating that something like this may indeed be in the cards, with Intel stagnating and Apple still plowing ahead.
I just don't think that it would be practical to put the current level of Apple CPUs into a Mac just like that even though from sheer CPU performance it looks feasible. These transitions have always been a massive undertaking and can't just be shot from the hip, even though the nominal performance seems almost there right now.
Not to mention they’d have to maintain two processor architectures for an extended period. By that, I mean, I doubt they’d transition high-end Macs for a long, long time to avoid angering pros... again.
A real left field move would be for Apple to release a MacOS tablet running ARM, like a Qualcomm Windows tablet. I wouldn't rule it out considering how Apple went from a single product for the iPhone and iPad to making multiple sizes.
I don't see that happening at all because Apple has explicitly maintained a clear distinction between Macs and iOS exactly along the lines of different interaction paradigms (point-based vs. touch).
Windows with touch continues to be a mess and I don't see Apple following Microsoft into that dead end.
If they'd have a replacement offering noticeably higher performance than any Intel Mac Pro and if legacy software at least ran decently until new ARM recompiles were available, I don't think most users would mind all that much.
Going from PowerMacs to Mac Pros was also not entirely painless, but most users still thought it was worth it overall.
Andrei, I remember you mentioning in the comment section of an article - maybe the S9 review - that the A11 cannot maintain its frequency and drops by as much as 40%, while the Snapdragon drops only 10% on sustained workloads.
You did your testing with a bench fan. You tested the potential of the A12, and it is incredible, but not the real-life performance of the iPhone. When used for prolonged sessions the A12 might reach its thermal threshold faster than the Snapdragon and drop performance. What are your musings on this? Throttling matters and identifying it is very important, especially considering Apple's recent history. The CPU is great, but is that top performance sustainable, and for how long?
What you mention is in regards to the GPU performance, it's addressed in that section in this piece.
And of course it's the real performance of a phone. The duration of a benchmark doesn't change the fact that the CPU is capable of that throughput. Real world workloads are transactional and are a few seconds at best in the majority of use-cases. In such scenarios, the performance is well exhibited.
"folding your smartphone is really hard, and doesn't end well for the phone"
I don't recall (too lazy to confirm :) ) which company, but a patent was awarded a couple or so years ago for a flexible display, such that it would (according to the drawing I saw) make a cylindrical bend at the hinge when closed. It still hasn't appeared, so far as I know. Let's see... looks like Samsung and LG have some, and more recently than when I first saw.
Yes, that comment was in jest. I believe Samsung, LG, and Huawei all have folding smartphones with folding screens under development. If those work out and aren't too pricey, I'd be interested. It would be nice to be able to fold a phone with a 7-inch display to a little more than half its full size.
Hi, do you think the current power draw of the CPU (in watts) is sustainable for the battery, especially in the long term? In this review you cite a case where a GPU benchmark crashed the phone because the power required was too high. Any chance of seeing this behavior in real-life scenarios? Moreover, do you think that power draw (in watts) is sustainable in a smartphone envelope? Or do other aspects like overall power consumption or leakage count?
Smartphones can sustain 3-3.5W. There's a window of several minutes where you can maintain peak. However because CPU workloads are bursty, it's rare that you would hit thermal issues in usual scenarios. Sustained scenarios are games, but there the GPU is the bottleneck in terms of power.
Batteries easily sustain a 1C discharge rate; for the XS and XS Max that's 2.6 and 3.1 A, or roughly 10 to 12 W. Temperature becomes a natural limitation before the battery discharge rate becomes any concern.
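To spell out that arithmetic (a minimal sketch; the ~3.8 V nominal Li-ion cell voltage is my assumption, not an Apple figure):

# Rough battery power at a 1C discharge rate.
# The 3.8 V nominal cell voltage is an assumed figure, not from Apple.
def discharge_power_watts(capacity_mah, nominal_v=3.8):
    amps = capacity_mah / 1000.0  # at 1C, current in amps equals capacity in Ah
    return amps * nominal_v

print(discharge_power_watts(2658))  # XS: ~2.7 A -> ~10 W
print(discharge_power_watts(3174))  # XS Max: ~3.2 A -> ~12 W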
Thanks for the reply. My concern about the battery was long-term; in the past there was the 'throttling fiasco', in which the performance of the device was cut down because batteries seemingly could not deliver the power requested. If the power draw has increased on the A12, I wonder whether the problem will get worse.
The whole panic about the iPhone 6 on old, softening batteries was basically just this, but after one relatively crude attempt they got the power management optimized to the point that I had very little if any slowdown even on my 4-year-old battery.
The crash during the extreme load test on the XS will of course need to get fixed with another power management update, just that it needs to apply even to factory-fresh batteries already.
But when the battery ages and its current delivery capability erodes over time it will only become more and more necessary for power management to deal with such spikes, possibly by ramping up the clock less aggressively.
I am interested to know more about the recent issues with poor WiFi and LTE speeds and signal strength. A smartphone having a great CPU/GPU is good and all, but basics such as WiFi and LTE should also be up to snuff if Apple wants the phone to be a total package. I feel that the WiFi and LTE issues were overlooked when they designed the phone.
I was sure that the GPUs used in the A11 and A12 are basically Imagination Tech solutions and that Apple can't design a completely new GPU solution from scratch. I am wondering if Apple will use the future Furian solution in upcoming chips; if they won't, as they no longer partner, they will regret it. The question is what the current agreement with Imagination is, as they are still using PowerVR solutions.
Even with the A10, their GPU was 2/3rds custom, their arrangement with Imagination seems similar to their ARM licence. Calling it custom as of the A11 even though not that much seemed to change (but for grouping more ALUs into a core) seems like a formality.
You could do some tests in "Airplane" mode to eliminate the WiFi and telephony stack power use.
I'm not sure there's a way to eliminate the telephony stack but still keep the WiFi active.
In the third and fifth low-light samples I think it's pretty clear that the XS beat the P20P; again, the P20P's retention of texture is horrible, and only in extreme low light (second and last samples) and scenes with many man-made objects which lack obvious texture (third sample) does its excessive sharpening and NR have an advantage. The first sample is a surprise; there either the scene is actually darker than I'm led to believe or there's something else I've not taken into account.
Isn't it too bold an affirmation that the A12 is on par with current desktop CPUs? If so, why don't we stack them in order to have 5x the processing power of a common laptop/workstation for half the power? What am I missing here? Because this doesn't make much sense to me.
If I have 2 or 3 of these SoC chips with Photoshop running on them, will I have my filters running faster? Chrome loading webpages faster? Creating zip archives or converting FLAC files 3x faster than an Intel i7?
There is widespread speculation (and at this point, it is considered more likely than not) that Apple will transition to their own chips exclusively for their Mac products in the near future (within 2 years).
What exactly doesn’t make sense to you?
PS: it will be very interesting to see results for the new iPad Pro this year too.
What is strange to me is why we aren't already stacking these SoCs. I suppose you could fit 6 or 8 of these processors in the same die size. This means I would have a much faster PC for less power? If they are that great, why aren't they on desktops already?
Does this mean that if I manage to install the Windows 10 ARM edition on my PC, Photoshop will run faster and Chrome will load pages two or three times faster, since I have 8 of these SoCs in my desktop PC?
Because just increasing the core count is not quite enough.
As I have explained above, a platform transition is a massive undertaking which among many other things needs actually superior performance to afford the initially still necessary legacy code emulation.
This will almost certainly start at a WWDC (next year or the year after that, most likely) with an announcement about not just coming hardware but also about those new mechanisms in macOS itself, and actual Axx-powered Macs may take months after that to emerge while developers start porting their code to the new platform.
It's not as if this was the first time for Apple – they've done it twice already!
I just pulled up an old test of an i3-6320, a 3-year-old dual-core, and the A12 does well on some of the tests, but in many tests it's quite a bit behind, so it's not ready for server duty yet.
I know that the i3 has higher frequencies but it's also only two cores, compared to 2+4. There is no question that the big cores are very efficient, but they are optimized for their current job.
If Apple were to build a 6- or 8-core processor based on their large cores, throw away most of the rest of the SoC, and add other parts (stronger SIMD, etc.), then yes, we might have a good desktop/server chip, but the A12 is not that chip.
Also remember that we are comparing 14nm Intel chips with the newest "7nm" process, if Intel ever gets their 10nm process up and running for real, then we could do a better comparison.
Multicore A12 performance seems mostly limited by the passive cooling in a handheld device. That is where the much higher power availability and active cooling in a notebook or desktop makes the biggest difference.
It's single-core performance where you see the most of the actual core performance. By allowing for higher power consumption and using active cooling Apple should be able to scale up multicore performance relatively easily (and some of the iPads with additional CPU cores, notebook-sized batteries and at least improved passive cooling have already demonstrated that).
-"Apple’s CPU have gotten so performant now, that we’re just margins off the best desktop CPUs" That sentence alone discredits your whole article, this has to be one of the most stupid things I've ever read in a review the past years. A mobile ARM CPU isn't even faster than a Pentium 4 in pure IPC, and they perform in completely different instruction sets...
That statement was so moronic that it forced me to create an account just to call you out on this.
People who deny that ARM designs especially from Apple have closed in on x86 performance, and in Apples case often beaten it, are starting to remind me of flat earthers.
Regarding processor power: apart from gamers, is the increase in processing power perceptible to the user, for which applications, and is it noticeable? I have a 2.5-year-old phone with a Snapdragon 810 and its performance still suits me just fine. In a future purchase, I would mostly look for improvements in battery autonomy.
I thought a Core 2 Duo felt fine until I got a Haswell system, I suspect it would be similar for you going to this. The improvement just in web page loading speed alone would be significant.
"we see four new smaller efficiency cores named “Mistral”. The new small cores bring some performance improvements, but it’s mostly in terms on power and power efficiency where we see Tempest make some bigger leaps"
"upgrade in sensor size from an area of 32.8mm² to 40.6mm²"
These are not sensor sizes, these are total image chip sizes. Sensor (as in "sensor", the part which actually "senses" light) sizes are not hard to calculate, and are MUCH smaller.
12 MP is approx 4000x3000 pixels. The old sensor had a 1.22 µm pixel pitch: 1.22 µm × 4000 = 4.88 mm and 1.22 µm × 3000 = 3.66 mm, so the old sensor was 4.88 mm × 3.66 mm = 17.9 mm².
The new sensor is 5.6 mm × 4.2 mm = 23.5 mm².
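The same arithmetic as a small sketch (the 1.4 µm pitch for the new sensor is implied by the dimensions above; treat it as an assumption):

# Sensor active area from pixel pitch and resolution.
def sensor_dims_mm(pitch_um, h_px, v_px):
    w = pitch_um * h_px / 1000.0  # µm times pixel count, converted to mm
    h = pitch_um * v_px / 1000.0
    return w, h, w * h

print(sensor_dims_mm(1.22, 4000, 3000))  # old: ~4.88 x 3.66 mm, ~17.9 mm^2
print(sensor_dims_mm(1.40, 4000, 3000))  # new: ~5.6 x 4.2 mm, ~23.5 mm^2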
This is in comparison to:
- typical cheap P&S camera sensor (so-called 1/2.3" type): 6 mm × 4.5 mm = 27 mm²
- high-end P&S camera sensor (1" type): 13.2 mm × 8.8 mm = 116 mm²
- Four Thirds camera sensor: 17.2 mm × 13 mm = 225 mm²
- modern pro camera sensor: about 36 mm × 24 mm = 864 mm²
Please do not confuse your readers by calling total image chip sizes "sensor sizes".
"The performance measurement was run in a synthetic environment (read: bench fan cooling the phones) where we assured thermals wouldn’t be an issue for the 1-2 hours it takes to complete a full suite run."
Which makes the whole thing useless. Of course a wider core (read: hotter and less efficient due to the higher overhead of often-useless blocks) will run faster in this environment, unlike in users' hands (literally, ~36°C/97°F plus a blanketing effect).
It changes absolutely nothing. It will still reach that performance even in your hands. The duration of a workload is not orthogonal to its complexity.
Fantastic and in-depth work! Thanks for the data and analysis. I would like to know a little more about your method for energy and power measurement. Thanks!
L2 cache latency is 8.8 ns. The core clock speed is 2.5 GHz, so each cycle is around 0.4 ns, and the L2 cache latency is therefore 8.8 ns / 0.4 ns = 22 cycles. This is much longer than Skylake, which is around 12 cycles (taking the i7-6700 Skylake at 4.0 GHz at https://www.7-cpu.com/cpu/Skylake.html as an example, it equals 3 ns of L2 cache latency).
So L2 latency is 8.8 ns versus 3 ns in Skylake. Is this comparison correct?
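The ns-to-cycles conversion, spelled out (assuming the 2.5 GHz clock holds during the latency measurement):

# Convert a memory latency in nanoseconds into core clock cycles.
def latency_cycles(latency_ns, clock_ghz):
    return latency_ns * clock_ghz  # 1 ns at 1 GHz equals 1 cycle

print(latency_cycles(8.8, 2.5))  # A12 L2: ~22 cycles
print(latency_cycles(3.0, 4.0))  # Skylake (i7-6700 at 4.0 GHz): ~12 cycles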
I cannot tell the precise L1 latency from the graph "Much improved memory latency". Can you give the number? According to Figure 3 in https://www.spec.org/cpu2006/publications/SIGARCH-... the working set size of 80% of SPEC CPU2006 workloads is larger than 8 MB, so the A12's L2 cache (8 MB) won't hold the working set, compared with a Skylake configuration with a 32 MB L3 cache.
So overall the memory hierarchy of the A12 seems not comparable to Skylake's. What else helps it deliver comparable SPEC CPU2006 performance?
Hello. I just wanted to criticize the way this site works. It’s hard to read while listening to music when your intrusive ads follow my screen and interrupt my audio consistently. Please fix this as this has been really annoying. Thanks.
As a tech geek, I agree with your analysis. I have an X because the XR looks like a cheap design and I appreciate AMOLED. I think this is a minority sentiment, though. The XR is a stellar handset, perhaps the best available for most people. Between the battery life, SoC, and Apple's generous iOS update policy, the longevity will be extraordinary. I expect to see "still happily rolling along" comments even into 2025.
OMGitsShan - Friday, October 5, 2018 - link
As always, this is the iPhone review to wait for! Thank you Andrei
versesuvius - Friday, October 5, 2018 - link
Potato.
Jetcat3 - Friday, October 5, 2018 - link
Wonderful review Andrei! Just a few minor quibbles for the capacities of each li-ion battery.
Xs is 2658 mAh
Xs Max is 3174 mAh
Keep up the good work!
Andrei Frumusanu - Saturday, October 6, 2018 - link
No, that's accurate. It's part of why the new phones regress in battery over the LCD iPhones.
Andrei Frumusanu - Sunday, October 7, 2018 - link
Actually the web test is varied, I have a few lower APL sites and even some black ones.
Henk Poley - Saturday, October 6, 2018 - link
The OLEDs have a +41% higher pixel density than the LCD iPhones. So that's one reason why they could use more power.
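For reference, that +41% matches the linear pixel-density ratio of the two panels (458 ppi for the XS OLED vs 326 ppi for the LCD iPhones, per Apple's published specs):

# Linear pixel-density ratio between the XS OLED and the LCD iPhones.
print(458 / 326)  # ~1.405, i.e. about 41% more pixels per inch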
Mic_whos_right - Tuesday, October 9, 2018 - link
That makes sense. still. isn't the OLED designs suppose to use zero batt at times during black? Maybe for movie borders?
Constructor - Tuesday, October 9, 2018 - link
It could also be that the controller needs some startup time so it might not be able to shut down unless it can really know for sure there won't suddenly be some bright pixels again.
alysdexia - Friday, May 10, 2019 - link
aren't, supposed
wrkingclass_hero - Friday, October 5, 2018 - link
I don't mind the minor typos, but I would have liked to have seen rec.2020 color gamut testing and sustained gameplay battery life.melgross - Saturday, October 6, 2018 - link
Nobody has a rec2020 monitor, so there’s no point in testing for it.jameskatt - Saturday, October 6, 2018 - link
Fantastic Review! But you missed the biggest item: the 8-core Neural Processing Cores. These are used by Apple for "magic" and huge acceleration of several tasks including realtime photo processing, Face ID, etc. These can be used in apps. The A12 has 18 cores - 2 Large CPU, 4 Small CPU, 4 GPU, and 8 NPU Cores.
name99 - Saturday, October 6, 2018 - link
If you're going to count like that, you need to throw in at least 7 Chinook cores (small 64-bit Apple-designed cores that act as controllers for various large blocks like the GPU or NPU).
[A Chinook is a type of non-Vortex wind, just like a Zephyr, Tempest, or Mistral...]
There's nothing that screams their existence on the die shots, but what little we know about them has been established by looking at the OS binaries for the new iPhones. Presumably if they really are minimal and require little to no L2 and smaller L1s (ie regular memory structures that are more easily visible), they could look like pretty much any of that vast sea of unexplained territory on the die.
It's unclear what these do today apart from the obvious tasks like initialization and power tracking. (On the GPU it handles thread scheduling.)
Even more interesting is what Apple's long-term game here is. To move as much of the OS as possible off the main cores onto these auxiliary cores (and so the wheel of reincarnation spins back to System/360 Channel Controllers)? For example (in principle...) you could run the file system stack on the flash controller, or the network stack on a controller living near the radio basebands, and have the OS communicate with them at a much more abstract level.
Does this make sense for power, performance, or security reasons? Not a clue! But in a world where transistors are cheap, I'm glad to see Apple slowly rethinking OS and system design decisions that were basically made in the early 1970s, and that we've stuck with ever since then regardless of tech changes.
ex2bot - Sunday, October 7, 2018 - link
Much appreciate the review!
s.yu - Monday, October 8, 2018 - link
Great job as always Andrei!
I would only have hoped for a more thorough exploration of the limits of the portrait mode, to see if Apple really makes proper use of the depth map, taking a photo in portrait mode in a tunnel to see if the amount of blur is applied according to distance for example.
Mic_whos_right - Tuesday, October 9, 2018 - link
Thanks for this comment - now I know why there was nothing last year. Great AnandTech standard of a review! Always above my intellect of understanding, with info overload that teaches me a lot about the product.
Moh Qadee - Thursday, November 1, 2018 - link
Thank you for this great detailed review. I have been coming back to this review before making a purchase. Please make a comparison article of iPhone XS gaming vs other smartphones on the market. How much difference do thermals make over longer periods before it starts to throttle or heat up? Would you be able to give an approximate time before you noticed heat while gaming on the XS? I don't mind investing in an expensive phone as long as thermals don't limit the performance. There are phones like the Razer Phone 2 or ROG Phone out. People compare them with an iPhone as it doesn't require much cooling. I wonder if gaming for 20+ minutes makes the iPhone heat up enough that you should be worried?
Ahadjisavvas - Monday, November 19, 2018 - link
And the exynos m3 has 12 execution ports right? Can you elaborate on the main differences between the design of the exynos m3 and the vortex core? That'll be really helpful and informative as well. Also, are you planning on writing a piece about the A12X SoC? It'll be really interesting to hear how far Apple has come with the SoC in the 2018 iPad Pro.
alysdexia - Monday, May 13, 2019 - link
"Now what is very interesting here is that this essentially looks identical to Apple’s Swift microarchitecture from Apple's A6 SoC."This comparison doesn't make sense and it seems like you took the same execution ports to determine whether the chips are identical, when the ports could be arbitrary for each release. Rather I took the specifications (feature size, revision) and dates of each from these pages: https://en.wikipedia.org/wiki/Comparison_of_ARMv7-... and https://en.wikipedia.org/wiki/Comparison_of_ARMv8-... to come up with these matches: Cortex-A15-A9 A6, Cortex-A15-A9 A6X, Cortex-A57-A53 A7, Cortex-A57-A53 A8, Cortex-A57-A53 A8X, Cortex-A57-A53 A9, Cortex-A57-A53 A9X, Cortex-A73 A10, Cortex-A73 A10X, Cortex-A75 A11, Cortex-A76 A12, Cortex-A76 A12X. For exemplum 5 execution ports could be gotten (I'm no computer engineer so this is a SWAG.) from the 3 in Cortex-A9 subtracted from the 8 in Cortex-A15 but the later big.LITTLEs with 9 and 5 ports could be split from 7 or 8 as (7+2)+(7−2) or (8+1)+(8−3). You need to correct the Anandtech and Wikipedia pages.
faster -> swifter, swiftlier
ISO -> iso -> idem
's !-> they; 1 != 2
great:small::big:lite::mickel:littel
RSAUser - Friday, October 5, 2018 - link
I still don't like iOS's tendency towards warmer photos than reality.
DERSS - Saturday, October 6, 2018 - link
It is weird because it is warmer in bright light and bleaker in dim light.
Why can't they just even it out, and make the photos less yellowish in bright light and less bleak in dim light?
willis936 - Friday, October 5, 2018 - link
Great review. I loved the SoC analysis. There's definitely something spooky going on in an SoC with three caches that are scattered throughout the die. You do mention that there are two more fixed-point ALUs, but when analyzing a SPEC test result that relies on execution units you said that the A12 didn't have any execution improvements. Aren't the extra ALUs more execution resources?
It's clearly a nice device and there are areas that saw massive improvements and other areas that are more of the same. I really appreciate that your conclusion isn't "it's a great device so buy it" but "it's a great device but really expensive".
Andrei Frumusanu - Friday, October 5, 2018 - link
The A11 had two more ALUs over the A10, the A12 doesn't improve in this regard.
3DoubleD - Friday, October 5, 2018 - link
More than half the die shot was unlabeled. I found it strange that over 50% of the die wasn't worth discussing... what does it do? Are these fixed-function units, modems, ISPs, etc.?
It's really amazing how the CPU and GPU are taking less and less space on an SoC.
shabby - Friday, October 5, 2018 - link
It's not like Apple gives out these die shots with everything labeled, we're basically guessing what everything is.
melgross - Saturday, October 6, 2018 - link
Nobody knows what the entire chip does. Since Apple doesn’t sell their chips they’re not obligated to tell us all of the secret sauce that’s in there.
Ironchef3500 - Friday, October 5, 2018 - link
Thanks for the review!
bull2760 - Friday, October 5, 2018 - link
I returned my MAX because of antenna signal issues. I upgraded from the 8 Plus, and while it was super fast it definitely has issues. I drive the same route to work every day, and in the few days I had the phone I had 4 dropped calls in the middle of conversations; when I looked at the screen it said call failed. On one call to my wife I had 2 call failures within 5 minutes. From my research the dropped calls are related to the new antenna system that Apple is using. Unless you are in a strong signal area you will get a lot of dropped calls. From what I'm reading this has nothing to do with switching to Intel from 3Com; it is directly related to the choice of antennas. Had 3Com had a chip ready to go with the same specs, it too would have similar signal issues because of the antennas. The other issue I was having was network connectivity. I would be connected to my wireless at home or at work and often get "page cannot be displayed" errors telling me to check my network, even though I was clearly connected to my wireless network. I would turn the wireless off and on and it would start working. Speeds were crazy too: one minute you'd get really fast throughput and the next it would be crazy slow. I'd hold off on purchasing the new phones until Apple sorts out the bugs.
FunBunny2 - Friday, October 5, 2018 - link
ehh?? everybody knows that if you want to use a telephone, you get a landline. mobile phones ceased being about phone calls at least a decade ago. what?? for a Grand$ you want to talk to someone??? how Neanderthal.
PeachNCream - Friday, October 5, 2018 - link
Hah! I love the sarcasm!
On a serious note though, I do wonder how long we'll even have a phone network or carriers that treat voice, text, and data as individual entities. We can and have already been able to do VoIP for what feels like ever and calling over WiFi is a thing. It'd make sense to just buy data and wrap up voice and text inside 4G or whatever.
FunBunny2 - Friday, October 5, 2018 - link
"It'd make sense to just buy data and wrap up voice and text inside 4G or whatever."I suspect that some scientists/engineers have run the numbers to see what that does to data capacity. which may be why it hasn't happened. or, it could also be that the data scientists at the carriers (and phone vendors) have found that smartphone users really don't make enough phone calls to warrant supporting decent quality.
patel21 - Saturday, October 6, 2018 - link
Exactly. In India we have a new carrier service called Jio doing exactly the same. They built their network from the ground up on VoIP and only sell data. Calls and SMS are totally free.
And man, can you imagine, I am using their plan of 2GB of data per DAY for 90 days at just USD 8!!!
phoenix_rizzen - Friday, October 5, 2018 - link
You mean Qualcomm, not 3Com. :)
bull2760 - Sunday, October 7, 2018 - link
Yes, thank you. Sorry, I meant Qualcomm, my bad.
varase - Friday, October 5, 2018 - link
I just downloaded 12.1 Developer Beta 2, and in my 2-bar AT&T household I went up from about 11 Mbps to 38 Mbps.
My local cell tower probably doesn't have 4x4 MIMO, though my Orbis are pushing 298 Mbps through WiFi and saturating my internet link.
It's a little early to jump ship - there's bound to be a few rough edges that'll have to get filed down :-).
Speedfriend - Monday, October 8, 2018 - link
What, your mobile data speed has been only 11mbps, where do you live? How on earth do you use your phone like that! I regularly get over 100mbps where I live in London, with the highest I have recorded being 134mbps
bull2760 - Sunday, October 14, 2018 - link
Little early to jump ship? Do you remember antennagate? Steve Jobs' answer to a design issue: "you are holding the phone wrong". Sorry, I'm not forking out 1500 for a phone that has cellular as well as network-related issues. How this ever passed inspection in testing is beyond me. I'm an Apple fan and have owned every iPhone since they started making them. This is without a doubt the worst-performing iPhone I ever purchased. Yes, it was snappy and apps flew open. But if I can't use the phone aspect or wireless networking at home, I may as well hold a paperweight to my ear. I'm heavily entrenched in the Apple ecosystem: watches, TV, iPads and a MacBook Pro. Not to mention all the movies, music and apps that I've purchased. This pisses me off. 1500 for a piece of shit phone that was not properly tested before being released.
Marlin1975 - Friday, October 5, 2018 - link
The thing that jumps out to me is the power usage and performance gains at the same time. TSMC's 7nm process looks really good. I wonder if this will also play out on CPUs/GPUs on the same process coming soon.
melgross - Saturday, October 6, 2018 - link
It’s hard to say how much of what Apple gets out of these processes others will get. Apple has bought a lot of equipment for TSMC, as they’ve done for other companies, including Samsung, over the years. What they get out of it is early access to new processes, as well as some more specialized features that they have often developed themselves, as well as extra discounts.
KPOM - Friday, October 5, 2018 - link
Intel should be worried.
varase - Friday, October 5, 2018 - link
I still think the star of the show may end up being the neural processor, up from 660 billion ops/sec to 5 trillion (9x the speed at a tenth the energy usage) - now available to developers via Core ML 2, and now pipelined through the ISP.
NICOXIS - Friday, October 5, 2018 - link
I wonder how A12 would do on Android or Windows... I'm not into Apple but they have done a fantastic job with their SoC
LiverpoolFC5903 - Tuesday, October 9, 2018 - link
Probably nowhere near as good as it is projected here.
What is the bloody point of these supposedly super fast SoCs with massive caches and execution units when the OS running on them is a highly limited, crippled and locked-down OS like iOS? No emulators, no support for USB peripherals like gamepads, mice or keyboards, a poor excuse for file management, no access to outside apps, no ability to run non-WebKit-based browsers, no ability to seamlessly transfer data to and from Windows desktops, and multiple other limitations.
It's all well and good getting excited over this SoC, but it's like having a 450 BHP Ferrari running on an 80 mph speed limiter with an automatic transmission. All that power is useless and will remain unutilised.
Until they allow other OSes to run on their hardware, the comparisons are meaningless. I can do a hell of a lot more on a puny 835 than I will ever be able to do on an A11 or A12.
iSeptimus - Tuesday, October 9, 2018 - link
Wow, so much wrong with this statement.
bulber - Friday, October 12, 2018 - link
You are wrong and I know you are too narcissistic to realize that. I would pity you but your username tells me you don't deserve any pity.
Barilla - Friday, October 5, 2018 - link
Fantastic hardware (well, except the notch), shame it's only locked to one OS. Would love to see an Android phone with hardware like that.
Matte_Black13 - Friday, October 5, 2018 - link
Well Qualcomm is working on quite a few things, new SoC, NPU, and Samsung is totally redesigning their GPU, so... maybe sooner than later (S10)
varase - Friday, October 5, 2018 - link
It'll be interesting to see what comes of that effort - so much of the Android market is feature phone replacements - the actual percentage which represents high end flagship phones is pretty slim.
If a bigger percentage of the Android market is willing to pay for silicon with that increased compute capacity, there may be hope.
melgross - Saturday, October 6, 2018 - link
But will they? Microprocessor Report says that each new SoC from Apple costs, in the year of introduction, between $34-38. Top Android SoCs cost between $24-28. That's a big difference. These phones compete with each other on price, while Apple mostly competes against itself.
melgross - Saturday, October 6, 2018 - link
The reason, or I should say, one of the reasons why other manufacturers are copying Apple’s notch, is because it allows a full screen phone. Notice that Samsung has edge to edge, but still retains a shorter screen on both the top and bottom. The notch removes 2-3% of the screen area, depending on screen size, which is very little. Now take the Galaxy 9+, and you’ll notice that when the entire top and bottom cutoffs are added together, it amounts to about 10% of the possible screen size.
It’s why the new Pixel and others are moving to the notch. In the cases where the notch is just being copied, it’s ridiculous, because those companies are still using a chin on the bottom, and the notch is just there to copy the iPhone.
id4andrei - Saturday, October 6, 2018 - link
No. They are copying the iPhone for the look. If they were copying to maximize screen area they would copy Essential. Apple removed the chin, and the notch is needed for Face ID - they have a good reason. Some copycats (not all) ape the look and keep the chin. A good compromise is the impending OnePlus T: it has a teardrop notch and a minimized chin that makes it look actually good. Asus is on the other side of the spectrum; they even bragged that they wanted to copy the look of the iPhone X.
id4andrei - Saturday, October 6, 2018 - link
I didn't catch the 2nd part of your comment. It seems we agree. Anandtech needs an editing system.
warrenk81 - Friday, October 5, 2018 - link
Great to see mobile phone reviews return! Hope you can keep up the momentum with the Pixel 3
Ikefu - Friday, October 5, 2018 - link
And LG V40!
leo_sk - Friday, October 5, 2018 - link
The way the globe in the default wallpaper just misses the notch... Apple, if you acknowledge that it is ugly, then at least try to reduce its size.
EnzoFX - Friday, October 5, 2018 - link
Not on an S release! It bugs me a little that people want these evolutionary changes in all aspects of the phone when it's an S release, a trend that has long been established to be more about a few internal changes.
Also, the X was a pretty big change; it took everyone a bit to calibrate to the new hardware, but now with the S release nothing is enough, when an S iPhone is never meant for the n-1, non-S iPhone buyer. These people, if upgrading, would probably go with a newer phone rather than a year-old phone.
cmaximus - Friday, October 5, 2018 - link
Great review! I have a question regarding the off angle color shift of the screen. Given even small off angle viewing (5-10 degrees), I noticed a considerable blue shift in the color. Is this normal for this screen? It's not a big deal in practice, as I rarely look at my phone off angle when using it, but it'd be nice to know if this is expected behavior or not.
melgross - Saturday, October 6, 2018 - link
All OLEDs look bluish off-angle to the side. Some are a bit better. The LG screens are a lot worse.
TEAMSWITCHER - Friday, October 5, 2018 - link
I upgraded from a regular iPhone 7 to the XS Max. The biggest surprise for me is Face ID. It's far more reliable than a fingerprint scanner. I do lots of home improvement and car repair projects and the fingerprint scanner would always fail due to roughed-up hands. Face ID is so fast, you barely notice it. And it works from harsh angles - I can pull the phone from my pocket, look down at it, and it unlocks... every time. The "notch" is not ideal... but it does enable a feature that ACTUALLY works better (for me) than Touch ID ever did.
Golgatha777 - Friday, October 5, 2018 - link
Wow, we live in the age of $1200 phones and video cards. Truly an age of wonders for the 10%ers!
PeachNCream - Friday, October 5, 2018 - link
There are less expensive products for people that don't have or don't want to spend as much, but I do agree in principle. The XS and Max are amazing phones that come with an outrageous price tag. The A12 is an impressive SoC, but it should be, given that the handset it's inside of costs more than the retail price of most desktops and laptops.
FreckledTrout - Friday, October 5, 2018 - link
Pretty typical with any high end products. The top 10% pave the way for the rest to have these products at a reasonable price a few years later. You can get an iPhone 7 pretty cheap now.
MonkeyPaw - Friday, October 5, 2018 - link
It’s still cheaper than my first PC, a 486SX2 running at 50 MHz. RAM and hard drives were still measured in megabytes, and the internet made noise before connecting and tied up your phone line when you used it. There has also been about 20 years of inflation. Flagship smartphones are expensive, but they sure do a lot. That doesn’t mean I’m buying one, but we’ve come a long way in my hopefully-less-than-half-a-lifetime.
keith3000 - Friday, October 5, 2018 - link
OMG! Exactly what I was thinking as I read this review on my $225 T-Mobile Rev VL Plus. I may not be able to afford such a technological marvel as the iPhone XS Max, but I bet I get anywhere from 80 to 90% of the overall functionality for one-fifth the price. I've bought many premium smartphones over the years, starting with the HTC EVO 4G LTE many years ago, followed by the Samsung Galaxy S3, then the S4, and even the gigantic Asus Zenfone 3 Ultra. Each phone was better than the one before, and yet each was a major disappointment to me for various reasons which I won't go into here. Suffice to say that the ever-increasing cost of each phone raised my expectations about what they should be able to do, and thus contributed to my sense of disappointment when they failed to live up to the hype. So when the Zenfone 3 crapped out on me after less than a year of use and I saw this cheap Rev VL Plus, I decided to stop wasting so much money on these overpriced devices and buy something that wouldn't leave me feeling robbed or cheated if it didn't turn out to be the "next best thing". Now, after almost a year of use, I feel like it was a good decision. And if something better comes along in a few months at a similar price point, I can buy it without feeling remorse for having wasted so much money on a phone that didn't last very long. So all you 10-percenters - go ahead and throw away $1,200 on a phone. I'm quite content to have a 2nd rate phone and save a thousand dollars.
ws3 - Sunday, October 7, 2018 - link
You say you spent $225 on your phone less than (but almost) a year ago and then say that you would be willing to replace it immediately if some other phone interested you. So you are apparently willing to spend around $225 for one year of ownership of a phone.
By this metric you should be willing to spend $1000 on a phone provided you keep it for 4 years or more.
Now it may be the case that you don’t want to keep any phone for four years, and so the iPhone X[S] is not for you. But here I am with a four-year-old iPhone 6+ that still works great (thanks to iOS 12). I similarly expect the iPhone X[S] to be good for four years at least, so, although I am not a “10%er”, I am seriously considering purchasing one.
It’s simply a fallacy to assert that only the wealthy would be interested in the latest iPhone models.
FunBunny2 - Sunday, October 7, 2018 - link
"Now it may the the case that you don’t want to keep any phone for four years, and so the iPhone X[S] is not for you. But here I am with an four year old iPhone 6+, that still works great (thanks to iOS 12). "ergo, Apple's problem. unfulfilled TAM for iPhones is disappearing faster than kegs at a Georgetown Prep gathering. keeping one longer than a cycle is a real problem for them. they will figure out a way to stop such disloyalty.
ex2bot - Sunday, October 7, 2018 - link
They’ll find a way, like supporting the 5S and later with iOS 12. /s
icalic - Friday, October 5, 2018 - link
Hi Andrei Frumusanu,
Thanks for the extraordinary review of the iPhone Xs!
On page one you said the A12 GPU is a 4-core "G11P" @ >~1.1GHz; I have several questions.
1. How do you estimate that clock speed?
2. If you know the clock speed, can you estimate how many GFLOPS FP32 and FP16 the A12 GPU delivers?
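There's no official figure, but the usual back-of-envelope is lanes × 2 (an FMA counts as two FLOPs) × clock. A sketch with an assumed lane count, since Apple discloses none:

# Rough GPU FP32 throughput estimate. The 128 FP32 lanes per core below is
# an illustrative assumption only; Apple does not publish the real figure.
def gflops_fp32(cores, lanes_per_core, clock_ghz):
    return cores * lanes_per_core * 2 * clock_ghz  # 2 FLOPs per FMA per cycle

print(gflops_fp32(4, 128, 1.1))  # ~1126 GFLOPS FP32 under these assumptions
# FP16 would typically be about 2x that on GPUs with double-rate half precision.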
syxbit - Friday, October 5, 2018 - link
Great review of the SoC.
Please, when you review the Pixel 3, or (in 2019) updated Snapdragons, hold them to this bar.
I get really frustrated when I see your (or other) reviews complimenting the latest Snapdragon even though they're miles behind the Ax.
As an Android user, I find it very unfortunate that to get my OS of choice I must get inferior hardware.
edzieba - Friday, October 5, 2018 - link
Phone reviews are a review of the phone, not just the SoC.
syxbit - Friday, October 5, 2018 - link
I know that, but the SoC is the area where Apple are completely dominant.
Speedfriend - Monday, October 8, 2018 - link
So you would expect them to use that powerful SOC to deliver real battery improvements, but somehow they can't. No one I speak to complains that their modern smartphone is slow, but everyone complains about battery life.
melgross - Saturday, October 6, 2018 - link
It’s both. The deep dive isolates the SoC to a great extent. It can be done with any phone.
eastcoast_pete - Friday, October 5, 2018 - link
Andrei, thanks for the review! Yes, these are outstanding phones at outrageous prices. I appreciate the in-depth testing and detailed background, especially on the A12's CPU and GPU. While I don't own an iPhone and don't like iOS, I also believe that, phone-wise, the XS and XS Max are the new kings of the hill. The A12's performance is certainly in PC laptop class, and I wonder if (or how) the recent Apple-Qualcomm spat that kept QC's modem tech out of the new iPhones has helped Intel to keep its status as CPU provider for Apple's laptops, at least for now.One final comment, and one question: Andrei, I agree with you 100% that Apple missed an opportunity when they decided on a rather middling battery capacity for the XS Max. If I buy a big phone, I expect a big battery. Give the XS Max a 5000 mAh or larger battery, and it really is "the Max", at least among iPhones. At that size, a few mm additional thickness are not as important as run time. Maybe Apple kept that upgrade for its mid-cycle refresh next year - look, bigger batteries.
@Andrei. I remember reading somewhere that the iPhone X and 8 used 128 bit wide memory buses. Questions: Is that the case here, and how does the memory system and bus compare to Android phones? And, in your estimate, how much of the A12's speed advantages are due to Apple's memory speeds and bus width ?
dudedud - Friday, October 5, 2018 - link
I was sure that only the AxX chips were 128-bit, but I would also want to know if this has changed.
RSAUser - Saturday, October 6, 2018 - link
The A12 is definitely not in the laptop class unless you're looking at the extreme low-power-usage tier.
Just because it is quite a bit faster than the equivalent on mobile does not mean it can compete at a different power envelope. If that were true, Intel would already have dominated the SoC market. It requires a completely different CPU design. It's why they can use it for the Touch Bar on the MacBook but not as a main processor.
ws3 - Sunday, October 7, 2018 - link
This review did not compare the A12 with “mobile” Intel chips but rather with server chips. The A12 is on par with Skylake server CPUs on a single-threaded basis. Let that sink in.
As to why Intel doesn’t dominate the SoC space, Intel’s designs haven’t been energy efficient enough, and the x86 instruction set offers no advantage on mobile.
tipoo - Thursday, October 18, 2018 - link
It's already competing with laptop and desktop class chips, not just mobile fare. It's up there per core with Skylake server, and NOT normalized per clock, just core vs core.
It's like people don't read these articles year over year and are still using lines from when A7 arrived...
tipoo - Thursday, October 18, 2018 - link
Only the A10X and A8X were 128 bit, on mobile that's still power limited for memory bandwidth.
juicytuna - Friday, October 5, 2018 - link
Apple's big cores are like magic at this point. 2-3x the performance per watt of the nearest competitors is just ridiculous.
sing_electric - Friday, October 5, 2018 - link
I know this is almost a side point, but this really goes to show what a mess Android (Google/Qualcomm) is compared to iOS. At the rate Snapdragon is improving, it'll be 2020/2021 before Qualcomm sells a chip as fast as 2017's A11, and Google is shooting itself in the foot by not having APIs available that take advantage of Snapdragon's (relative) GPU strength.
That's on top of other long-term Android issues (like how in 2018, Android phones still can't handle a 1:1 match of finger movement to scrolling, which the iPhone could in 2008). Honestly, if I weren't so invested in Android at this point, I'd really consider switching now.
EnzoFX - Friday, October 5, 2018 - link
I switched. I'm a long-time Nexus user and I feel Google left the value-oriented base behind. It took me saying screw it and paying more, for sure, but it has been worth it. I found the hardware lacking, and the software too, since they went to Pixel. Sure, on paper it sounded great or fine, but issue after issue would creep up. I never had that many problems with iOS, and I feel it's mature enough, coming from someone who likes to change settings. The X sold it for me too, hardware-wise. It was where I saw things going years ago and I'm glad it's here. (Waiting on the Max now however, as I want the real estate!)
RSAUser - Saturday, October 6, 2018 - link
Android just doesn't have direct CPU optimization for latency with scrolling.
That said, look at phones with Android 8+, that "issue" is pretty much fixed.
For me, most of iOS past iOS 7 is not really smooth-scrolling anymore; it was one of the first things I noticed back then. There were dropped frames. You'd probably blame hardware though, as it was sorted when upgrading to an iPad Air later. Still not a fan of a lot of the UI changes in 11, tbh.
tipoo - Thursday, October 18, 2018 - link
I complained about frame drops from iOS7-11, often to crowds of people who would just deny it and say I was seeing things, but Apple addressed it in a WWDC talk and it's much much better in iOS12.
I can still notice a frame drop here and there if I'm being picky, but it's vastly improved, I'm guessing 1 frame drop to every 10 on iOS11.
tipoo - Thursday, October 18, 2018 - link
Since the Pixel 1 it felt pretty well as tight as my iPhone on keeping up with my finger imo (though 120hz touch sensing iPhones may have pulled ahead again).
id4andrei - Friday, October 5, 2018 - link
So the A12 is basically a Skylake. Also, on AnandTech I read that the first iPad Pro was almost a Broadwell (like in the 12" MB). Makes sense. A-series-powered MacBooks surely are in the future.
The color management system is again what puts iOS above Android. Samsung tries with color profiles but it's not a solution. Google fails its ecosystem yet again. Also, OpenCL is a mess, no wonder Apple dropped it. It's unreasonable for you to expect Google to throw its weight behind it.
The only thing better than A12 is this review. Absolutely SMASHING review Andrei! Your SoC analysis in particular, off the charts awesome; way more descriptive than lowly Geekbench.
tipoo - Thursday, October 18, 2018 - link
Problem then is Google just has no good GPGPU toolchain if they don't get behind OpenCL. What else is there?
They did try RenderScript, to limited uptake.
tecsi - Friday, October 5, 2018 - link
Andrei,
Great work, and a welcome surprise to see these thorough AnandTech reviews return.
I found the photo and video discussions particularly informative and compelling.
Thanks so much for all your hard work and detailed analysis.
Barry
tecsi - Friday, October 5, 2018 - link
Andrei, could you expand on a few camera issues:
- Is it correct that the wider video color range is ONLY at 30fps? Why would this be?
- I have always videoed at 60fps, finding 30fps very noticeable with much movement. If this is correct, it seems like this 30fps color improvement only works in a limited number of situations, with very little movement
- Given the A12 performance, why can’t Apple have 480fps or 960fps SloMo like Samsung?
- Finally, with the Neural Engine, will Apple potentially be able to improve the camera system by re-programming this?
Thanks, Barry
Andrei Frumusanu - Friday, October 5, 2018 - link
> - Is it correct that the wider video color range is ONLY at 30fps?
Probably the sensor only works at a certain speed and the HDR works by processing multiple frames. Btw, it's wider dynamic range, not colour range.
> Given the A12 performance, why can’t Apple have 480fps or 960fps SloMo like Samsung?
Likely the sensor might be missing dedicated on-module DRAM - which seems to be a requirement for those high framerates.
> - Finally, with the Neural Engine, will Apple potentially be able to improve the camera system by re-programming this?
They can improve the camera characteristics (choice of exposure, ISO, etc) but I find it unlikely they'll get into things that actively improve image quality - that's something next-gen.
s.yu - Monday, October 8, 2018 - link
The on-module DRAM reduces SNR, AFAIK.
Glindon-P - Friday, October 5, 2018 - link
Wider color range only works at 30fps because the camera actually records 2 frames at different exposures (at 60fps) and combines them.
As for higher-FPS slow-mo, I'm sure it boils down to taking in enough light at high frame rates to be usable, or it's just not something enough users care about. Anecdotally, I've only used it just to test it and never again.
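As a toy illustration of that kind of two-exposure merge (a minimal sketch, not Apple's actual pipeline):

import numpy as np

# Blend a short and a long exposure per pixel: favor the long exposure in
# the shadows and the short exposure where the long one is near clipping.
def fuse(short_exp, long_exp):
    w = np.clip(long_exp, 0.0, 1.0)  # how close the long exposure is to clipping
    return (1.0 - w) * long_exp + w * short_exp

short = np.array([0.05, 0.40, 0.70])  # retains highlight detail
long_ = np.array([0.40, 0.95, 1.00])  # retains shadow detail
print(fuse(short, long_))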
varase - Tuesday, October 23, 2018 - link
480 or 960fps slow-mo is basically a gimmicky misnomer - how long can they sustain that frame rate before all the buffers fill up?
How many takes does it require to actually capture the action you're trying to film within the window where that high frame rate actually operates?
Star_Hunter - Friday, October 5, 2018 - link
In past iPhone reviews NAND performance was looked at, I assume since it wasn't included this year that it remains the same?
Andrei Frumusanu - Friday, October 5, 2018 - link
NAND is something on the to-do list in terms of revamping the test methodology - currently it's a mess both on iOS and Android.
whiskeysips - Friday, October 5, 2018 - link
Can AnandTech take a photo of how the iPhone XS reproduces the following image on their review model?
https://i.postimg.cc/q7wty6zY/Image-1.jpg
I have (5) iPhone XS units with very poor color reproduction on the following image, especially compared to my older iPhone 7; see the example below:
https://i.postimg.cc/fbgFNbb1/226_E617_D-_A701-43_...
All of my iPhones also have a white point that appears significantly lower than 6500k judging by my eye. I do not have a colorimeter, but they do seem significantly off with certain content.
Unless the review models are cherry picked, I do not see retail units reaching the same quality.
Andrei Frumusanu - Friday, October 5, 2018 - link
How exactly are you expecting to test colour accuracy of an image through a photo?
As far as I'm aware, the phones aren't cherry-picked and they were sealed and the battery was uninitialised.
whiskeysips - Friday, October 5, 2018 - link
The reference image is a screen shot of the content to be displayed, so source content should be preserved.
The color differences in the camera photo do reflect what I see with my eyes to a significant degree.
That is, on my iPhone7 and all my monitors in the house, the screenshot appears deep red at the top with a greyish red on the listed content.
On all the iPhone XS's in household, the content appears light red at the top, with the listed content becoming a distinct shade of brown instead of a greyish red.
I appreciate you reading my post. The displays on my models do not seem all that accurate to me.
PhilJohn - Friday, October 5, 2018 - link
Have you properly calibrated the other phones and monitors in your house with a colorimeter? You'd be surprised at the awful dE on most monitors out of the box.
When the displays were tested for colour accuracy they were marked very highly, so it could very well be that the XS is showing the CORRECT colours.
whiskeysips - Friday, October 5, 2018 - link
So you are saying that all my previous iPhones I have retired, my current MacBook, my spouse's iMac, all the TVs in the household, and my two (non-Apple) desktops are all incorrect?
AMC's brand color is red. The XS screen shows something similar to dried blood or a brown-red.
Did the app designers also have incorrectly calibrated monitors? The XS calibration is the problem, not the other way around.
PhilJohn - Friday, October 5, 2018 - link
And you've got Night Shift and True Tone off? It could just be that you have a dodgy OLED panel; check it against the ones in a store and exchange it if so.
But you'd be surprised how awful most consumer electronics are for colour calibration; people like "pop" and "vivid" even when it's totally oversaturated and nowhere near accurate.
The calibration charts in this article should point to the XS having exceedingly accurate colours.
whiskeysips - Friday, October 5, 2018 - link
I compared against the four other iPhone XS units in my household and all of them failed the red reproduction test.
I also noticed all the displays have a warm white point (looks noticeably lower than 6500K). When I was next to a T-Mobile store I stepped in, turned off True Tone, and observed the same.
I will have to AirDrop the screenshot to an Apple Store display, but with so many witnessed off-spec displays (7 out of 7 out of spec), I am convinced anything in spec is cherry-picked.
Of course True Tone is off.
Maybe this reviewer can take a photo of the AMC red next to a calibrated display.
genkihito - Sunday, October 7, 2018 - link
The website can choose to send different content to different models of iPhone. Or, if the color is in the wide color range, the phones would show the same color differently if one has P3 and one doesn't.
whiskeysips - Tuesday, October 9, 2018 - link
The iPhone 7 was the first iPhone with the DCI-P3 color gamut. This is an apples-to-apples comparison.
tim1724 - Friday, October 5, 2018 - link
My iPhone XS and my iMac 5K show essentially identical images. It's in between your 7 and XS, but closer to the 7. I think your XS might have a defective display, unless True Tone is doing something funky due to your lighting.
whiskeysips - Friday, October 5, 2018 - link
If my display is defective, then all five of my iPhone XS units in the household are defective.
See the following picture that pits two iPhone 7s against two iPhone XSs.
https://i.postimg.cc/dJn4hLsK/output.png?dl=1
The iPhones are different colors and have varying production dates, according to the serial number decoder.
True Tone is off.
Pneumothorax - Sunday, October 7, 2018 - link
You need to be using reference photos to compare colors, not a movie app.
whiskeysips - Sunday, October 7, 2018 - link
Are you trying to insinuate that only certain colors, e.g. reference shades, are supposed to match, and any colors in between are a free-for-all?
What difference does it make if I am comparing an app with many colors on a gradient, or a few shades from the gradient itself?
If you really want to be that pedantic, I can pick 10 shades from the AMC app and rectangles of pure color to act as a better reference.
With a discrepancy so large, I fail to see how this would produce different results.
whiskeysips - Sunday, October 7, 2018 - link
If anything, a larger collection of colors is easier to discern than only comparing two shades at a time.
mmcmah - Friday, October 5, 2018 - link
Thanks for the great, in-depth review!
Given the problems being discussed on other sites, I was also hoping to see a deep dive on the WiFi and LTE capabilities/issues. Is that something that you are looking into?
Many thanks again.
saleri6251 - Friday, October 5, 2018 - link
Hey Andrei,
So what other phones will you be reviewing this year? Any chance you'll be reviewing the Pixel 3/3XL?
Andrei Frumusanu - Friday, October 5, 2018 - link
Pixels and Mate 20 are next in line.
name99 - Friday, October 5, 2018 - link
Hi Andrei,
A few comments/questions.
- the detailed Vortex and GPU die shots seem to bear no resemblance to the full SoC die shot. I cannot figure out the relationship no matter how I try to twist and reflect...
Because I can't place them, I can't see the physical relationship of the "new A10 cache" to the rest of the SoC. If it's TIGHTLY coupled to one core, one possibility is value prediction? Another suggested idea that requires a fair bit of storage is instruction criticality tracking.
If it's loosely coupled to both cores, one possibility is it's a central repository for prefetch? Some sort of total prefetching engine that knows the usage history of the L1s AND L2s and is performing not just fancy prefetch (at both L1s and L2s) but additional related services like dead block prediction or drowsiness prediction?
Andrei Frumusanu - Friday, October 5, 2018 - link
The Vortex and GPU are just crops of the die shot at the top of the page. The Vortex shot is the bottom core rotated 90° counter-clockwise, and the GPU core is either the top-left or bottom-right core, again rotated 90° CCW so that I could have them laid out horizontally.
The "A10 cache" has no relationship with the rest of the SoC; it's part of the front-end.
name99 - Friday, October 5, 2018 - link
OK, I got ya. Thanks for the clarification. I agree, no obvious connection to L2 and the rest of the SoC. So value prediction or instruction criticality? VP mostly makes sense for loads, so we'd expect it near LS, but criticality makes sense near the front end. It's certainly something I'm partial to, though it's been mostly ignored in the literature compared to other topics. I agree it's a long shot, but, like you said, what else is that block for?
name99 - Friday, October 5, 2018 - link
"The benchmark is characterised by being instruction store limited – again part of the Vortex µarch that I saw a great improvement in."Can you clarify this? There are multiple possible improvements.
- You state that A12 supports 2-wide store. The impression I got was that as of A11, Apple supported the fairly traditional 2-wide load/1-wide store per cycle. Is your contention that even as of A11, 2 stores/cycle were possible? Is there perhaps an improvement here along the lines of: previously the CPU could sustain 3 LS ops/cycle (pick a combination of up to 2 loads and up to 2 stores) and now it can sustain 4 LS ops/cycle?
- Alternatively, are the stores (and loads) wider? As of A11, the width of one path to the L1 cache was 128 bits. It was for this reason that bulk loads and stores could run as fast using pair load-store integer as using vector load-stores (and there was no improvement in using the multi-vector load-stores). When I spoke to some Apple folks about this, the impression I got was that they were doing fancy gathering in the load-store buffers before the cache, and so there was no "instruction" advantage to using vector load/stores; whatever instruction sequence you ran, it would gather as aggressively and as widely as possible before hitting the cache. So if the LS queue is now gathering to 256 bits wide, that's going to give you double the LS bandwidth (of course for appropriately written, very dense back-to-back load/stores).
- Alternatively, do you simply mean that non-aligned load/stores are handled better (e.g. LS ops that crossed cache lines were formerly expensive and now are not)? You'd hope that code doesn't do many of these, but nothing about C-code horrors surprises me any more...
BTW, it's hard to find exactly comparable numbers, but
https://www.anandtech.com/show/11544/intel-skylake...
shows the performance of a range of different server class CPUs on SPEC2006 INT, compiled under much the same conditions. A12 is, ballpark, about at the level of Skylake-SP at 3.8 to 4GHz...
(Presumably Skylake would do a lot better in *some* FP because of AVX512, but FP results aren't available.) This gives insight across a wider range of x86 servers than the link Andrei provided.
The ideal would be to have SPEC2006 compiled using XCode for say the newest iMac and iMac Pro, and (for mobile space) MacBook Pro...
Andrei Frumusanu - Friday, October 5, 2018 - link
> Is your contention that even as of A11, 2 stores/cycle were possible?

Yes.
> - Alternatively, are the stores (and loads) wider?
Didn't verify, and very unlikely.
> - alternatively do you simply mean that non-aligned load/stores are handled better
Yes.
remedo - Friday, October 5, 2018 - link
Can you please review the massive NPU? It seems like the NPU deserves a lot more attention given the industry trend.
Andrei Frumusanu - Friday, October 5, 2018 - link
I don't have any good way to test it at the moment.
Ansamor - Friday, October 5, 2018 - link
Aren't these (https://itunes.apple.com/es/app/aimark/id137796825...) tests cross-platform or comparable with the ones from Master Lu? I remember that you used it to compare the Kirin 970 against the Qualcomm DSP.
Andrei Frumusanu - Friday, October 5, 2018 - link
Wasn't aware it was available on iOS, I'll look into it.
Ansamor - Friday, October 5, 2018 - link
Same app for Android: https://play.google.com/store/apps/details?id=com....
tim1724 - Friday, October 5, 2018 - link
My iPhone XS scored 2162. :)
DERSS - Saturday, October 6, 2018 - link
Is it much versus Kirin and Qualcomm or not?shank2001 - Saturday, October 6, 2018 - link
2868 on my XS Max
name99 - Saturday, October 6, 2018 - link
But it is unclear that the benchmark is especially useful. In particular, if it's just generic C code (as opposed to making special use of the Apple NN APIs) then it is just testing the CPU, not the NPU or even NN running on the GPU.
You scored 2162. The iPhone 6S scores 642 (according to the picture). That sort of 3.5x difference to me looks like a lot less than the boost I'd expect from an NPU, and may just reflect a basically 2x better CPU plus the availability of the small cores (not present on the iPhone 6S).
edwpang - Friday, October 5, 2018 - link
There are no storage, network, and phone tests. Hopefully, these tests will be included in a future update.
name99 - Friday, October 5, 2018 - link
"Apple promised a significant performance improvement in iOS12, thanks to the way their new scheduler is accounting for the loads from individual tasks. The operating system’s kernel scheduler tracks execution time of threads, and aggregates this into an utilisation metric which is then used by for example the DVFS mechanism."This is not the only changes in the newest Darwin. There are also changes in GCD scheduling. There was a lot of cruft surrounding that in earlier Darwins (issues of lock implementations, how priority inversion was handled, the heuristics of when a task was so short it's cheaper to just complete it than give up the CPU "for fairness --- but everyone then pays the switching cost"). These are even more difficult to tease out (and certainly won't present in single-threaded benchmarking) but are considered to be significant. There's also been a lot of thinking inside Apple about best practices for GCD (and the generic problem of "how to use multiple cores") and this has likely been translated into new designs within at least some frameworks and Apple's tier1 apps.
You can see this discussed here:
https://gist.github.com/tclementdev/6af616354912b0...
sheltem - Friday, October 5, 2018 - link
Can we chalk up the improvements of the 2x lens to computational HDR, or is there a hardware improvement as well?
darkich - Friday, October 5, 2018 - link
I just can't wait for Apple to FINALLY flesh out their in-house Mac chips.
Not because I love Apple, but simply because I think the end result will be spectacular and outright shocking for Intel... and I do hate Intel.
They are disgustingly overrated.
varase - Tuesday, October 23, 2018 - link
I hope it's a good while ... I *need* VMware and the ability to run Windows in a VM (for work).
Not to mention, I'd be really disappointed if I couldn't boot Windows for game play.
name99 - Friday, October 5, 2018 - link
"I do hope Samsung and Apple alike would be able to focus more on optimising this, as like we’re about to see, it will have an impact on battery life."Presumably Apple's longer term plan (next year?) is to move to the LTPO technology in the aWatch4 display?
https://appleinsider.com/articles/18/08/24/future-...
https://www.reddit.com/r/apple/comments/9ga4wa/dis...
This appears to be an Apple specific (ie non-Samsung) tech, though who knows exactly what the web of patents and manufacturing agreements ultimately specifies...
SanX - Friday, October 5, 2018 - link
Similar to mobile processors' performance surpassing some desktop ones, mobile devices' image quality is also surpassing 5-7 year old DSLR tech. Your photo test suite would be great if it added the Still Life, Resolution and Portrait tests from the pro photo testing sites like Imaging-resource dot com. I'd just send the phones to these folks and let them do their tests, so we would see the progress in comparison with all cameras of all times and manufacturers. Right now the field tests of AnandTech alone look poor and are not enough to tell what is good and what is missing in the cameras. Given the great progress, the time for such deeper testing of mobile devices has come.
Pneumothorax - Sunday, October 7, 2018 - link
No way my XS Max is touching my 10-year-old D700 with a cheap $100 50mm f/1.8 lens in anything over base ISO. The iPhone starts applying some serious noise reduction at higher ISOs, making it watercolor-like. Even at base ISO, the D700 has much better fine detail.
Not to dispute your fundamental point (bigger lenses and bigger sensors can't really be replaced), but there are several low-light photo apps for the iPhone which support much longer exposure than the standard one and this can of course help at least with static images when you've got a fixed position for the iPhone.yvn - Friday, October 5, 2018 - link
I was surprised you said excellent viewing angles; I disagree! If you tilt the screen just a little up or down, left or right, the color changes a lot; it becomes more blue as you tilt more, and I couldn't get used to this, so I returned the phone. My iPhone 7 Plus had no such behavior.
So... dual-SIM finally (the only phone type I buy), 4x4 MIMO, 7nm, but still no Android?!?
Would it KILL Apple to offer an Android version, I mean really, would it? Even Sony broke in the end...
Guess they don't need my money, but nice handset, if a bit pricey.
cfenton - Saturday, October 6, 2018 - link
I can't tell if this is a serious comment. Apple makes something like 85% of the profit in the mobile phone market. They have absolutely no reason to even consider putting out an Android phone.
Speedfriend - Tuesday, October 9, 2018 - link
Oh, that rubbish stat again. That is only in the narrowly defined mobile market. Apple makes its own OS, designs many of its own chips and controllers, sells its own insurance, runs its own app store and its own retail shops. It owns many more parts of the overall value chain. For an accurate comparison, we would need to include what Qualcomm and Samsung make from SoCs, what Samsung and LG make from screens, what Samsung makes from memory, etc.
varase - Tuesday, October 23, 2018 - link
What, are you kidding?
I'm sure one reason for the push into custom silicon was that it was one thing Android OEMs couldn't copy.
Hifihedgehog - Saturday, October 6, 2018 - link
I am a little confused here by your rather bold assertion of desktop-class performance. Correct me if I am wrong, but aren't the test results here for SPECint2006 inclusive of all cores, whereas the Intel processor's score you linked to is for a single thread? If so, then comparing six cores versus a single thread of a multithreaded desktop CPU is not very fair or valid. Again, I could be totally wrong, but I just wanted to point this out.
Andrei Frumusanu - Saturday, October 6, 2018 - link
Eh no, it's single thread comparisons everywhere.
Alistair - Saturday, October 6, 2018 - link
Thank you for making that clear. See my comment below with links; I put your data into a chart to compare. Is it fair to say IPC for the A12 in integers is up to 64 percent higher?
Hifihedgehog - Saturday, October 6, 2018 - link
Awesome! Thanks for the clarification. For my own education, is SPECint2006 a single-threaded test by its very nature? Can it be configured for multithreading as well? If so, do you have multithreaded results forthcoming? I would love to see where Apple falls in that area as well.
Andrei Frumusanu - Saturday, October 6, 2018 - link
SPECspeed is the score of running a single instance; SPECrate is running as many instances as there are cores on a system.
It's not possible to run SPECrate on iOS anyhow without some very major hacky modifications of the benchmark, because of the way it's ported without running in its own process. For Android devices it's possible, but I never really saw much interest or value in it.
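A toy model of that speed/rate distinction (this is not the actual SPEC harness, just an illustration of one instance versus one instance per core):

```python
import multiprocessing as mp
import time

def workload(_):
    # stand-in for one benchmark instance
    return sum(i * i for i in range(10**6))

if __name__ == "__main__":
    t0 = time.time()
    workload(None)                       # "speed": a single instance
    print("speed run:", time.time() - t0)

    t0 = time.time()
    with mp.Pool(mp.cpu_count()) as p:   # "rate": one instance per core
        p.map(workload, range(mp.cpu_count()))
    print("rate run:", time.time() - t0)
```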
Hifihedgehog - Saturday, October 6, 2018 - link
What about joules and watts? This is what is confusing me now. Something that seems a bit suspect to me is that the average power and total joule figures are not consistently lower together. That is, I would expect a device that has a lower average power usage over the duration of the test to draw fewer total joules of energy. Mathematically speaking, I would naturally expect this especially in the case of higher benchmark scores, since the lower power would be drawn over a shorter space of time, meaning less total energy.
Andrei Frumusanu - Saturday, October 6, 2018 - link
The amount of joules drawn depends on the average power and the performance (time) of the benchmark.
If a CPU uses 4 watts and gives 20 performance (say, 5 minutes to complete), it will use less energy than a CPU using 2 watts and giving 8 performance (12.5 minutes to complete), for example.
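The same arithmetic, spelled out with the numbers from the example:

```python
# Energy = average power x time
cpu_a_watts, cpu_a_seconds = 4.0, 5 * 60      # 5 minutes at 4 W
cpu_b_watts, cpu_b_seconds = 2.0, 12.5 * 60   # 12.5 minutes at 2 W

print(cpu_a_watts * cpu_a_seconds)  # 1200 J - faster, yet less total energy
print(cpu_b_watts * cpu_b_seconds)  # 1500 J - slower despite lower power
```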
Dredd67 - Saturday, October 6, 2018 - link
I live in France and have had to deal with Intel radios for a while. I'm totally baffled to read all the comments about poor reception, dropped calls, and WiFi going crazy, as if this is news.
This problem has been there since Apple began using Intel chips! It's just now that Americans have to use them too that it might finally be addressed (please keep complaining, raise awareness, file a class action, whatever it takes), but you don't realize the pain these past years have been using an iPhone elsewhere in the world...
For the record, I just gave my 7 (Intel chip) to my daughter, who broke her phone, and reverted back to an old 6 Plus I kept for such cases (it also uses Qualcomm). In the family I'm now the only one getting rock-solid WiFi, no dropped calls, and consistent signal strength, on a 4+ year old phone. And all this time I thought my carrier was messing around, my Orbis were crap, or I had just fumbled with my router settings.
Shame on you Apple, shame on you.
Oh, and don't get me started on the camera: is this really what people want? I think it's getting worse and worse, not better. Completely artificial images, looking more like cartoons than photographs. Same for Samsung and Google's phones. Have a look at the P20 Pro for a reality check. This is what photos should look like (contrast, colors - not really dependent on the sensor size).
Great review though.
ex2bot - Sunday, October 7, 2018 - link
The Halide app devs showed off an interesting technique they're using or working on for the XS, which they call Smart RAW, that may offer more control over the oversaturation / lower contrast issues, etc.
s.yu - Monday, October 8, 2018 - link
"Hav a look at the P20 pro for a reality check."?Your reality is the smeared textureless crap with oversharpening contours along object edges? Unbelievable.
serendip - Saturday, October 6, 2018 - link
Could these amazing CPU gains be translated to a midrange chip? Technically Android could run on Apple SoCs because they're all ARM-licensed cores, but porcines would gain flying abilities before that ever happened.
It's too bad Android users are stuck with the best that ARM and Qualcomm can do, which isn't much compared to Apple's semi design team.
eastcoast_pete - Sunday, October 7, 2018 - link
Apple's strength (supremacy) in the performance of their SoCs really lies in the fine-tuned match of apps and especially low-level software making good use of excellent hardware. What happens when that doesn't happen was outlined in detail by Andrei in his review of Samsung's Mongoose M3 - to use a famous line from a movie, an SoC that "could've been a contender", but really isn't. Apple's tight integration is the key factor that a more open ecosystem (Android) has a hard time matching; however, Google and (especially) Qualcomm leave a lot of possible performance improvements on the table through really poor collaboration; for example, GPU-assisted computing is AWOL on Android - not a smart move when you try to compete against Apple.
varase - Tuesday, October 23, 2018 - link
I have serious doubts that Android would even run on an A12 SoC - I thought Apple trashed ARMv7 when it went to A11.
Strafeb - Saturday, October 6, 2018 - link
It would be interesting to see a comparison of the screen efficiency of the iPhone XR's lower-res LCD screen, and also some of LG's pOLED screens like in the V40.
Alistair - Saturday, October 6, 2018 - link
The Xeon Platinum 8176 is a 28-core, $9000 Intel server CPU based on Skylake. In single-threaded performance, the iPhone XS outperforms it by 12 percent for integers, despite its lower clock speed. If the iPhone were to run at 3.8GHz, the Apple A12 would outperform Intel's CPU by 64 percent on average for integer tests.
iPhone XS and A12 numbers from: https://www.anandtech.com/show/13392/the-iphone-xs...
Xeon numbers from: https://www.anandtech.com/show/12694/assessing-cav...
spreadsheet: https://docs.google.com/spreadsheets/d/1ipKIh4i56o...
image of chart: https://i.imgur.com/IAupi9p.jpg
Think about that: the iPhone's CPU IPC (performance per clock) is already higher in integer performance now. Those tests include: spam filter, compression, compiling, vehicle scheduling, game AI, protein sequence analysis, chess, quantum simulation, video encoding, network simulation, pathfinding, and XML processing. The test takes hours to run.
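The clock normalization behind that kind of estimate, as a sketch; the clock figures are assumptions (the A12's big cores run around 2.5 GHz, and the 8176's single-core turbo is around 3.8 GHz), and the exact percentage shifts with whichever clocks and per-test averages you plug in:

```python
score_ratio = 1.12            # A12 ahead by 12% in integer, per the comment
a12_ghz, xeon_ghz = 2.5, 3.8  # assumed clocks

per_clock_ratio = score_ratio * (xeon_ghz / a12_ghz)
print(f"A12 integer performance per clock: ~{per_clock_ratio:.2f}x the Xeon")
```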
SanX - Saturday, October 6, 2018 - link
Yes, and while Apple and all other mobile processor manufacturers charge $5 per core, Intel charges $300.
yeeeeman - Saturday, October 6, 2018 - link
It might be faster in single thread, but in MT it gets toasted by the Xeon. The Xeon is $9000 for a few reasons:
- it is an enterprise chip;
- it supports ECC;
- it supports up to 8 CPUs on a board;
- it supports tons of RAM, a LOT of memory channels;
- it has almost 40MB of L3 cache, compared to 8MB in the A12;
- it has a ring bus architecture meaning all those cores have very low latency between them and to memory;
- it has CISC instructions, meaning that when you get out of basic phone apps and you start doing scientific/database/HPC stuff, you will see a lot of benefits and performance improvements from executing a single instruction for a specific operation, compared to the RISC nature of A12;
- it supports AVX512, needed for high performance computing. In this, the A12 would get smashed;
- and many more;
So the Xeon 8180 is still a mighty impressive chip, and Intel has invested some real thought and experience into making it. Things that Apple doesn't have.
I get it, it is nice to see Apple having a chip with this much compute power at such a low TDP, and it is due to the fact that x86 chips have a lot of extra stuff added in for legacy. But don't get carried away with this; what Apple is doing now from a uArch point of view is not new. Desktop chips had this stuff 15 years ago. The difference is that Apple works on the latest fabrication process and doesn't care about x86 legacy.
Alistair - Saturday, October 6, 2018 - link
"It might be faster in single thread, but in MT it gets toasted by the Xeon"That is totally irrelevant. Obviously Apple could easily make a chip with more cores. Just like Cavium's Thunder. 8 x A12 Vortex cores would beat an 8 core Xeon in integer calculations easily enough.
eastcoast_pete - Sunday, October 7, 2018 - link
Agree on your points re. the Xeon. However, I'd still like to see Apple launch CPUs/iGPUs based on their design, especially in the laptop space, where Intel still rules and charges premium prices. If nothing else, Apple getting into that game would fan the flames under Intel's chair that AMD is trying to kindle (it has started to work for desktop CPUs). In the end, we all benefit if Chipzilla either gets off its enormous bottom(line) and innovates more, or gets pushed to the side by superior tech. So, even as a non-Apple user: go Apple, go!
Constructor - Sunday, October 7, 2018 - link
CISC instructions generally don't really do much more than RISC ones do – they just have more addressing modes while RISC is almost always register-to-register with separate Load & Store.
That just doesn't make any difference any more because the bottleneck is not instruction fetching (as it once was in the old times) but execution unit pipeline congestion, including that of the Load & Store units.
There's already a scalable vector extension for ARM which Apple could adopt if that was actually a bottleneck. And even the existing vector units aren't anything to scoff at – the issue is more that Intel CPUs are forced to drop down to half their nominal clock once you actually use AVX512; it could actually be more efficient to optimize the regular vector units for full-speed operation to make up for it.
We actually have no clue what Apple is investing in behind closed doors until they slam it on the table as a finished product ready for sale!
tipoo - Thursday, October 18, 2018 - link
I'm hoping Apple takes the ARM switch as an opportunity to bring an ARM AVX-512 equivalent down to more products, like the iMac.
Constructor - Sunday, October 7, 2018 - link
Just to add, ARM RISC instructions actually do more than Intel CISC instructions in one respect: ARM is a 3-address machine (dst = src1 + src2) while Intel is a 2-address machine (dst += src). And, of course, due to the much larger logical register file there is much less fiddling with the stack for local variable storage.varase - Tuesday, October 23, 2018 - link
You sound a lot like some of my fellow IBM mainframers talking trash about those limited capacity Wintel servers :-).
ABR - Monday, October 8, 2018 - link
Thanks. This kind of info really should have been in the article, rather than just the unsubstantiated claim, "we’re just margins off the best desktop CPUs".Speedfriend - Tuesday, October 9, 2018 - link
Those AnandTech results for the Xeon are well below the results on the SPEC2006 website for a number of tests. The results for the 462 test on the website seem strange for all the Xeons as a multiple of the A12, but even excluding this, the Xeon 8176 has an 18% performance advantage, rather than the other way round.
A Xeon 5120 has the same performance as the A12.
id4andrei - Saturday, October 6, 2018 - link
What is the state of Vulkan support on various Android handsets? I'm asking because GFXBench makes use of a Vulkan/Metal workload. Apple is all in on Metal, but last time I read, Vulkan is not fully supported on all Android handsets. In this aspect, isn't the comparison of GPU numbers lopsided, as the API is not equal? Don't some of the smartphones revert to OpenGL?
Andrei Frumusanu - Saturday, October 6, 2018 - link
The test is Vulkan/Metal only, there's no fallback to GL. Obviously all devices listed support it.
Wardrive86 - Saturday, October 6, 2018 - link
"The performance measurement was run in a synthetic environment (read: bench fan cooling the phones)"
Didn't this originally say refrigerator?
Andrei Frumusanu - Saturday, October 6, 2018 - link
That was wrongly edited in, it's a fan.
Wardrive86 - Saturday, October 6, 2018 - link
Ok thank you, I started to question myself there. BTW great review, I love the quality and depth of your reviews, keep up the fantastic work!hlovatt - Saturday, October 6, 2018 - link
Thanks Andrei. Wonderful review, and great to see in-depth phone reviews again. Also, you must have worked extremely hard to get such a comprehensive and in-depth review out so quickly. Thanks again.
iwod - Saturday, October 6, 2018 - link
> 480-500 mW when on a black screen

Am I correct in understanding that this figure is base "system" power, and not the display? Part of the difference from the iPhone 8 could be the extra memory, where the X had 3GB and the XS has 4GB.
Andrei Frumusanu - Saturday, October 6, 2018 - link
It's the power of the whole phone, meaning idle SoC, power delivery, and the screen and its controllers at an equivalent 0 nits brightness. Out of that figure some part will be the display, but we can't really separate it without tearing down the phone.
The base power for the X and XS is near identical - the big difference is to the LCD iPhones.
iwod - Saturday, October 6, 2018 - link
Ah, that makes things clear. But why would OLED consume more power than LCD when it is completely off? This goes against the common wisdom that OLEDs are supposed to be extremely efficient on an all-black display.
Glaurung - Sunday, October 7, 2018 - link
"But why would OLED consume more power compared to LCD when it is completely off."This is addressed obliquely in the article - because the OLED display has high colour depth and extremely high DPI, it requires extra power to control all the levels of brightness for each and every one of those pixels.
eastcoast_pete - Sunday, October 7, 2018 - link
A few years ago, I loaded a "black background" app on my trusty old Samsung S3. A black background always made a lot of sense to me for OLED-type screens. I might have missed it, but does Apple provide a black background setting in its settings for the XS and XS Max? Also, I haven't seen much discussion about possible burn-in for OLED screens here or in other reviews of phones with OLED screens. Is this now a non-issue, maybe as many replace their phones before burn-in shows?
ex2bot - Sunday, October 7, 2018 - link
No dark mode yet, except for an incomplete one in the accessibility settings. You end up with some graphics turned into negative images. Hopefully, we'll get a proper dark mode in iOS 13. The Mac finally got one.
If you activate Voice Over in Accessibility settings, you can triple-tap the display with three fingers to activate the „screen courtroom“, which makes the whole screen dark. I believe this turned off the backlight on LCD iPhones. Maybe it deactivates more than just displaying a black screen, and can help lower base power usage.
Of course Voice Over being on may affect other behavior/benchmarks.
fzwo - Sunday, October 7, 2018 - link
Sorry, that should read „screen curtain“. Autocorrect is one area that seems to have gotten worse for me with iOS 12 😅
eastcoast_pete - Monday, October 8, 2018 - link
Thanks for the information (I don't own an iPhone). The dark/black background I mean (and used to use on my S3 and later on a BlackBerry Priv) didn't turn off the entire screen, so the icons were still bright, and appeared even brighter against the black background. I really wonder how much power one can save on phones like the iPhone X, XS and XS Max by using an unlit background. In theory, every lit pixel adds a little power drain, and every little LED not lit up should save that.
AntonErtl - Saturday, October 6, 2018 - link
For me the most interesting part of the review is the SPECint results. Looking at the official SPEC report of the system with the highest base result (Cisco UCS B200 M5, Intel Xeon Gold 6146 at 3.20 GHz, 4.2 GHz turbo), the A12 is really close; e.g., 44.56 (A12) vs. 48.3 (Xeon) for 403.gcc. An exception is 456.hmmer, but there Cisco (and many others) are using some compiler trick that you apparently are not using.
And the A12 is using quite a bit less power for these results than the Xeon. Very impressive of Apple. Maybe they should go into the server CPU business and produce the ARM-based server CPU that others have aimed for these last years.
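In ratio terms, using the 403.gcc scores quoted above:

```python
a12, xeon = 44.56, 48.3
print(f"A12 at {a12 / xeon:.0%} of the Xeon Gold 6146 on 403.gcc")  # ~92%
```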
zepi - Saturday, October 6, 2018 - link
Otherwise a nice idea, but the datacenter CPU market is too small to be interesting for Apple, as crazy as that is.
Intel makes about $5B/quarter selling Xeons and other datacenter stuff.
Apple makes some $50B a quarter. I don't think they can waste chip-development resources to design something for such a small "niche".
tipoo - Thursday, October 18, 2018 - link
Well, it would largely reuse the R&D they already do for iOS chips; making the high-performance cores is the hardest part, and scaling them up to more cores would be a fraction of the work.
varase - Tuesday, October 23, 2018 - link
The Enterprise server business is already a crowded field, and it's not really something Apple has any expertise with.
In Apple terms, it's not like there's a huge profit potential there, even if they were successful.
Why put all that effort into learning, when most of their income comes from a portable consumer device they first released in 2007?
iwod - Saturday, October 6, 2018 - link
What is the other die area used for? The labels only cover ~half of the die. I could add image signal processing, and video encode and decode if that is not included in the GPU. We also know Apple has included some FPGA in their SoC. But with all that accounted for, that is likely less than 25% of the die space. What about the other 25%?
Hardware accelerators for anything and everything that can be hardware accelerated.
Plus the "secure enclave" is also on there somewhere - a fenced off, cut down SoC within the SoC for handling logins/unlocking and other security stuff.
Antony Newman - Sunday, October 7, 2018 - link
Andrei - This is an awesome review. Do you think Apple could roll out a low-end laptop with 6 Vortex cores - or are there SoC design areas that Apple still needs to address?
AJ
Constructor - Sunday, October 7, 2018 - link
I'm not Andrei, but my speculation on this would be:• It would make no sense to start with the weakest Macs because that would put the transition to Apple's own CPUs in a bad light from the start. As in the Intel transition 12 years ago they would need to start with the middle of their lineup (with iMacs and MacBook Pros) in order to demonstrate the strength of the new CPU platform and to motivate software developers to jump on board, including actually working on the new machines full time if possible.
• They would need to have an emulation infrastructure for Intel legacy code in place like they did with Rosetta back then (also for Windows/Linux VMs!). And even in emulation that legacy code cannot be much slower than natively on then-current Intel machines, so their own CPUs already need to be a good bit faster than the corresponding Intel ones at the time in order to compensate for most of the emulation cost.
• As in 2006, this would have a significant impact on macOS so at announcement they would need to push at least developer versions of the new macOS to developers. Back in 2006 they had Intel-based developer systems ready before the actual Intel Macs came out – this time they could actually provide a macOS developer version for the then top-of-the-line iPads until the first ARM-based Macs were available (which already support Bluetooth keyboards now and could then just support Bluetooth mice and trackpads as well). But this also means that as back then, they would need to announce the transition at WWDC to explain it all and to get the developers into the boat.
• Of course Apple would need to build desktop/notebook capable versions of their CPUs with all the necessary infrastructure (PCIe, multiple USB, Thunderbolt) but on the other hand they'd have more power and active cooling to work with, so they could go to more big cores and to higher clock speeds.
Again: This is sheer speculation, but the signs are accumulating that something like this may indeed be in the cards, with Intel stagnating and Apple still plowing ahead.
I just don't think that it would be practical to put the current level of Apple CPUs into a Mac just like that even though from sheer CPU performance it looks feasible. These transitions have always been a massive undertaking and can't just be shot from the hip, even though the nominal performance seems almost there right now.
Constructor - Sunday, October 7, 2018 - link
Oops – this forum insists on putting italics into separate lines. Oh well.
ex2bot - Sunday, October 7, 2018 - link
Not to mention they’d have to maintain two processor architectures for an extended period. By that, I mean I doubt they’d transition high-end Macs for a long, long time, to avoid angering pros... again.
serendip - Monday, October 8, 2018 - link
A real left-field move would be for Apple to release a macOS tablet running ARM, like a Qualcomm Windows tablet. I wouldn't rule it out, considering how Apple went from a single product each for the iPhone and iPad to making multiple sizes.
Constructor - Monday, October 8, 2018 - link
I don't see that happening at all because Apple has explicitly maintained a clear distinction between Macs and iOS exactly along the lines of different interaction paradigms (point-based vs. touch).Windows with touch continues to be a mess and I don't see Apple following Microsoft into that dead end.
Constructor - Monday, October 8, 2018 - link
If they'd have a replacement offering noticeably higher performance than any Intel Mac Pro and if legacy software at least ran decently until new ARM recompiles were available, I don't think most users would mind all that much.Going from PowerMacs to Mac Pros was also not entirely painless, but most users still thought it was worth it overall.
id4andrei - Sunday, October 7, 2018 - link
Andrei, I remember you mentioning in the comment section of an article - maybe the S9 review - that the A11 cannot maintain its frequency and drops by as much as 40%, while the Snapdragon drops only 10% on sustained workloads.
You did your testing on a bench fan. You tested the potential of the A12, and it is incredible, but not the real-life performance of the iPhone. When used for prolonged sessions, the A12 might reach its thermal threshold faster than the Snapdragon and drop performance. What are your musings on this? Throttling matters and identifying it is very important, especially considering Apple's recent history. The CPU is great, but is that top performance sustainable, and for how long?
Andrei Frumusanu - Sunday, October 7, 2018 - link
What you mention is in regard to the GPU performance; it's addressed in that section of this piece.
And of course it's the real performance of a phone. The duration of a benchmark doesn't change the fact that the CPU is capable of that throughput. Real-world workloads are transactional and last a few seconds at best in the majority of use-cases. In such scenarios, the performance is well exhibited.
id4andrei - Sunday, October 7, 2018 - link
That makes perfect sense. No one does folding on smartphones. Thanks for the prompt reply.eastcoast_pete - Sunday, October 7, 2018 - link
Also, folding your smartphone is really hard, and doesn't end well for the phone (:
FunBunny2 - Sunday, October 7, 2018 - link
"folding your smartphone is really hard, and doesn't end well for the phone"I don't recall (too lazy to confirm :) ) which company, but a patent was awarded a couple or so years ago for a flexible display, such that it would (according to the drawing I saw) make a cylindrical bend the hinge when closed. still hasn't appeared, so far as I know. let's see... looks like Samsung and LG have some, and more recently than when I first saw..
here: https://www.androidauthority.com/lg-foldable-phone...
eastcoast_pete - Monday, October 8, 2018 - link
Yes, that comment was in jest. I believe Samsung, LG and Huawei all have folding smartphones with folding screens under development. If those work out and aren't too pricey, I'd be interested. Nice to be able to fold a phone with a 7-inch display to a little more than half its full size.
varase - Tuesday, October 23, 2018 - link
While this would probably be neat to see in the short run, I can't imagine that would yield a long lasting display over the long haul.Javert89 - Sunday, October 7, 2018 - link
Hi, do you think the current power draw of the CPU (in watts) is sustainable for the battery, especially in the long term? In this review you cite a case where a GPU benchmark made the phone crash because the power required was too high. Any chance of seeing this behavior in a real-life scenario? Moreover, do you think that the power draw is sustainable in a smartphone envelope? Or do other aspects like overall power consumption or leakage count?
Andrei Frumusanu - Sunday, October 7, 2018 - link
Smartphones can sustain 3-3.5W. There's a window of several minutes where you can maintain peak. However, because CPU workloads are bursty, it's rare that you would hit thermal issues in usual scenarios. Sustained scenarios are games, but there the GPU is the bottleneck in terms of power.
Batteries easily sustain a 1C discharge rate; for the XS models that's 2.6 and 3.1A, or around 10 to 12W. Temperature becomes a natural limitation before the battery discharge rate becomes any concern.
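For reference, the 1C arithmetic in a short sketch; the capacities are the published teardown figures, and the ~3.8 V nominal cell voltage is an assumption:

```python
# 1C discharge means drawing the battery's full capacity in one hour.
batteries_ah = {"XS": 2.658, "XS Max": 3.174}
nominal_volts = 3.8  # assumed nominal cell voltage

for name, capacity_ah in batteries_ah.items():
    amps_at_1c = capacity_ah          # amps at 1C = capacity in Ah
    watts = amps_at_1c * nominal_volts
    print(f"{name}: {amps_at_1c:.1f} A -> ~{watts:.0f} W")
# XS: 2.7 A -> ~10 W
# XS Max: 3.2 A -> ~12 W
```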
Javert89 - Sunday, October 7, 2018 - link
Thanks for the reply. My concern about the battery was long-term: in the past there was the 'throttling fiasco', in which the performance of the device was cut down because batteries seemingly could not deliver the power requested. If the power draw has increased on the A12, I wonder whether the problem will get worse.
Constructor - Sunday, October 7, 2018 - link
The whole panic about the iPhone 6 on old, softening batteries was basically just this, but after one relatively crude attempt they got the power management optimized to the point that I had very little if any slowdown even on my 4-year-old battery.
The crash during the extreme load test on the XS will of course need to get fixed with another power management update, just that it needs to apply even to factory-fresh batteries already.
But when the battery ages and its current delivery capability erodes over time it will only become more and more necessary for power management to deal with such spikes, possibly by ramping up the clock less aggressively.
iAPX - Sunday, October 7, 2018 - link
This test is a masterpiece. Kudos!
TheSparda - Sunday, October 7, 2018 - link
I am interested to know more about the recent issues with poor WiFi & LTE speeds and signal strength. A smartphone having a great CPU/GPU is good and all, but some of the basics such as WiFi & LTE should also be up to snuff if Apple wants the phone to be a total package. I feel that the WiFi & LTE issues were overlooked when they designed the phone.
"I feel that the wifi & LTE issues are overlooked when they designed the phone."see my earlier comment, to wit: if you want to make a clear phone call, get a landline.
varase - Tuesday, October 23, 2018 - link
It already seems fixed in 12.1 beta 3, and may have been fixed in 12.0.1 but I don't have that installed.lucam - Sunday, October 7, 2018 - link
I was sure that the GPUs used on the A11 and A12 are basically Imagination Tech solutions and that Apple can't design a completely new GPU solution from scratch.
I am wondering whether Apple will use the future Furian solution in future chips; if they won't, as they no longer partner, they will regret it.
The question is what the current agreement with Imagination is, as they are still using PowerVR solutions.
tipoo - Wednesday, October 24, 2018 - link
Even with the A10, their GPU was 2/3rds custom; their arrangement with Imagination seems similar to their ARM licence. Calling it custom as of the A11, even though not that much seemed to change (but for grouping more ALUs into a core), seems like a formality.
https://www.realworldtech.com/apple-custom-gpu/
Calin - Monday, October 8, 2018 - link
You could do some tests in "Airplane" mode to eliminate the WiFi and telephony stack power use. I'm not sure there's a way to eliminate the telephony stack but still keep the WiFi active.
Constructor - Monday, October 8, 2018 - link
Sure you can – just go to airplane mode and re-enable WiFi.s.yu - Monday, October 8, 2018 - link
In the third and fifth low-light samples I think it's pretty clear that the XS beat the P20P; again, the P20P's retention of texture is horrible, and only in extreme low light (second and last samples) and scenes with many man-made objects which lack obvious texture (third sample) does its excessive sharpening and NR have an advantage. The first sample is a surprise; there, either the scene is actually darker than I'm led to believe or there's something else I've not taken into account.
Isn't it too bold an assertion that the A12 is on par with current desktop CPUs? If so, why don't we stack them to get 5x the processing power of a common laptop/workstation at half the power? What am I missing here? Because this doesn't make much sense to me.
If I have 2 or 3 of these SoCs with Photoshop running on them, will my filters run faster? Will Chrome load webpages faster? Will creating zip archives or converting FLAC files be 3x faster than on an Intel i7?
resiroth - Monday, October 8, 2018 - link
There is widespread speculation (and at this point, it is considered more likely than not) that Apple will transition to their own chips exclusively for their Mac products in the near future (within 2 years).
What exactly doesn't make sense to you?
Ps: will be very interesting to see results of new iPad Pro this year too.
daiquiri - Monday, October 8, 2018 - link
What is strange to me is why we aren't already stacking these SoCs. I suppose I could fit 6 or 8 of these processors in the same die size. This means I would have a much faster PC for less power? If they are that great, why aren't they on desktops already?
Does this mean that if I manage to install Windows 10 ARM edition on my PC, Photoshop will run faster and Chrome will load pages two or three times faster, since I have 8 of these SoCs in my desktop PC?
Constructor - Wednesday, October 10, 2018 - link
Because just increasing the core count is not quite enough.
As I have explained above, a platform transition is a massive undertaking which among many other things needs actually superior performance to afford the initially still necessary legacy code emulation.
This will almost certainly start at a WWDC (next year or the year after that, most likely) with an announcement about not just coming hardware but also about those new mechanisms in macOS itself, and actual Axx-powered Macs may take months after that to emerge while developers start porting their code to the new platform.
It's not as if this was the first time for Apple – they've done it twice already!
varase - Tuesday, October 23, 2018 - link
Yeah, but they always went to a *much* faster CPU to do the emulation.
Constructor - Wednesday, October 24, 2018 - link
Even just the iPhone cores right now are already at about a desktop i5 level according to GeekBench.There should be quite some room for upscaling with more power and better cooling.
tipoo - Wednesday, October 24, 2018 - link
These cores are 30% larger than Intel's; let that sink in.
I'm sure 8 of them would perform marvellously, for the cost. And it may be coming.
Zoolook - Tuesday, October 9, 2018 - link
I just pulled an old test of an i3-6320, a 3-year-old dual-core, and the A12 does well on some of the tests, but in many tests it's quite a bit behind, so it's not ready for server duty yet.
I know that the i3 has higher frequencies, but it's also only two cores, compared to 2+4.
There is no question that the big cores are very efficient, but they are optimized for their current job.
If Apple were to build a 6- or 8-core processor based on their large cores, throw away most of the rest of the SoC, and add other parts (stronger SIMD etc.), then yes, we might have a good desktop/server chip, but the A12 is not that chip.
Also remember that we are comparing 14nm Intel chips with the newest "7nm" process; if Intel ever gets their 10nm process up and running for real, then we could do a better comparison.
Constructor - Wednesday, October 10, 2018 - link
Multicore A12 performance seems mostly limited by the passive cooling in a handheld device. That is where the much higher power availability and active cooling in a notebook or desktop makes the biggest difference.
It's single-core performance where you see most of the actual core performance. By allowing for higher power consumption and using active cooling, Apple should be able to scale up multicore performance relatively easily (and some of the iPads, with additional CPU cores, notebook-sized batteries and at least improved passive cooling, have already demonstrated that).
zeeBomb - Monday, October 8, 2018 - link
Andrei came thru... thank you!!!
zeeBomb - Monday, October 8, 2018 - link
Does anyone here still use a separate camera app for night-time photos instead of the stock one? Like NightCap Pro, etc.
Andrei, nice analysis! Can you write something about the new storage controller in the A12?
strajk - Tuesday, October 9, 2018 - link
-"Apple’s CPU have gotten so performant now, that we’re just margins off the best desktop CPUs"That sentence alone discredits your whole article, this has to be one of the most stupid things I've ever read in a review the past years.
A mobile ARM CPU isn't even faster than a Pentium 4 in pure IPC, and they use completely different instruction sets...
That statement was so moronic that it forced me to create an account just to call you out on this.
Andrei Frumusanu - Tuesday, October 9, 2018 - link
> this has to be one of the most stupid things I've ever read in a review in the past years.

Did this cause you to write something even more stupid in the following sentence?
> A mobile ARM CPU isn't even faster than a Pentium 4 in pure IPC
The P4's IPC was overtaken by mobile devices maybe half a decade ago. That's such a ridiculous claim.
> and they perform in completely different instruction sets...
So what? How is that relevant? The same high language workloads are compiled for the respective ISAs. Please do explain how that is not comparable.
Boxador - Wednesday, October 10, 2018 - link
Andrei, keep kicking ass. This review and your comment responses are fire.
tipoo - Thursday, October 18, 2018 - link
People who deny that ARM designs, especially from Apple, have closed in on x86 performance, and in Apple's case often beaten it, are starting to remind me of flat earthers.
Silma - Friday, October 12, 2018 - link
Regarding processor power: apart from gamers, is the increase in processing power perceptible to the user? For which applications is it noticeable?
I have a 2.5-year-old phone with a Snapdragon 810 and its performance still suits me just fine. In a future purchase, I would mostly look for improvements in battery life.
tipoo - Wednesday, October 24, 2018 - link
I thought a Core 2 Duo felt fine until I got a Haswell system; I suspect it would be similar for you going to this. The improvement just in web page loading speed alone would be significant.
"we see four new smaller efficiency cores named “Mistral”. The new small cores bring some performance improvements, but it’s mostly in terms on power and power efficiency where we see Tempest make some bigger leaps"So, is it Tempest or Mistral? Or both?
Ryan Smith - Tuesday, October 23, 2018 - link
It's Tempest. Thanks for the heads up!peevee - Monday, October 15, 2018 - link
"upgrade in sensor size from an area of 32.8mm² to 40.6mm²"These are not sensor sizes, these are total image chip sizes.
Sensor (as in "sensor", the part which actually "senses" light) sizes are not hard to calculate, and are MUCH smaller.
12MP is approx 4000x3000 pixels.
The old sensor had a 1.22 µm pixel pitch: 1.22 µm × 4000 = 4.88 mm, and 1.22 µm × 3000 = 3.66 mm.
So the old sensor was 4.88 × 3.66 mm = 17.9 mm².
The new sensor is 5.6 mm × 4.2 mm = 23.5 mm².
This is in comparison to
- typical cheap P&S camera sensor size (so-called '1/2.3" type') of 6mm x 4.5mm = 27mm²
- high-end P&S camera sensor, (1" type) of 13.2mm x 8.8mm = 116mm²
- Four Thirds camera sensor size of 17.2 x 13mm = 225mm²
- Modern pro camera sensor size of about 36x24mm = 864mm².
Please do not confuse your readers by calling total image chip sizes as "sensor size".
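The arithmetic above as a quick sketch (pixel counts and pitches as stated in the comment):

```python
def sensor_dims_mm(h_px, v_px, pitch_um):
    # active-area width/height from pixel count and pixel pitch
    return h_px * pitch_um / 1000, v_px * pitch_um / 1000

for label, pitch_um in (("old sensor (1.22 um)", 1.22),
                        ("new sensor (1.4 um)", 1.4)):
    w, h = sensor_dims_mm(4000, 3000, pitch_um)
    print(f"{label}: {w:.2f} x {h:.2f} mm = {w * h:.1f} mm^2")
# old sensor (1.22 um): 4.88 x 3.66 mm = 17.9 mm^2
# new sensor (1.4 um): 5.60 x 4.20 mm = 23.5 mm^2
```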
peevee - Monday, October 15, 2018 - link
"The performance measurement was run in a synthetic environment (read: bench fan cooling the phones) where we assured thermals wouldn’t be an issue for the 1-2 hours it takes to complete a full suite run."Which makes the whole thing useless. Of course wider (read hotter and less efficient due to higher overhead of often-useless blocks) will run faster in this environment, unlike in user hands (literally, ~36C/97F plus blanketing effect).
Andrei Frumusanu - Monday, October 22, 2018 - link
It changes absolutely nothing. It will still reach that performance even in your hands. The duration of a workload is not orthogonal to its complexity.
viczy - Sunday, October 21, 2018 - link
Fantastic and in-depth work! Thanks for the data and analysis. I would like to know a little more about your method for energy and power measurement. Thanks!
techbug - Friday, November 2, 2018 - link
Thanks a lot Andrei.
The L2 cache latency is 8.8 ns and the core clock speed is 2.5 GHz, so each cycle is around 0.4 ns and the L2 latency is 8.8 ns / 0.4 ns = 22 cycles. This is much longer than Skylake, which is around 12 cycles (taking the 4.0 GHz i7-6700 Skylake at https://www.7-cpu.com/cpu/Skylake.html as an example, that equals 3 ns of L2 cache latency).
So L2 latency is 8.8 ns versus 3 ns in Skylake. Is this comparison correct?
I cannot tell the precise L1 latency from the graph "Much improved memory latency". Can you give the number?
According to Figure 3 in https://www.spec.org/cpu2006/publications/SIGARCH-... the working set size of 80% of SPEC2K6 workloads is larger than 8MB, so the A12's L2 cache (8MB) won't hold the working set, compared with a Skylake configuration with 32MB of L3 cache.
So overall the memory hierarchy of the A12 seems not comparable to Skylake's. What else helps it deliver comparable SPEC2K6 performance?
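The ns-to-cycles conversion used above, as a quick sketch:

```python
def ns_to_cycles(latency_ns, clock_ghz):
    return latency_ns * clock_ghz  # cycles = latency x frequency

print(ns_to_cycles(8.8, 2.5))   # A12 L2: 22.0 cycles at 2.5 GHz
print(12 / 4.0)                 # i7-6700: 12 cycles at 4.0 GHz -> 3.0 ns
```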
demol3 - Wednesday, December 5, 2018 - link
Will there be a comparison between the XS series and the XR, or an XR review, anytime soon?
tfouto - Thursday, December 27, 2018 - link
Does the XS have a true 10-bit panel, or does it use frame rate control (FRC)?
What about the iPhone X?
Latiosxy - Wednesday, January 23, 2019 - link
Hello. I just wanted to criticize the way this site works. It’s hard to read while listening to music when your intrusive ads follow my screen and interrupt my audio consistently. Please fix this as this has been really annoying. Thanks.
alexdi - Tuesday, February 5, 2019 - link
As a tech geek, I agree with your analysis. I have an X because the XR looks like a cheap design and I appreciate AMOLED. I think this is a minority sentiment, though. The XR is a stellar handset, perhaps the best available for most people. Between the battery life, SoC, and Apple's generous iOS update policy, the longevity will be extraordinary. I expect to see "still happily rolling along" comments even into 2025.Anirudh2FL - Wednesday, June 26, 2019 - link
Is it just me, or are iPhones fast becoming one of the worst flagships for taking pics in low light/indoor light? Are the pics as bad as they appear?
Forgun - Tuesday, June 23, 2020 - link
Thanks for the useful information.