Hmmm, as no NVDA investors have really made a strong defense of Tegra 3 here, I will do so for the sake of balance. First, Windows does not use the fifth companion core. The 40G process is also the performance variant, not the low-power one, and it is a bulk-silicon process, not HKMG. A lot of the performance difference is the 400-500 MHz clock deficit against Clover Trail (1.3/1.4 GHz vs. 1.8 GHz). Tegra 3 has also not peaked yet: it is worth shrinking this design all the way to 20nm, as you could get the A9s up to 2 GHz at lower power than the 40nm original, with the die size under 30 mm², which would make it very cheap to build profitably, about $5. The Tegra 3 story has only just begun.
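The die-shrink economics above can be sanity-checked with back-of-envelope math. Every number in this sketch is an illustrative assumption (the ~80 mm² 40nm die size, ideal 4x area scaling, a hypothetical $5,000 processed-wafer cost, 85% yield), not NVIDIA or TSMC data:

```python
import math

# Rough check of the "20nm shrink, <30 mm^2, ~$5" claim above.
# All inputs are assumptions for illustration only.
die_area_40nm = 80.0            # mm^2, approximate Tegra 3 die size (assumed)
shrink_factor = (40 / 20) ** 2  # ideal linear-shrink area scaling = 4x
die_area_20nm = die_area_40nm / shrink_factor  # ~20 mm^2, under 30 mm^2

wafer_cost = 5000.0   # assumed cost of one processed 300mm wafer
wafer_diameter = 300  # mm
yield_rate = 0.85     # assumed

# Simple gross-die estimate (ignores edge loss and scribe lines).
wafer_area = math.pi * (wafer_diameter / 2) ** 2
gross_dies = wafer_area / die_area_20nm
cost_per_good_die = wafer_cost / (gross_dies * yield_rate)

print(f"20nm die area: {die_area_20nm:.0f} mm^2")
print(f"cost per good die: ${cost_per_good_die:.2f}")
```

Even with generous allowances for edge loss, packaging, and test, a sub-30 mm² die lands in the low single-digit dollar range, which is at least in the ballpark of the ~$5 figure claimed above.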
So in summary, the nvidia tegra program is in shambles.
■ nvidia is trying to sell a 5 core chip to a customer that only wants 4 cores.
■ This is a dunder-headed business proposition for both nvidia and Microsoft.
■ So Microsoft will likely soon kick the tegra #3 to the curb.
■ The tegra #3 is being made on yesterday's obsolete 40 nanometer process.
■ Shrinking the tegra to 20 nm would be great, except there is nobody on the
planet that can do it -- except for competitor Intel.
■ Intel has no interest in making chips for competitor nvidia.
■ The tegra chip runs too slow at 1.3 GHz, versus 1.8 to 2.0 GHz for the competition.
■ nvidia does not own a fab like Samsung's that can do fab fills at cost.
■ Good luck with that.
■ Yes, nvidia needs a brighter paid pumper crew that can play some defense.
■ Like nvidia, their paid pumper crew is also in shambles.
■ nvidia does not have enough IP for integrated solutions.
■ Heck, they can't even take ARM Holdings' mail-order blueprints and make a decent, competitive ARM chip.
■ Finally, what little Intellectual Property nvidia did have, Jen sold it all to Intel.
■ Intel is now using nvidia's own IP to bring the El Kabong to nvidia's business lines.
Did I mention?
nvidia, its tegra program, and its paid pumper crew are all in shambles.
nvda does not have enough IP to integrate T3 into anything other than the highest-end part, and T3 is no longer the highest-end part. nvda also does not have a fab like Samsung's that can do fab fills at cost. Hence T3 is EOL.
The best example of the former is Samsung, which, while having its own Exynos, is introducing a handful of feature phones using integrated solutions from BRCM.
You, like pretty much everyone here, have forgotten that the mobile segment is segmented, and that the #1 and #2 players in mobile will never use anything other than their own processors. Atoms, like Tegras, will never go into any high-end AAPL or Samsung products. I will also say the same about Intel as about NVDA: Intel does not have enough IP for integrated solutions, which is really the bulk of the mobile market, not the iPads or Nexus 7s that everyone talks about all the time.
"The low max brightness makes the W510 not ideal for use outdoors"
With most of the power going to the screen, how much do these tests have to do with power used by the SoC vs. the screen? Less brightness translates to lower power.
"The Tegra 3 has also not peaked yet, it is worth shrinking this design all the way to 20nm as you could get the A9s up to 2 GHz at lower power"
Grey is still supposed to use A9s...
The Genuine Intel 32nm Atom/Clover Trail clearly trounces nvidia's best 40nm tegra 3.
And Trounces it Badly.
Let's take a look at a few excerpts from the AnandTech benchmarks, shall we?
"To kick off what is bound to be an exciting year, Intel made a couple of stops around the country showing off that even its existing architectures are quite power efficient.
Intel carried around a pair of Windows tablets, wired up to measure power consumption at both the device and component level, to demonstrate what many of you will find obvious at this point: that Intel's 32nm Clover Trail is more power efficient than NVIDIA's Tegra 3.
We've demonstrated this in our battery life tests already. Samsung's ATIV Smart PC uses an Atom Z2760 and features a 30Wh battery with an 11.6-inch 1366x768 display. Microsoft's Surface RT uses NVIDIA's Tegra 3 powered by a 31Wh battery with a 10.6-inch, 1366x768 display. In our 2013 wireless web browsing battery life test we showed Samsung with a 17% battery life advantage, despite the 3% smaller battery.
For us, the power advantage made a lot of sense. We've already proven that Intel's Atom core is faster than ARM's Cortex A9 (even four of them under Windows RT). Combine that with the fact that NVIDIA's Tegra 3 features four Cortex A9s on TSMC's 40nm G process and you get a recipe for worse battery life, all else being equal.
First up is total platform power consumption:
[The Tegra 3 powered] Surface RT has higher idle power, around 28% on average, compared to Acer's [Intel Atom/Clover Trail powered] W510.
Here the Atom Z2760 cores average 36.4mW at idle compared to 70.2mW for Tegra 3.
The power savings are around 47.8mW (average) for the W510 in airplane mode when fully idle.
Advantages in idle power consumption are key to delivering good battery life overall.
Peak power consumption for the entire [Intel Atom/Clover Trail powered] tablet tops out at just over 5W compared to 8W for [the nvidia Tegra 3] Surface RT.
The difference in average CPU power consumption is significant.
Tegra 3 pulls around 1.29W on average compared to 0.48W for Atom. Atom also finishes the boot process quicker, which helps it get to sleep quicker and also contributes to improved power consumption.
GPU power is a big contributor as well with Tegra 3 averaging 0.80W and Atom pulling down 0.22W.
Now the fun stuff.
We already know that Intel completes SunSpider quicker, thanks to its improved memory subsystem over the Cortex A9, but it also does so with much better average power (3.70W vs. 4.77W). A big part of the average power savings comes courtesy of what happens at the very tail end of this graph where the W510 is able to race to sleep quicker, and thus saves a good deal of power.
NVIDIA's GPU power consumption is more than double the PowerVR SGX 545's here, while its performance advantage isn't anywhere near double. I have heard that Imagination has been building the most power efficient GPUs on the market for quite a while now, this might be the first argument in favor of that hearsay.
Ultimately I don't know that this data really changes what we already knew about Clover Trail: it is a more power efficient platform than NVIDIA's Tegra 3.
Across the board Intel manages a huge advantage over NVIDIA's Tegra 3. Again, this shouldn't be a surprise. Intel's 32nm SoC process offers a big advantage over TSMC's 40nm G used for NVIDIA's Cortex A9 cores (the rest of the SoC is built on LP, the whole chip uses TSMC's 40nm LPG), and there are also the architectural advantages that Atom offers over ARM's Cortex A9.
Keeping in mind that this isn't Intel's best foot forward either, the coming years ahead should provide for some entertaining competition. In less than a year Intel will be shipping its first 22nm Atom in tablets.
Clover Trail has the CPU performance I want from a tablet today, but I want Apple, Google or Microsoft to use it.
Now Intel just needs an iPad and a Nexus win."
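The numbers in the excerpt above hang together, which a quick back-of-envelope check shows (the figures are taken directly from the quoted AnandTech text; the arithmetic is mine):

```python
# Cross-check of the battery-life numbers quoted above:
# Samsung ATIV Smart PC (Atom Z2760, 30Wh) vs. Surface RT (Tegra 3, 31Wh),
# with a 17% runtime advantage for the Atom tablet in the web-browsing test.
ativ_battery_wh = 30.0
surface_battery_wh = 31.0
runtime_ratio = 1.17  # ATIV ran 17% longer

# Average platform power = energy / runtime, so the implied power ratio is:
power_ratio = (ativ_battery_wh / surface_battery_wh) / runtime_ratio
print(f"ATIV draws ~{(1 - power_ratio) * 100:.0f}% less average platform power")

# The quoted per-component averages point the same way:
atom_cpu_w, tegra_cpu_w = 0.48, 1.29
atom_gpu_w, tegra_gpu_w = 0.22, 0.80
cpu_gpu_delta = (tegra_cpu_w + tegra_gpu_w) - (atom_cpu_w + atom_gpu_w)
print(f"CPU+GPU delta: {cpu_gpu_delta:.2f} W in Atom's favor")
```

So a ~17% runtime edge on a 3% smaller battery implies roughly 17% lower average platform power, and the quoted CPU+GPU averages alone account for about 1.4 W of that gap.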
Pretty much confirms what OEMs and Wall Street already knew, right?
It sure does.
That nvidia parts suck juice like an Elephant.
The fact that Clover Trail beats Tegra 3 by 30-50% in power consumption suggests that even at 28nm, Tegra 4 won't have a 50% power advantage over the 40nm Tegra 3.
Tegra 3 is a power hog, and Tegra 4 will be even more so, because of the A15 cores and the 72-core GPU. 28nm doesn't add that much in power savings.
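A first-order frame for the node-shrink point above: dynamic power scales roughly as P = C·V²·f, so one can sketch what a 40nm→28nm move alone might buy at the same clock. The capacitance and voltage scaling factors below are illustrative assumptions, not TSMC figures:

```python
# First-order dynamic power: P = C * V^2 * f.
# Assumed (illustrative) scaling for 40nm -> 28nm at the same frequency:
# switched capacitance down ~25%, supply voltage down ~7%.
def dynamic_power(c, v, f):
    """Normalized dynamic switching power."""
    return c * v ** 2 * f

p_40nm = dynamic_power(c=1.00, v=1.00, f=1.0)  # normalized baseline
p_28nm = dynamic_power(c=0.75, v=0.93, f=1.0)  # assumed scaling factors

savings = 1 - p_28nm / p_40nm
print(f"Process scaling alone saves ~{savings * 100:.0f}%")
```

Under these assumptions the shrink alone recovers on the order of a third of the power, short of a 50% gain, before the added A15 cores and larger GPU spend some of it back.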
Nvidia has always taken a lot of pride in how many transistors it can pack into silicon; it's tough to do that and also claim a power-efficient design.
To be fair - Windows RT does not exploit the 4+1 architecture (the ninja core is unused). Also, T4 will be 28nm, as opposed to the 40nm T3. Not sure about the power considerations for the new GPU in T4, though.
To me it's rather pathetic how Intel is now reduced to this: going to bloggers to prove its superiority. Btw, T3 is $20 - how much for the precious Atom?
Anandtech is a pro-Qualcomm shop, and it's a symbiotic relationship in which they also get leaked early products from QCOM. But nothing tops the Nvidia hatred at SemiAccurate.