I just don't think it is anything special now, like you and the author do. The only time it gets close to this low-end dual-core i3 SKU is either in throughput (with twice/quadruple the cores) or when using crypto. In single-thread performance it is not really close to a Core running at roughly half its native speed. Benchmark conclusions are often about interpreting what you are looking at rather than just going wow at the headline graphs presented to you.
I understand how to read benchmark charts (FYI, Larabel is one of the developers of that benchmark suite). At the end of the day, we're talking about a relatively small, cheap core, burning around 2.5 watts (single core, assuming no Nvidia games), and it's doing much better than I expected on single-threaded benchmarks that actually do something - we're not talking Geekbench here :)
The part is running a version of Ubuntu provided by Nvidia which has highly optimized GPU drivers.
What would the A57 w/o Maxwell benchmark results look like? Any thoughts? Guesses?
I suspect they would not look as good, especially on floating point workloads that can use the Maxwell GPU.
He didn't test any graphics benchmarks, only CPU ones - Maxwell isn't a factor. He says he'll test them when he gets a physical device (along with power consumption).
The point is that the 2.1 GHz i3 has twice the single-thread performance of the A57, which is more than good enough to maintain a healthy price premium over it.
Be honest, did you expect the performance of the A57 to be that good against something like the i3-5010? I certainly didn't, especially given the immaturity of the software stack for V8.
As for the rest of your points, can't really argue against them:)
Intel is the one that is gaining the most market share in both tablets and phones while completely shutting ARM chips out of its legacy PC/server territory. The Zenfone 2 is a great-selling product with only 22nm Moorefield.
I certainly won't dispute Intel gaining share in tablets; we'll have to see what the sales numbers are like for the Zenfone 2. As for servers, we are seeing small volumes of production ARM-based servers. It remains to be seen if this will grow into anything :)
Yeah, ARM is improving its processors over time; the 20nm A57 is now more performant per core than a 15-year-old 180nm Pentium III. Congratulations on joining the not-so-slow party! ;-)
:) At the end of the day, Larabel is comparing the 20nm A57 to a 14nm i3 (launched in Q1 2015) which has a tray price of $281...
'Intel still has to deliver competitive CPUs, GPUs'
Only in your imagination.
Really? Well, we'll just wait and see who wins all the sockets for the next gen smartphones then:)
My point is that Intel simply can't sit back, deploy XPoint, and expect to gain market share without ramping up the performance profile of its SoCs. We have the A9 around the corner, QCOM's next gen, Samsung's Mongoose, ARM's A72 and other high-end cores, as well as new GPUs from all the vendors.
No, it only shows what four A57 cores and four A53 cores can do against a dual-core Core running at only 2.1 GHz, i.e. they still come up short despite four times as many cores. Core goes up to eight cores in the top model, running close to 4 GHz.
The comments regarding Broadwell weren't mine. There is much we don't know about these tests, including what kernel was used. It could be all 8 cores, or just cluster switching (only 4 cores are seen by the kernel, which I believe is the default for baseline Linux). Regardless, there are decent single-threaded tests in that lot which show what an A57 can do. Also, power is something that needs to be looked at, as it could be using more than 10 watts for all we know.
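For what it's worth, on a Linux box you can check which of those modes the kernel is in from the shell; this is just a sketch using standard sysfs paths, nothing board-specific:

```shell
# How many CPUs this process can actually use. With cluster
# switching only one 4-core cluster is visible at a time; with
# global task scheduling all 8 cores of a big.LITTLE SoC show up.
nproc

# Which logical CPUs the kernel has brought online,
# e.g. "0-3" (one cluster) vs "0-7" (both clusters).
cat /sys/devices/system/cpu/online
```

If `nproc` reports 4 on an 8-core big.LITTLE part, the benchmark was almost certainly running under cluster switching rather than global task scheduling.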
:) Perhaps I should have mentioned that the SoC is still operating at under 10 Watts and not on the best process.
"In some workloads, the Tegra X1 comes up just shy of an Intel Core i3 Broadwell system."
Some interesting numbers over on phoronix. Just shows what an A57 can actually do when not constrained by power and thermal envelopes.
Don't worry about the "interface"... Intel has already implemented stacked DRAM connected via TSVs.
Micron is top notch, successfully battling Samsung and Hynix - ARM just does not have the knowledge.
Lucky for Intel and Micron eh?
So, rather than deliver the best CPU/GPU (power, price, performance) SoC, Intel will now use a TSV-based interface to XPoint as a flash replacement?
Better performance across the whole SoC range due to reduced latencies.
Intel can price lower-end SoCs more competitively since it will be capturing more of the BOM [AP + minimal DRAM + 3D XPoint (instead of NAND)].
For the higher-end SoCs, it can leverage the higher performance of more 3D XPoint memory (instead of NAND) to command higher prices. Again, it can capture more of the BOM and leverage this advantage against ARM vendors.
I am pretty sure all this should already be in the works.
So you think Intel will use XPoint in this highly competitive market? As a flash replacement, what interface will it use?
Yes, XPoint looks impressive, but we still don't know key metrics (such as power consumption). Intel still has to deliver competitive CPUs, GPUs, etc. I doubt that simply having a 'faster' flash memory will really make that much of a difference to these SoCs.
[It would be absolutely phenomenal if Intel can come out with these designs over the next 6-12 months. Would be a death knell for ARM vendors.]
[Sure, gamers will always want a discrete video card, but the point is that overall those who need a discrete video card are fewer and fewer. Bump for Intel.]
Ok, I buy that. Although I don't understand why this would be a bump for Intel?
Today it takes a $340 Nvidia 970 or 980 card or better to deliver 4K video over HDMI at 60Hz with a 4:4:4 color space. Only a year ago, when these cards were introduced, they were considered a breakthrough in price/performance. If Skylake can deliver video with the same specifications using built-in graphics over DP or HDMI, there will be a boatload of pain to go around at Nvidia. AMD/ATI hasn't even been a contender. Filmmakers and videographers may still need their discrete card, but the masses can probably do quite well with the built-in capability.
I'm not sure I understand your point here? There is a difference between the resolution supported by a video card (4K or not) and the general performance of the GPU in question. Those 980 cards pack some GPU punch.
Performance of the Annapurna Alpine AL5140 ARM server is abysmal.
No, it's not just abysmal, it's shockingly bad. But you are missing the point. These chips are not about CPU, they are about IO. Once an IO lane is saturated, CPU means nothing. It's the IO processing bandwidth that made Amazon buy them. That cold server storage example I gave is exactly the kind of workload that Amazon will use these SoCs for.
And the big Cavium is all hot air.
I asked about failures. Both Cavium and AMCC are shipping product and it's still very early days for those guys. Once they abandon their attempts, we can say they failed.
Annapurna has nothing shipping yet. This is an Israeli start-up, a work in progress.
Yes it has... throw the following string into Google
We have been hearing this for the last 4 years.
Failures: Calxeda (bankrupt), AMCC, and many more
It's true that Calxeda went bankrupt, but AMCC is very much alive. Who are the other failures?
This start-up that Amazon bought has SoCs in shipping products.
like Calxeda, AMCC, etc ?
:) AMCC seemed to be doing better than expected with their SoC, given its poor performance-per-watt numbers.
[Key word: potentially. This is your great white hope for ARM servers? Ha. Nobody seems excited about it.
Googling "aws arm annapurna" yields a mere 84 results. Meh.]
If the ARMy makes any inroads into the data centre, it'll come from many different product lines, not a single "great hope".
In this case the purchase shows that Amazon is working on their own silicon. It wouldn't replace Intel in the compute space (ec2) but perhaps other AWS services (cloudfront, s3, glacier etc).