"Embedded DRAM has a long history of niche applications within the industry. The biggest proponent by far is IBM, which uses eDRAM for the massive last level caches in the POWER7,"
One of the NAND improvements high on their list is better read/write timings. Intel could use eDRAM as a very large last-level cache and then use the NAND as persistent memory that does not need to be refreshed. The effective memory latency would be virtually unchanged with a large eDRAM cache. Skylake has a wider bus, the VR has been taken off-chip, and it can effectively use the Silvermont "cache shutdown algorithms" to take power consumption to a very small fraction of what their CPU/APU/SoC consumes today. Just moving the VR off-chip will drive down Skylake power.
It is not only possible .... Intel announced them last June and I suspect several customers are already working with them. Google "FPGA Xeon".
Fujitsu has already put custom logic around their SPARC to handle "Oracle numbers" and I suspect that Oracle is also a good Xeon candidate.
"as a matter of fact the 4/17 $30's that i sold for 0.68 are now looking really good at 0.40."
Very nice! I like hearing about writing puts. Selling puts is one of the ways I make much of my profit. Intel options are VERY LIQUID and have a very narrow spread .... not too painful to get in and out. I have Schwab level 3 authorization so I do not have to cover the puts with cash. I go naked.
On Wednesday, I sold Jun 19th $30 puts for $1.55 and closed them today as Intel was near $32.30 and leaking back toward $32.00. I wanted to lock in some profit; if INTC goes down, I will hop back in, OR if Intel goes on up ... oh well.
I still have May $34 puts and May $32's that I sold on the way down .... ouch. Those are both underwater and I will probably roll them out until they get down in the 25 range.
" I dont understand why SEC and FED wont subpoena WSJ Bloomberg and other scam people"
The "FED" part is easy. The FED has no authority or responsibility over the issue.
The "SEC" part is also pretty easy. It is an issue of funding and staffing. There is no better way to slow and stop investigations than to starve the organization by cutting funding. If you would like increased monitoring by the SEC, then a handwritten letter to your Congressman and Senator would help them understand how important it is to you for the SEC to be fully funded.
Might be a time to sell puts to buy back cheaper. You can sell the APR $31.50 for 70 cents for an effective buy price of $30.80 or do something like the May $32 for $1.50 ... effectively $30.50.
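For anyone checking the arithmetic, the effective buy price is just strike minus premium. A quick Python sketch using the numbers quoted above (commissions and assignment mechanics ignored; this is only the cost-basis math):

```python
# "Sell a put to buy the stock cheaper" arithmetic, using the premiums
# quoted above (APR $31.50 put for $0.70, May $32.00 put for $1.50).

def effective_buy_price(strike: float, premium: float) -> float:
    """If the put is assigned you buy at the strike, but you keep the
    premium, so your net cost basis is strike minus premium."""
    return strike - premium

print(round(effective_buy_price(31.50, 0.70), 2))  # 30.8
print(round(effective_buy_price(32.00, 1.50), 2))  # 30.5
```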
"what's the $ value of memory compared to APU in a smart phone"
.... I think a better question might be ....
"what's the $ value of an APU that doesn't require memory in a smart phone" .... because the memory is inside the APU package. The "value" is far beyond the cost of the memory component. The value includes work to design, track, build and test the extra component.
Any idea how it compares with the Samsung 3D NAND they are shipping or the Toshiba/Sandisk 3D NAND announced today too?
"Mainly, it was about wire-free computing features in Skylake - (wireless charging, wireless display, etc.)"
I am not sure what will be in Skylake silicon that is specific to wireless. That seems to be a system design issue.
"... he had no viable strategy for mobile whatsoever and the company had lost its focus. "
I think at the time that Apple approached Intel, the Intel fabs had just turned on the SandyBridge 32nm product lines and all the fabs were running at full capacity. I think the choice he was faced with was to either build PC Client and Server products for customers with orders in hand OR to take on additional work to build low margin "embedded" products on the advanced process lines for an industry in its formative stage.
"What's coming out of Intel now looks very encouraging."
I am looking forward to Skylake to get a better understanding of why Intel seems to be skipping a Broadwell product line that seems very nice. There must be more to the story that has not been disclosed yet.
"Huge losses & "breakeven" in the next year or two in the mobile sector is not something to hang your hat on."
The accounting of charges is difficult to allocate to specific divisions. For example, the Silvermont core they developed is being used in both PC client products and in Knights Landing. Features are also being incorporated into the lower power server chips. Intel does not adjust the accounting for cross division sharing of IP. Much, if not all, of the development would have been done anyway.
"No sense of urgency among the workforce. "
Just curious. How many of the work force did you survey?
"Server growth (mainly by IaaS cloud providers) is literally the only growth area that is making an impact"
... and will likely continue to do so by borrowing the mobile IP and incorporating it into the server products.
It appears to me that Intel has almost skipped the Broadwell designs and is going directly to Skylake. It doesn't make sense to me if Skylake is just incrementally better graphics and power ... which is what I (and most people) think.
It has to be more dramatic than just the next version of power reduction and graphics performance.
Wireless charging is a system design issue .... isn't it? My daughter damaged her Samsung charging connector and added wireless charging (a simple adapter placed inside on the battery, plus an external plate) for $20-$30 to postpone buying a replacement phone. I doubt she will go back to cables unless she has to.
I think the HTC cheat was to insert code that recognized when a benchmark was running and then boosted the device into its high-power mode to inflate the benchmark numbers. If you changed the name of the benchmark program, it ran at normal speed.
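The mechanism, as I understand it, is simple. A hypothetical Python sketch of that kind of detect-and-boost logic (the function, package IDs, and governor names here are made up for illustration, not HTC's actual code):

```python
# Hypothetical sketch of a benchmark-detection cheat: pick the CPU
# governor based on which app is in the foreground.

BENCHMARK_APPS = {"com.antutu.ABenchMark", "com.primatelabs.geekbench"}

def choose_power_mode(foreground_app: str) -> str:
    # If a known benchmark is in the foreground, jump to the highest
    # power state; everything else gets the normal governor.
    if foreground_app in BENCHMARK_APPS:
        return "performance"   # max clocks, benchmark-friendly
    return "ondemand"          # normal day-to-day behavior

print(choose_power_mode("com.antutu.ABenchMark"))  # performance
print(choose_power_mode("com.antutu.renamed"))     # ondemand
```

Renaming the benchmark package defeats the lookup, which matches the behavior people observed.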
I think that Samsung did the same thing as HTC.
"Well, for a start AVX2 isn't supported in silvermont:) But, I know where you are coming from. "
AVX2 does not need to be supported in order for Silvermont code to benefit. When ICC was opened up for AVX2 work, the new algorithms were incorporated throughout the compiler, and all the code it generates benefits from the new work. The point is that a huge amount of energy was invested in parallel/vector work. The compiler's code analysis starts in the common front end and splits at the code generator.
If you continue to read the thread you quoted, more testing was done by Nothingness, and his conclusion was: "I know some will still deny, but all my doubts have vanished: icc is definitely cheating."
... and after a little discussion, he backed off a little: "On my side I'm 99.9% confident Intel cheated." He was upset and venting.
"Sentiment: Strong Buy"
Did your "sentiment" change or did you just get sloppy?
" It has been over a month since that has happened."
Did you make a mistake, or is your "analysis" just getting sloppy? The MARKET has NOT been up 2 days in a row, but Intel has. The market, as measured by SPY, has not been up 2 days in a row during the last month. Intel, however, was up 3 days in a row on March 2, 3 and 4. Intel managed these 3 UP days in the month following ex-dividend, when INTC is traditionally weak.
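The check itself is trivial. A sketch in Python with made-up closing prices (illustrative only, not actual INTC or SPY data):

```python
# Longest "up N days in a row" streak from a series of closing prices.

def max_up_streak(closes):
    """Longest run of consecutive days where the close rose vs the prior close."""
    best = run = 0
    for prev, cur in zip(closes, closes[1:]):
        run = run + 1 if cur > prev else 0
        best = max(best, run)
    return best

intc_like = [31.0, 31.4, 31.9, 32.3, 32.1]   # three up days, then a down day
print(max_up_streak(intc_like))  # 3
```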
"Does that really represent your position?"
More or less.
The article contains much good information and analysis. The author confesses his ignorance about legitimate examples and then makes some assumptions based on that ignorance. I am personally aware of examples that he is not. The author points out that the optimization is a new one, and that is true. It was inserted to support the 256-bit AVX2 integer operations in Haswell silicon and the upcoming AVX-512 instruction set. He arrives at incorrect conclusions because he is not aware of the practical application examples; I arrive at different conclusions based on my personal knowledge.
"but this isn't what happened...ICC broke the benchmark - nothing to do with vectorization of code."
Seems like what the compiler did was something called an "automatic vector transformation": the compiler transformed a program loop operating on 1-bit data into a loop that operates on 32 data elements per iteration. The ICC compiler would generate AVX2 code to transform the same loop operating on a 1-byte character array into a loop operating on 32 bytes per iteration. Can you explain why that logic is wrong?
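To illustrate the shape of that transformation, here is a rough Python/NumPy sketch. The real work happens in ICC-generated AVX2 machine code; this only shows the 1-element-per-iteration loop becoming a 32-elements-per-iteration loop, which is exactly what an auto-vectorizer does:

```python
import numpy as np

# A 64-byte buffer standing in for the benchmark's character array.
data = np.frombuffer(b"abcdefgh" * 8, dtype=np.uint8).copy()

def toggle_scalar(buf):
    # Scalar form: one element per loop iteration, like the original source.
    for i in range(len(buf)):
        buf[i] ^= 0xFF
    return buf

def toggle_chunked(buf):
    # "Vectorized" form: 32 elements per iteration, the shape an AVX2
    # auto-vectorizer would generate for the same loop (one 256-bit op).
    for i in range(0, len(buf), 32):
        buf[i:i + 32] ^= 0xFF   # one 32-byte-wide operation
    return buf

a = toggle_scalar(data.copy())
b = toggle_chunked(data.copy())
print(np.array_equal(a, b))  # True: same result, 32x fewer iterations
```

Same answer, far fewer loop trips — which is why the benchmark's timing collapsed once ICC learned the trick.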
"The question becomes is GB a good proxy or not?"
Agree. Then the discussion becomes: what is GB's collection of over-weighted compression/decompression and encryption/decryption benchmarks a good proxy for?
My concern is that GB contains a pile of code fragments that get far more respect than they deserve.
Ugh! The rubout key kills the Yahoo screen and my message ..... the space bar posts partial messages.
I re-read the AnandTech 7-10-2013 Exophase "AnTuTu and Intel" forum posting and it seems to represent my position.
I raise VECTOR because the Intel compiler has undergone major work to improve vectorization, driven by customer code and new instruction support ... not by some desire to "cheat" on AnTuTu. The result of this work was a compiler that is better at identifying and generating parallel code.
The ToggleBitRun() function operates on a bitmap (vector) and sets/resets a section of that bit vector.
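I have not seen the actual AnTuTu source, but a minimal Python sketch of a function with that contract might look like this (the name, signature, and layout are guesses, not the real code):

```python
def toggle_bit_run(bitmap: bytearray, start: int, length: int) -> bytearray:
    """Flip (set<->reset) bits [start, start+length) in a byte-backed bitmap.
    Hypothetical reconstruction of a ToggleBitRun()-style routine."""
    for bit in range(start, start + length):
        bitmap[bit // 8] ^= 1 << (bit % 8)   # XOR flips exactly one bit
    return bitmap

bm = bytearray(2)                  # 16 bits, all zero
toggle_bit_run(bm, 4, 8)           # flip bits 4..11
print(f"{bm[0]:08b} {bm[1]:08b}")  # 11110000 00001111
```

A loop like this — one bit per iteration over a flat array — is precisely the kind of code an auto-vectorizer can legally collapse into wide XOR operations.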
As for "CPU benchmarks", I think they are not very valuable. For me, a benchmark is a proxy for how my workload will perform. Which one of the benchmarks is a proxy for "voice recognition", since that is what matters to me? General CPU benchmarks tell you which device runs that particular code faster or slower, but ... that has limited value.
" The real issue was was ICC applying optimizations that only worked for AnTuTu (and no other application). "
This is simply not true.
The ICC vector optimization effort was driven by the development of in-memory, column-oriented databases and HPC apps. The ICC vector optimizations track the releases of SAP HANA and Oracle's in-memory databases (TimesTen), which use dictionary-driven column structures. You get stunning performance improvements in decision-support queries. AnTuTu likely had no part in it and was a surprise to Intel.
TSX for OLTP operations, and vector operations for database DSS operations.
"The point of having these cryptography sub tests (along with compression tests) is that they are very good at representing general integer cpu performance."
If you want to think so, that is fine with me. Like I said ... we simply disagree. If that were the goal, then it was really dumb for Geekbench to select code that distills a loop down to a single instruction. It should give you an indication of how little thought they gave to the design of their collection of 1990's code fragments. AnTuTu too.
"No, the issue was that ICC was removing loops from the benchmark. This "optimization" was specific to Antutu."
If you are saying that the optimization was generated only for AnTuTu, then this is where you and I ... I guess disagree.
It would be silly to think that Intel's focus on vector code generation improvements for HPC, AVX, AVX2 and AVX-512 was not going to cause optimizations to percolate down through the entire compiler. AnTuTu was broken, and the dead code was bound to be eliminated.
Besides .... "it's easy enough to strip the score from the number" from the AnTuTu results. 8-).
Geekbench 3.0 .... who on earth runs a SHA application during anything that they do? What % of the time they use their device would that represent?
IMO, 0% use SHA and it represents 0% of real work unless SHA is built into some app I am not aware of.
"Antutu, the benchmark that Intel cheated in? Once a cheater...:)"
The Intel compiler broke the AnTuTu benchmark and AnTuTu "fixed" it so the code could not be auto-vectorized. End users will benefit from the ICC vector optimizations even if the benchmarks do not show it. When a compiler breaks a benchmark, that is "good".
That is quite different from Geekbench 3.0, which intentionally chose benchmarks (SHA1/SHA2) that explicitly give an edge to native instructions.