"I take exception to Intel and anybody else who spreads disinformation about the fabless semiconductor ecosystem."
Me too .... and so does ideal_invst and ....
did you get this nenni?
he's talking about you
Worse yet, you get an echo chamber of sites like Seeking Alpha, Forbes, and Motley Fool, which will publish things from any idiot out there with a grammar checker. These bottom dwellers of the financial web are funded simply on hits; they will put up anything from nearly anyone and pay by the view. The more inflammatory the headline, the more hits, and the more money both sides make. In short, they don’t seem to give a rat's #$%$ about accuracy, content, or simple logic; it is all about hits.
funny - I posted "show me the logic" with regard to the ALTR topic -
charlie just reiterates what most of us figured out some time ago -
BTW me thinks sometimes charlie is sittin in a glass house - nevertheless he nailed it
great post WW
We should mention here that based on information released during last week’s SPIE Advanced Lithography (2014), it seems EUV is not going to be ready for the N+1 node (10nm). These costs, as well as other capital costs, increase and thus drive up the wafer price, as illustrated by the recent NVidia chart from Semicon Japan (Dec. 2013) below:
This escalating wafer cost eats away the higher transistor density gains, as articulated by NVidia and calculated by IBS’ Dr. Handel Jones and shown in the following table:
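The mechanism that table captures can be sketched with a toy calculation (every number below is made up for illustration; the real figures are in the IBS table): cost per transistor is wafer cost divided by the good transistors you get off each wafer, so when the wafer price rises faster than the usable density, cost per transistor stops falling and can even rise.

```python
# Toy model of cost per transistor across nodes.
# Numbers are illustrative assumptions, NOT the IBS / Dr. Handel Jones figures.
def cost_per_mtransistor(wafer_cost, good_dies_per_wafer, mtransistors_per_die):
    """Cost per million transistors = wafer cost / total good transistors per wafer."""
    return wafer_cost / (good_dies_per_wafer * mtransistors_per_die)

# Hypothetical mature node: cheap wafer, moderate density.
c_old = cost_per_mtransistor(wafer_cost=5000, good_dies_per_wafer=200,
                             mtransistors_per_die=1000)

# Hypothetical next node: wafer ~1.8x the price, but usable density only ~1.6x
# (the ideal 2x gain eroded by multi-patterning, yield, and poorly scaling blocks).
c_new = cost_per_mtransistor(wafer_cost=9000, good_dies_per_wafer=200,
                             mtransistors_per_die=1600)

# Wafer cost grew faster than density, so cost per transistor went UP.
print(c_old, c_new)
```

Under these assumed inputs the newer node is more expensive per transistor, which is exactly the "escalating wafer cost eats the density gains" argument above.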
note nenni was referring to SPIE 2014 for the 450mm delay - nevertheless he did not mention the EUV delay -
Semiconductors are dead taking Intel along with it.
I don't think so - I am diversified - chip equips and Intel - time is on Intel's side.
Unless Apple and QCOM inject capex $, TSMC is going to bleed slowly but surely.
The mobile SoC market is not that big and Intel can leverage CPU to make "cheap" SoC.
The next big thing will be memory interface (3D) and that's strictly a manufacturing issue.
Fab equipment spending to increase 20-30% in 2014
SoC scaling to 16/14 nm could result in a significant cost increase...aka FinFET....
too bad neither nenni nor the intel expert has an understanding of manufacturing economics...
both nenni and the intel expert avoid this topic like the devil avoids holy water
Consequently, the average SoC scaling to 16/14 nm could result in a significant cost increase, and hence 28nm is effectively the last node of Moore’s Law. To make things even worse, the remaining 35% of die area is not composed of only logic gates. More than 10% of the die area is allocated to I/O, pads and analog functions that either scale poorly or do not scale at all. And even in the pure logic domain scaling could not reach the potential 4X density improvements. The following chart was presented by Geoffrey Yeap, VP of Technology at Qualcomm, in his invited paper at IEDM 2013:
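The effect of that non-scaling ~10% can be shown with a small Amdahl-style calculation (the fractions and scale factors below are illustrative assumptions, not the figures from Yeap's IEDM paper): if part of the die stays the same size, the whole-die density gain falls well short of the logic-only gain.

```python
# Amdahl-style toy model: overall die-area shrink when a fraction of the die
# (I/O, pads, analog) does not scale at all. Inputs are illustrative assumptions.
def effective_density_gain(non_scaling_frac, logic_gain):
    """Whole-die density gain when `non_scaling_frac` of the area stays fixed
    and only the remaining fraction shrinks by `logic_gain`."""
    new_area = non_scaling_frac + (1.0 - non_scaling_frac) / logic_gain
    return 1.0 / new_area

ideal = effective_density_gain(0.0, 4.0)    # everything scales: the ideal 4x
real  = effective_density_gain(0.10, 3.0)   # 10% fixed area, logic manages only 3x

print(ideal, real)
```

With these assumed numbers the whole-die gain drops from the ideal 4x to 2.5x, which is the shape of the argument in the quoted passage: fixed-size I/O and analog blocks dilute whatever the logic shrink delivers.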
JPM (US) started the rumor about the voltage regulator (supposedly not working the way it was intended) - JPM was quoting "sources" at an (analog) tech conference
People use GPS and get lost - why? Because they give up on thinking for themselves - they become "brain dead"
a quarter delay (by Intel) means that ALTR is moving to TSMC, which is starting up 20 nm planar?
Let's see whether the Intel expert can come up with a spin
BTW this is JPM Taiwan - and it's not front-page news - it should be, shouldn't it?
Of course it was eagerly picked up by ML...you get the drift
The paper, citing JJ Park of JP Morgan Securities, said that Intel's decision to delay its 14nm Broadwell CPUs to the fourth quarter of 2014 is the main concern that sent Altera switching its orders back to TSMC.
Meanwhile, TSMC has managed to ramp up the yield rate of its 20nm process to 50% recently, which will allow TSMC to enjoy brisk sales in the next three quarters, said Park.
TSMC has not produced one FinFET wafer - so why would ALTR ditch Intel for TSMC?
Looks pretty risky
The reason why it eventually got delayed is that TSMC (and Samsung) have cold feet - they don't seem too keen because they would be digging their own grave...you follow what I am saying
The "big" news at SPIE was that EUV will be delayed - that's bad news in particular for TSMC.
Olympics of killing ...gold medal to Vlad the Impaler.
how many countries did the US invade over the last 40 years?
Sevastopol is critical for the Russian fleet - there won't be a war.
It will get worked out
Today, Intel CTO Justin Rattner is demonstrating the Hybrid Memory Cube, the fastest and most efficient Dynamic Random Access Memory (DRAM) ever built. I want to give you some background on how and why we collaborated with Micron on this new memory technology.

One of my research passions is helping to design computers to be faster and more energy efficient. A portion of my creative energy over my career has been to improve the interconnect within computer systems so that communication between the microprocessor, DRAM, storage and peripherals is faster and lower power with each successive generation. In other words, I’m an I/O guy.

One of the biggest impediments to scaling the performance of servers and data centers is the available bandwidth to memory and the associated cost. As the number of individual processing units (“cores”) on a microprocessor increases, the need to feed the cores with more memory data expands proportionally. Legacy DDR-style DRAM main memory isn’t going to cut it for much of the future high-end systems.

Being an I/O researcher, my initial efforts to solve the memory bandwidth problem were focused exclusively on the I/O to improve the circuits, connectors and wires that help to form the connection between the microprocessor and memory. In the past our research team has demonstrated very low-power I/O connecting multiple microprocessors together at high rates. However, the process technology used to implement a CPU is dramatically different than that used for a DRAM, and it quickly became clear that there were severe limitations to achieving high speed and low power using a commodity DRAM process....
Rebates, in the form of "contra revenue," and one-time engineering fees, called "NRE," are meant to reduce the "total bill of materials," which includes the higher cost of things such as DRAM and other circuitry that must go along with Intel's tablet chips.