Quote from the comments section of a Yahoo article today:
Too many people are fixated on the server market and think it will save Intel. I don't know the exact numbers, but in our organization, virtualization is reducing server count in dramatic ways. We go 10 deep on one server now. At a minimum, we go 5 on heavier apps. That 5:1 or 10:1 reduction in server count has got to impact someone in the hardware space.
Intel sold 6400 of the Phi boards to U of TX for their supercomputer project at $400 per board. I thought it might be fun to play with one if the boards were $400 or so. Sigh.
"While the prices of Intel Xeon E5 systems and the Tesla boards were delivered at special but still realistic pricing, we were quite surprised to learn that the computing center only paid around $400 per Xeon Phi board. Given that competing Tesla K20 boards retail for $3199 (available in December), this can be viewed from a price dumping perspective. Bear in mind the TACC only had $2.4 million for Xeon Phi boards, and reaching 8 PFLOPS, i.e. 7+ PFLOPS, requires around 6,000-7,000 boards. At $400, it is quite a steal."
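The quote's figures are easy to sanity-check. A quick back-of-envelope in Python, using only the numbers from the quote itself (budget, board price, board count, PFLOPS target):

```python
# Back-of-envelope check of the quoted figures: $2.4M budget,
# $400 per board, 6400 boards sold, "7+ PFLOPS" target.
budget = 2_400_000          # TACC budget for Xeon Phi boards, USD
price_per_board = 400       # reported special price, USD
boards_affordable = budget // price_per_board
print(boards_affordable)    # 6000 -- matches the quote's 6,000-7,000 range

# What each board must deliver for 7 PFLOPS spread across 6400 boards:
per_board_tflops = 7_000 / 6400   # 7 PFLOPS = 7000 TFLOPS
print(round(per_board_tflops, 2)) # ~1.09 TFLOPS per board
```

So the $2.4M budget at $400 per board buys almost exactly the fleet the article says is needed, which is why the pricing looks like such a steal next to a $3199 K20.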
Virtualization is not a new phenomenon, and the middleware software that provides it actually costs performance. Sure, if your servers were set up inefficiently and lightly loaded in the first place, it will help you consolidate, but most sites are already set up and are either working flat out or using virtualization already. One user's subjective comments are not a reason to think Intel's server revenue will suddenly decline ;-). Remember, server revenue is driven by increasing global demand, performance, and energy efficiency, not by the current form-factor fashion ;-).
There is much variation depending on what the pre-VM application load on the machines was, how much memory the apps needed, and what OS they ran on. A 5:1 or 10:1 machine reduction is not going to be common. Virtualization may even attract some programs to these machines that were previously running on desktop systems.
Virtualization is when you install a VM manager (hypervisor) at the lowest level; that VM software then provides a "virtual machine" environment to an OS, so that OS can in turn support applications on top of it. The hypervisor can run a copy of Win7 or Linux or any OS on top of the VM, in 32-bit or 64-bit modes.
The VM software adds extra compute-cycle overhead to the system. Multiple copies of the OS and/or OS data, and likely different applications, will stress the HW more. The HW cannot be run at 100% load for very long without customers complaining about throughput or response times.
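To put that overhead point in rough numbers, here is a minimal sketch; the 10% hypervisor overhead and the 80% sustainable-load ceiling are illustrative assumptions, not measured figures:

```python
# Illustrative only: how hypervisor overhead and a utilization ceiling
# shrink the capacity gained by consolidation. The 10% overhead and
# 80% sustainable-load ceiling are assumed numbers for the sketch.
native_capacity = 1.0          # one host's raw compute, normalized
hypervisor_overhead = 0.10     # fraction lost to the VM layer (assumed)
max_sustained_load = 0.80      # above this, response times suffer (assumed)

usable = native_capacity * (1 - hypervisor_overhead) * max_sustained_load
print(round(usable, 2))        # 0.72 -- only ~72% of the host is really usable

# So ten guests that each averaged 7% CPU on their old servers
# (0.70 total) just about fit; ten that averaged 10% do not.
```

The exact percentages vary by workload and hypervisor, but the shape of the arithmetic is why "go 10 deep" only works when the old boxes were mostly idle.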
Some random thoughts:
They will likely end up buying larger machines with more compute power. EP and EX systems = higher CPU price and margin. They will have to pay for the extra VM-environment cycle overhead. If they had lightly loaded machines, it makes sense for them to do this.
VM would seem to encourage a homogeneous compute environment so all the x86 software could run anywhere. If I have a lab of Intel VM machines, there won't be any reason for an ARM system unless someone specifically wants to run ARM apps.
Companies have been doing this for years. Oracle sells access to their products via the cloud, and they are (I think) using VM sessions to protect customer data while making the most efficient use of space, power, and equipment.
I think it is one natural configuration. People have been using time-shared systems for 50 years. Intel is driving performance up so much that this becomes a more desirable configuration; that is why Intel charges $2000 for these CPUs rather than $200. I doubt that a customer would collapse 10:1 with "like" systems. If they did, they bought too much machine in the original purchases.
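The "too much machine" point follows from simple utilization arithmetic. A hedged sketch, where the 80% host ceiling is an assumed comfort margin and VM overhead is ignored for simplicity:

```python
# Back-of-envelope: how busy the old servers could have been, at most,
# for an N:1 consolidation onto one identical ("like") host to fit.
# The 80% ceiling is an assumed comfort margin, not a spec.
def max_old_utilization(ratio, host_ceiling=0.80):
    """Highest average load (as a fraction) the old boxes could have
    carried for `ratio` of them to fit under `host_ceiling` on one
    like-for-like host, ignoring VM overhead."""
    return host_ceiling / ratio

print(round(max_old_utilization(10), 2))  # 0.08 -> old servers <= 8% busy
print(round(max_old_utilization(5), 2))   # 0.16 -> old servers <= 16% busy
```

In other words, a 10:1 collapse onto identical hardware only adds up if the original servers averaged single-digit utilization, which is exactly the overprovisioning argument above.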