This is incorrect, Ashraf. The QLogic products were a full speed generation behind Mellanox and lacked crucial features for general-purpose use in data centers. The QLogic HCAs are worthwhile only for MPI applications, which hide their RDMA shortcomings under the MPI layer. The InfiniBand specification has since been expanded to include a new transport type that provides many of the advantages QLogic claimed for its proprietary implementation, yet blends cleanly with the other transport modes in the spec, and Mellanox supports these extensions. The development of the features necessary for complete, high-performance RDMA support is the most challenging aspect of InfiniBand development.

Furthermore, the rumor that Intel will simply leap ahead of Mellanox by going to 100Gbps links is probably unrealistic. 100Gbps links will become practical at Layer 1 (which, incidentally, is what Intel announced in conjunction with Facebook), but getting Layer 4 and above to run at that speed is a different problem altogether, and it is the problem space in which Mellanox is unchallenged for InfiniBand. If QLogic had had such technology in the oven, it is inconceivable that they would have sold the InfiniBand team to Intel for such a modest price; and if Intel already had the 100Gbps L2-L4 technology, they would not have wasted money and endured a culture clash buying the QLogic team.