
Intel Corporation Message Board

  • getanid61 Feb 21, 2013 1:28 PM

    Intel says faster processors need faster I/O

    This year there wasn't a single processor paper from Intel at ISSCC, but there was a very interesting paper on fast I/O technology that combines scalability with significant power savings.

    Minimizing I/O cost, area and power is crucial to achieving a practically realisable system with large bandwidth. To meet these needs, Intel has developed a low-power, dense 64-lane I/O system with a per-port aggregate bandwidth of up to 1 Tbit/s and a power efficiency of 2.6 pJ/bit.

    In addition to the interconnect, Intel has also developed a high-density connector and cable attached to the top side of the package.

    We have previously seen fast interconnects, but the news this time around is the density as well as the scalability. It is possible to vary the lane data rate from 2 Gbit/s to 16 Gbit/s on the fly.

    This provides scalable aggregate bandwidth from 128 Gbit/s to 1 Tbit/s at a power efficiency of 0.8 pJ/bit to 2.6 pJ/bit. To put it simply, you can adjust the bandwidth to whatever your needs are, and when you don't need a lot of bandwidth you save power instead of running at full tilt.
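    The scaling above is easy to sanity-check: aggregate bandwidth is just lanes times per-lane rate, and link power is bits per second times energy per bit. Here is a quick back-of-the-envelope sketch using only the figures quoted in the post (the pairing of the lowest rate with the best efficiency and the highest rate with the worst is my reading of the post, not a confirmed Intel figure):

    ```python
    # Sanity-check of the quoted numbers: 64 lanes, 2-16 Gbit/s per lane,
    # 0.8-2.6 pJ/bit. The endpoint pairings are an assumption from the post.

    LANES = 64

    def aggregate_bw_gbps(lane_rate_gbps: float) -> float:
        """Aggregate bandwidth across all lanes, in Gbit/s."""
        return LANES * lane_rate_gbps

    def link_power_watts(lane_rate_gbps: float, pj_per_bit: float) -> float:
        """Link power = bits per second * energy per bit (pJ -> J)."""
        bits_per_sec = aggregate_bw_gbps(lane_rate_gbps) * 1e9
        return bits_per_sec * pj_per_bit * 1e-12

    # Low end: 2 Gbit/s per lane at 0.8 pJ/bit
    low_bw = aggregate_bw_gbps(2)        # 128 Gbit/s
    low_pw = link_power_watts(2, 0.8)    # ~0.1 W

    # High end: 16 Gbit/s per lane at 2.6 pJ/bit
    high_bw = aggregate_bw_gbps(16)      # 1024 Gbit/s, i.e. ~1 Tbit/s
    high_pw = link_power_watts(16, 2.6)  # ~2.7 W
    ```

    In other words, running flat out costs roughly 25 times the power of the low-rate mode for 8 times the bandwidth, which is exactly why scaling the lane rate down when full bandwidth isn't needed pays off.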

    The cable that Intel uses has a maximum length of 50 cm and can be used either to connect processors to each other or to connect a processor to a DRAM subsystem in order to feed the processor with data.

    • Intel Hybrid Memory Cube: When Ultra High-Density Meets Ultra High-Speed.

      Hybrid Memory Cube Enables 7 Times Higher DRAM Power Efficiency
      [09/15/2011 04:27 PM]
      by Anton Shilov
      ....probably @ 10nm
      “We knew that future high-speed memory will need to conquer a challenging set of tradeoffs and achieve low cost and power as well as high density and speed. We came to the conclusion that mating DRAM and a logic process based I/O buffer using 3D stacking could be the way to solve the dilemma. We found out that once we placed a multi-layer DRAM stack on top of a logic layer, we could solve another memory problem which limits the ability to efficiently transfer data from the DRAM memory cells to the corresponding I/O circuits,” said Bryan Casper, an Intel official.
      Getting the data out of the memory cells to the I/O is analogous to the difficulty of navigating the streets of a crowded city. Placing the logic layer underneath the DRAM stack has a similar effect to building a high-speed subway system underneath the streets, bypassing encumbrances such as the DRAM process and the routing-restricted memory arrays. Additionally, the adjacent logic layer enables the integration of intelligent control logic that hides the complexities of DRAM array access, allowing the microprocessor's memory controller to employ much more straightforward access protocols than has been achievable in the past, according to Intel.
