integrating machines with the Aries interconnect [1]. As mentioned earlier, Intel acquired several interconnect assets, including Cray's interconnect hardware program, the QLogic InfiniBand team, and Fulcrum Microsystems, in 2011 and 2012, but Intel does not currently offer any new interconnect technology.
33.2 Future Directions
The scale of future clusters at the DOE National Laboratories will force a change in interconnect technology. As clusters scale up for scientific workloads, communication quickly becomes the bottleneck. On very large-scale clusters, the cost of the interconnect and the cost of power will drive technology requirements. PCIe interface performance is also not increasing as fast as processor or network performance: PCIe Gen 3 is widely used today, with no clear release date for PCIe Gen 4 on the horizon.1
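To put the PCIe numbers in perspective, the following back-of-envelope sketch compares a PCIe Gen 3 x16 interface against the 400 GB/s node target discussed below. The lane count and slot-count framing are illustrative assumptions, not requirements from any cited program.

    # Back-of-envelope: PCIe Gen 3 host interface vs. a 400 GB/s node target.
    GT_PER_LANE = 8.0          # PCIe Gen 3 signaling rate: 8 GT/s per lane
    ENCODING = 128.0 / 130.0   # 128b/130b line encoding
    LANES = 16                 # a common x16 slot

    gb_per_s = GT_PER_LANE * ENCODING * LANES / 8.0   # ~15.75 GB/s per direction
    target = 400.0                                    # Fast Forward node target [3]
    print(f"PCIe Gen 3 x16: ~{gb_per_s:.2f} GB/s per direction")
    print(f"x16 interfaces needed for {target:.0f} GB/s: {target / gb_per_s:.0f}")

A single x16 slot delivers roughly 15.75 GB/s per direction, so on the order of 25 such interfaces would be needed to feed one exascale-class node, which is why the host interface itself is a concern.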
Many of the future interconnects for large-scale machines in the 2017-and-later time frame are expected to be based on silicon photonics technology, and every major processor vendor is expected to be investigating how to implement this type of network by then. The development of this technology is driven primarily by lower power and higher bandwidth requirements. The 2008 DARPA Exascale report speculates 1.5–2.0 pJ/bit for an exascale interconnect [4]. Extrapolated to the expected size of an exascale machine, the projected energy requirement would be on the order of 2–4 MW. The current target for an exascale machine is 20 MW of total power, so the interconnect cannot reasonably be allowed to consume up to 20% of the total allocated power budget.
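The extrapolation is simple arithmetic: interconnect power is energy per bit multiplied by the aggregate bit rate. The sketch below reproduces it under stated assumptions; the node count, per-node bandwidth, and average hop count are illustrative guesses, not figures from the report.

    # Interconnect power = energy per bit x aggregate bits moved per second.
    PJ_PER_BIT = (1.5, 2.0)   # range from the 2008 DARPA Exascale report [4]
    NODES = 100_000           # assumed exascale node count
    GBYTES_PER_NODE = 400     # assumed per-node injection bandwidth (GB/s)
    AVG_HOPS = 5              # assumed links each bit traverses

    bits_per_s = NODES * GBYTES_PER_NODE * 1e9 * 8 * AVG_HOPS
    for pj in PJ_PER_BIT:
        megawatts = pj * 1e-12 * bits_per_s / 1e6
        print(f"{pj} pJ/bit -> {megawatts:.1f} MW")
    # Prints ~2.4 and ~3.2 MW: 12-16% of a 20 MW machine power budget.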
There are several technologies in development to mitigate this problem, including 3D chip stacking as well as on-package and on-die solutions, which require significantly less power. The DOE Fast Forward and Design Forward programs have listed a minimum node interface performance requirement of 400 GB/s [3], which requires technology advances in several areas. With current technology evolution, the speed of the basic link has to increase to reach these higher rates. An alternative to a very fast single pipe is to do what telecommunications companies have done for years: use wavelength division multiplexing (WDM), as shown in Figure 33.2. WDM provides parallel communication channels over a single physical optical fiber link by utilizing discrete ranges of light wavelengths. WDM can be implemented as CWDM (coarse WDM), with up to 8 wavelengths, or as DWDM (dense WDM), with up to 64–80 wavelengths. With this type of technology, data movement costs can be reduced.
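The appeal of WDM is multiplicative: per-fiber bandwidth is the number of wavelengths times the per-wavelength data rate. A minimal sketch, assuming an illustrative 25 Gb/s per wavelength:

    # Per-fiber bandwidth under WDM: wavelengths x per-wavelength rate.
    GBPS_PER_WAVELENGTH = 25.0   # assumed signaling rate per wavelength (Gb/s)

    for name, count in [("CWDM", 8), ("DWDM", 64), ("DWDM", 80)]:
        gbytes = count * GBPS_PER_WAVELENGTH / 8.0
        print(f"{name} with {count:2d} wavelengths: {gbytes:.0f} GB/s per fiber")
    # DWDM at 64 x 25 Gb/s yields ~200 GB/s on one fiber, so a pair of
    # fibers could meet the 400 GB/s node target discussed above.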
Networking for exascale systems will move to mostly optical connections in network interface cards (NICs), switches, and routers. There has been a recent emergence of vendors selling
1 PCIe releases are announced at PCISIG.com.
 