ABSTRACT

At present, HPC facilities primarily use Ethernet and InfiniBand (IB) interconnect technologies, although Cray and IBM have deployed some larger systems with proprietary interconnects. In the June 2013 Top500 list [8], 40.2% of the systems use InfiniBand (DDR/QDR/FDR) as the interconnect, 43% use a combination of 1 GigE and 10 GigE, 4.2% use the Cray interconnect (SeaStar/Gemini/Aries), 2% are proprietary, and 10% are custom interconnects. The proprietary interconnects that do not come from Cray are from IBM and from vendors in countries such as China and Japan that are developing their own interconnects. The custom category includes interconnects such as the Tofu network found on the Fujitsu machine. InfiniBand has proven itself over the years for small- and medium-size clusters, but due to scalability concerns, the DOE National Laboratories have avoided using IB for their largest capability machines.