ABSTRACT

Machines with multiple processing units are commonly known as parallel processing systems, although a more appropriate term might be concurrent or cooperative processing. It should come as no surprise to the reader that there have been, and still are, many types of high-performance computer systems, most of which are parallel to some extent. The need for high-performance computing hardware is common across many types of applications, each of which has different characteristics that favor some approaches over others. In particular, the communication networks used to interconnect the processing units of parallel systems are a key component, often the most important aspect, of the overall system design. High cost and limited generality are significant reasons why vector processors have mostly disappeared from the supercomputing arena, having largely been replaced by massively parallel systems built from central processing units (CPUs) and/or graphics processing units (GPUs). The use of GPUs for nongraphical processing in high-performance systems has increased rapidly during the sixth generation of computing.