ABSTRACT

The history of computing has been marked by an inevitable march toward higher-performance computing devices. As the physical limits of electronic semiconductors reach submicron feature sizes and current flow is limited to a few hundred electrons, the search for increased speed and performance has shifted from the high-performance specialized processors of past generations of supercomputers to so-called massively parallel computers built from commodity CPU chips. These commodity chips have evolved much more rapidly than specialized processors because their large installed base allows economies of scale in development. Parallel computation is an efficient form of computing that emphasizes the exploitation of concurrent events. Even the simplest microcomputers have some elements of concurrency, such as simultaneous memory fetches, arithmetic processing, and input/output. Generally, however, the term "parallel computation" is reserved for machines, or clusters of machines, that have multiple arithmetic/logical processing units and can carry out simultaneous arithmetic/logical operations. The architectures that, when implemented, lead to parallel machines can be grouped into three generic forms: pipeline processors, array processors, and concurrent multiprocessors.