ABSTRACT

With the emergence of low-cost, high-performance microprocessors such as the Transputer and the Intel i860, the design of parallel systems has received a dramatic impetus, and this process is bound to gain additional momentum with further advances in VLSI technology and computer networks. Parallel systems are designed with the basic intention of reducing the execution time of a program and increasing system throughput; of secondary importance are safety, expandability, redundancy, and meeting real-time constraints. The basic purposes for which these systems are built, namely faster execution and increased throughput, can be achieved by exploiting parallelism at various levels: (a) instruction level, (b) statement level, (c) process level, and (d) job level. Based on the activities at these different levels, various structures of parallel systems have been proposed: (a) array processors, (b) data-flow machines, (c) pipelined and vector processors, (d) associative processors, (e) multicomputer and multiprocessor systems, (f) local networks and, more recently, neural nets, and (g) connection machine models [1].