ABSTRACT

The effective treatment of large systems makes it desirable to develop new, fast, and efficient methods; in this context, parallel algorithms are of increasing importance. With the advent of parallel and pipeline computers, the differences between such algorithms have become highly significant. The development of high-speed computers makes it necessary to adapt well-known methods for solving large and complex systems and to develop new, efficient algorithms. The "traditional" von Neumann architecture is the basis for mainframes, minicomputers, and microcomputers; with current hardware technology, its floating-point performance appears to be limited to about 10 MFLOPS, which is far below supercomputer performance. The emergence of new computer architectures with different levels of parallelism calls for a detailed classification, which is essential for comparing computers. Schwartz distinguished between paracomputers and ultracomputers on the basis of their memory-access methods. Parallel computing began with array processors, which execute a single instruction simultaneously on an array of operands.