ABSTRACT

This chapter is concerned with designing algorithms for machines constructed from multiple processors. In particular, we discuss algorithms for machines in which the processors are connected to each other by some simple, systematic interconnection pattern. For example, consider a chessboard, where each square represents a processor (for example, a processor similar to one in a home computer) and every generic processor is connected to its four neighboring processors (those to the north, south, east, and west). This is an example of a mesh computer, a network of processors that is important for both theoretical and practical reasons. The focus of this chapter is on algorithmic techniques. Initially, we define some basic terminology that is used to discuss parallel algorithms and parallel architectures. Following this introductory material, we define a variety of interconnection networks, including the mesh (chessboard), which are used to allow processors to communicate with each other. We also define an abstract parallel model of computation, the PRAM, in which processors are not connected to each other but instead communicate directly with a global pool of memory that is shared amongst the processors. We then discuss several parallel programming paradigms, including the use of high-level data movement operations, divide-and-conquer, pipelining, and master-slave. Finally, we discuss the problem of mapping the structure of an inherently parallel problem onto a target parallel architecture. This mapping problem can arise in a variety of ways, and with a wide range of problem structures. In some cases, finding a good mapping is quite straightforward, but in other cases it is an NP-complete, and hence computationally intractable, problem.
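To make the chessboard picture concrete, the following short sketch (an illustration assumed here, not code from the chapter) computes the north, south, east, and west neighbors of a processor on an n × n mesh, where processors are indexed by (row, column):

```python
def mesh_neighbors(row, col, n):
    """Return the valid N/S/E/W neighbors of processor (row, col) on an n x n mesh."""
    candidates = [
        (row - 1, col),  # north
        (row + 1, col),  # south
        (row, col + 1),  # east
        (row, col - 1),  # west
    ]
    # Edge and corner processors have fewer than four neighbors,
    # since candidates falling off the board are discarded.
    return [(r, c) for r, c in candidates if 0 <= r < n and 0 <= c < n]
```

An interior processor has exactly four neighbors, an edge processor three, and a corner processor two, which is why the mesh is described as a simple, systematic interconnection pattern.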