ABSTRACT

The theory of parallel computation is concerned with the development and analysis of parallel computing models, the techniques for solving and classifying problems on such models, and the implications of that work. Even though we have seen dramatic increases in computer performance, humans are adept at inventing problems for which a single processor is simply too slow. Problem complexity has increased significantly over the past 20 years. The number of computational operations required to solve some of these problems is mind-boggling, yet problems and their data sizes continue to grow rapidly. Only by using many processors in parallel is there any hope of quickly solving such problems. As a result of increased problem complexity, inherent technological limits on individual processor speeds, and economic factors, the first decade of the twenty-first century has seen a resurgence in parallel computing [1]. In principle, having more processors working on a problem means it can be solved much faster. Ideally, we expect a speedup of the following form:

Parallel Time = Sequential Time / Number of Processors
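As a concrete illustration of this formula (not part of the original text), the following minimal Python sketch computes the ideal parallel time for a given processor count and compares it against a hypothetical measured parallel time; the values of t_seq, p, and t_actual are invented for the example.

```python
def ideal_parallel_time(sequential_time: float, num_processors: int) -> float:
    """Ideal parallel time under perfect speedup: T_seq / p."""
    return sequential_time / num_processors

def speedup(sequential_time: float, parallel_time: float) -> float:
    """Observed speedup: how many times faster the parallel run is."""
    return sequential_time / parallel_time

if __name__ == "__main__":
    t_seq = 1000.0      # hypothetical sequential running time (seconds)
    p = 64              # hypothetical number of processors
    t_ideal = ideal_parallel_time(t_seq, p)   # 15.625 s
    t_actual = 25.0     # hypothetical measured parallel time (seconds)

    # Ideal speedup equals p (64x); the measured run achieves only 40x,
    # illustrating the gap between the formula above and practice.
    print(f"ideal time:  {t_ideal:.3f} s  (speedup {speedup(t_seq, t_ideal):.1f}x)")
    print(f"actual time: {t_actual:.3f} s  (speedup {speedup(t_seq, t_actual):.1f}x)")
```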

In practice, parallel algorithms seldom attain ideal speedup,∗ are more complex to design, and are more difficult to implement than single-processor algorithms. The designer of a parallel algorithm must grapple with these fundamental questions:

1. What parallel resources are available?
2. What kind of speedup is desired?
3. What machine architecture should be used?
4. How can the problem be decomposed to exploit parallelism?
5. Is the problem even amenable to a parallel attack?