ABSTRACT

Since this book is about parallelism, we need to define it and explain how it differs from concurrency. Concurrency occurs naturally in any program that must interact with its environment: multiple threads handle different aspects of the program, and one can wait for input while another computes. This behavior arises even on a single core, and successful concurrency requires fair scheduling between threads. Parallelism is defined differently: the threads are intended to execute together, and the primary goal is speedup. The hope is that by adding n cores, the program will run n times faster. Sometimes this goal can almost be achieved, but there are spectacular failures as well [1], and many books and papers discuss how to avoid them [2-4]. Parallelism uses unfair scheduling to keep all cores loaded as much as possible. This chapter describes new libraries, incorporated into familiar languages, that help avoid these pitfalls by representing common patterns at a reasonably high level of abstraction. The value of using these libraries is that when a program needs speeding up, the programming is easier, the resulting code is shorter, and there is less chance of making mistakes. On the other hand, when these techniques are applied to small tasks, the overhead will most likely make the program run slower.
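As an illustration of the kind of high-level pattern such libraries provide, consider a parallel map. The sketch below uses Python's standard `concurrent.futures` module purely as an example; it is not one of the specific libraries discussed in this chapter, and `square` is a hypothetical stand-in for real work.

```python
# A minimal sketch of a high-level parallel pattern: a parallel map.
# concurrent.futures is used here only for illustration; the chapter's
# own libraries are not assumed.
from concurrent.futures import ProcessPoolExecutor

def square(n):
    """A stand-in for a CPU-bound task worth parallelizing."""
    return n * n

if __name__ == "__main__":
    # pool.map distributes the calls across worker processes, yet the
    # code reads almost like an ordinary sequential map.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(square, range(10)))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Note that for a task this small the process-management overhead dwarfs the computation itself, echoing the caveat above: such patterns pay off only when each task carries substantial work.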