ABSTRACT

Parallel programming is currently a difficult task. However, it does not have to be that way. Current methods tend to be coarse-grained and use either a shared-memory or a message-passing model. These methods often require the programmer to account for details of memory layout or architectural implementation, leading the 2003 NSF Blue-Ribbon Advisory Panel on Cyberinfrastructure to opine that, to many users, programming existing parallel computers is still “as intimidating and time consuming as programming in assembly language.” Consequently, the reach of parallel computing has, to date, fallen short of historical expectations. With the ongoing transition to chip multiprocessors (multi-cores), industry considers the parallel programming problem a key bottleneck for progress in the commodity processor space. In recent decades, thousands of papers have been written on algorithmic models that accommodate a simple representation of concurrency. This effort brought about a fierce debate among several schools of thought, with one of them, the “PRAM approach,” emerging as the clear winner of this “battle of ideas.” Three of the main standard undergraduate computer science texts published in 1988-90 [5,18,43] included large chapters on PRAM algorithms, and the PRAM was the model of choice for parallel algorithms in all major algorithms/theory communities and was taught everywhere. This win did not register in the collective memory as the clear and decisive victory it was, because (i) at about the same time (the early 1990s), it became clear that 1990s technology could not build a machine that looks to the performance programmer like a PRAM; (ii) the claim that the PRAM approach could never become useful turned into a mantra, and PRAM-related work came to a near halt; and (iii) apparently, most people took the extra leap of accepting this mantra.
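
To illustrate what a “simple representation of concurrency” means under the PRAM approach, the following is a minimal sketch, not taken from this paper: a single PRAM-style time step in which each array index is conceptually handled by its own virtual processor. Rendering it in C with OpenMP, and the function name vector_add, are assumptions made purely for illustration; the PRAM itself is an abstract model, not an API.

    /* Minimal sketch: one PRAM-style step, each index i conceptually
     * handled by its own virtual processor. OpenMP is an illustrative
     * stand-in for the abstract model; compile with -fopenmp (gcc/clang).
     * Without -fopenmp the pragma is ignored and the code runs serially. */
    #include <stdio.h>

    /* All iterations are independent, as a PRAM step requires, so they
     * may all execute in the same conceptual time unit. */
    static void vector_add(const int *a, const int *b, int *c, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        enum { N = 8 };
        int a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }
        vector_add(a, b, c, N);
        for (int i = 0; i < N; i++)
            printf("%d ", c[i]);    /* prints: 0 3 6 9 12 15 18 21 */
        printf("\n");
        return 0;
    }

The point of the abstraction is that the programmer states only what may be done concurrently at each step; how processors, caches, or messages realize that step is left to the implementation.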