ABSTRACT

The first impression one gets when phrases such as dependence analysis and automatic parallelization are mentioned is that of loop programs and array variables. This is not surprising: first, the loop is the classic repetitive structure in any programming language, and it is clearly where programs spend a significant amount of their time; second, because of the early emphasis on high-performance computing for large numerical applications (e.g., FORTRAN programs) on supercomputers, a long research effort has been underway on parallelizing such programs. The area has been active for over a quarter century, and a number of well-known texts on this topic are readily available.