ABSTRACT

In the last three sections of Chapter 6, several MPI codes were illustrated. In this chapter a more detailed discussion of MPI will be undertaken. The eight basic MPI commands and the four collective communication subroutines mpi_bcast(), mpi_reduce(), mpi_gather() and mpi_scatter() will be studied in the first three sections. These twelve commands/subroutines form a basis for all MPI programming, but there are many additional MPI subroutines. Section 7.4 describes three methods for grouping data so as to minimize the number of calls to communication subroutines, which can have significant startup times. Section 7.5 describes other possible communicators, which are simply subsets of the processors that are allowed to communicate with one another. In the last section these topics are applied to matrix-matrix products via Fox’s algorithm. Each section has several short demonstration MPI codes, and these should be helpful to the first-time user of MPI. This chapter is a brief introduction to MPI, and the reader should also consult other texts on MPI such as P. S. Pacheco [21] and W. Gropp, E. Lusk, A. Skjellum and R. Thakur [8].
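
As a small preview of the kind of demonstration code that appears in this chapter, the following sketch (not one of the codes from the text, with illustrative variable names and an illustrative computation) combines several of the eight basic commands with the collective subroutines mpi_bcast() and mpi_reduce(): processor 0 broadcasts a real number to all processors, each processor forms a local contribution, and the contributions are summed onto processor 0.

      program collectives
      implicit none
      include 'mpif.h'
      integer :: my_rank, p, ierr
      real :: a, local_sum, total_sum
      call mpi_init(ierr)
      call mpi_comm_rank(mpi_comm_world, my_rank, ierr)
      call mpi_comm_size(mpi_comm_world, p, ierr)
!     Processor 0 defines a and broadcasts it to all processors.
      if (my_rank == 0) a = 1.0
      call mpi_bcast(a, 1, mpi_real, 0, mpi_comm_world, ierr)
!     Each processor computes its local contribution.
      local_sum = a*real(my_rank + 1)
!     The local contributions are summed onto processor 0.
      call mpi_reduce(local_sum, total_sum, 1, mpi_real, mpi_sum, 0, &
                      mpi_comm_world, ierr)
      if (my_rank == 0) print *, 'total_sum = ', total_sum
      call mpi_finalize(ierr)
      end program collectives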