ABSTRACT

This chapter provides a detailed discussion of the Message Passing Interface (MPI), including its basic commands and collective communication subroutines. These commands and subroutines form the basis for all MPI programming. The chapter explains methods for grouping data so as to minimize the number of calls to communication subroutines, which can have significant startup times. It describes other possible communicators, which are subsets of the processors that are allowed to communicate with one another. The chapter also introduces hybrid computing for nodes that are shared-memory or multicore computers: MPI is used to communicate between nodes, and OpenMP is used to program the shared-memory nodes. OpenMP is thread based, and one important parallel programming construct is the parallel do loop, which is similar to MATLAB's parfor loop.
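
As a concrete illustration of the hybrid model summarized above, the following minimal sketch combines an OpenMP parallel loop (the threads within one shared-memory node) with an MPI collective (the communication between nodes). The example itself is an illustration supplied with this summary, not code from the chapter; it is written in C, whereas the chapter's examples may use Fortran, where the corresponding directive is !$omp parallel do. The partial-sum computation and variable names are illustrative only.

    /* Hybrid MPI + OpenMP sketch: each MPI process computes a partial
       sum with OpenMP threads, then a collective combines the results.
       Compile with, for example: mpicc -fopenmp hybrid.c -o hybrid     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        /* Request thread support, since OpenMP threads run inside
           each MPI process. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n = 1000000;
        double local = 0.0, global = 0.0;

        /* OpenMP parallel loop ("parallel do" in Fortran): the threads
           on this node share the iterations of the partial sum. */
        #pragma omp parallel for reduction(+:local)
        for (int i = rank; i < n; i += size)
            local += 1.0 / (double)(i + 1);

        /* Collective communication: combine the partial sums from all
           MPI processes onto process 0. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("partial harmonic sum = %f\n", global);
        MPI_Finalize();
        return 0;
    }

In this sketch the iterations are distributed cyclically across the MPI processes, each process's OpenMP threads share its portion of the loop, and MPI_Reduce performs the single collective call that gathers the results, reflecting the chapter's point about minimizing the number of communication calls.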