Open MPI

The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from across the High Performance Computing community to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers, and computer science researchers.

This page is primarily intended for NERSC users who wish to use Open MPI on Perlmutter; however, the instructions are general enough to be largely applicable to other Cray EX systems running Slurm.

Using Open MPI on the NERSC Perlmutter system

Note

Only Open MPI 5.0.0 and newer are supported on Perlmutter when using the Slingshot 11 CXI libfabric provider. Older versions may work, but they are not tested by the Open MPI developer community on HPE Cray EX systems.

Compiling

Load the Open MPI module to pick up the package's compiler wrappers, the mpirun launch command, and other utilities.

The following will load the default package:

module load openmpi

Open MPI is available for use with the following Cray programming environments:

  • PrgEnv-gnu
  • PrgEnv-nvidia
  • PrgEnv-llvm

The module file will detect which compiler environment you have loaded and load the appropriately built Open MPI package.
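
For example, to build against the GNU environment (assuming the standard Cray PE module names):

module load PrgEnv-gnu
module load openmpi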

The simplest way to compile your application when using Open MPI is via the MPI compiler wrappers, e.g.

mpicc -o my_c_exec my_c_prog.c
mpif90 -o my_f90_exec my_f90_prog.f90

Extra compiler options are passed through to the back-end compiler just as if you were invoking that compiler directly (i.e., not via the Cray wrappers). Note that by default the Open MPI compiler wrappers build dynamic executables.
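
For example, optimization and debugging flags are forwarded unchanged to the underlying compiler:

mpicc -O2 -g -o my_c_exec my_c_prog.c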

Note

As the Cray MPICH distribution now includes MPI compiler wrappers, one needs to be extra careful that the right mpicc, etc., is being used. In particular, double-check that job scripts have not reloaded the cray-mpich module, which would place the Cray MPICH mpicc in the PATH ahead of the Open MPI compiler wrapper.
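
A quick sanity check is to ask where mpicc resolves and, since the Open MPI wrappers accept the --showme option, to print the underlying compiler invocation:

which mpicc
mpicc --showme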

Job Launch

There are two ways to launch applications compiled against Open MPI on Perlmutter: the Open MPI-supplied mpirun launcher, or Slurm's srun launcher (termed "native launch" in some Open MPI documentation), e.g.

salloc -N 6 --ntasks-per-node=32 -C cpu
srun --mpi=pmix -n 192 ./my_c_exec

or

salloc -N 6 --ntasks-per-node=32 -C cpu
mpirun -np 192 ./my_c_exec

For srun-launched jobs, one can also use the Slurm environment variable SLURM_MPI_TYPE to set the MPI launch type. Open MPI 5.0.0 and newer work only with the pmix flavor.
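
For example:

export SLURM_MPI_TYPE=pmix
srun -n 192 ./my_c_exec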

If you wish to use srun, use the same srun options as you would if your application were compiled and linked against the vendor's MPI implementation.
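
A minimal batch script might look like the following sketch (the time limit is a placeholder, and site-specific options such as account or QOS are omitted):

#!/bin/bash
#SBATCH -N 6
#SBATCH --ntasks-per-node=32
#SBATCH -C cpu
#SBATCH -t 00:10:00

module load openmpi
srun --mpi=pmix -n 192 ./my_c_exec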

(See also our running jobs example for Open MPI).

See the mpirun man page for details about its command line options; mpirun --help prints a summary as well.

Note that if you wish to use MPI dynamic process functionality such as MPI_Comm_spawn, or MPI-4 Sessions functions like MPI_Comm_create_from_group, you must use mpirun to launch the application.
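
As an illustration, the following minimal sketch uses the MPI-4 Sessions interface to create a communicator from the built-in mpi://WORLD process set without calling MPI_Init (launch it with mpirun as noted above):

#include <mpi.h>
#include <stdio.h>

int main(void)
{
    MPI_Session session;
    MPI_Group group;
    MPI_Comm comm;
    int rank, size;

    /* Initialize a session instead of calling MPI_Init */
    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

    /* mpi://WORLD is a built-in process set containing all launched processes */
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);

    /* The string tag must match on all participating processes */
    MPI_Comm_create_from_group(group, "example-tag", MPI_INFO_NULL,
                               MPI_ERRORS_RETURN, &comm);

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    printf("rank %d of %d\n", rank, size);

    MPI_Group_free(&group);
    MPI_Comm_free(&comm);
    MPI_Session_finalize(&session);
    return 0;
}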

Using Java Applications with Open MPI

Open MPI supports a Java interface. Note that this interface has not been standardized by the MPI Forum. Information on how to use Open MPI's Java interface is available in the Open MPI Java FAQ.
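
As a sketch, a minimal Java MPI program looks like the following; the mpi.MPI class and the mpijavac wrapper ship only with Open MPI builds configured with the Java bindings enabled, so check the FAQ for availability:

import mpi.*;

public class Hello {
    public static void main(String[] args) throws MPIException {
        // Initialize the Java bindings (wraps MPI_Init)
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int size = MPI.COMM_WORLD.getSize();
        System.out.println("rank " + rank + " of " + size);
        MPI.Finalize();
    }
}

Compile with mpijavac Hello.java and launch with, e.g., mpirun -np 4 java Hello.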