Note that the upstream MPICH module is still experimental and under active development. One important feature still missing is integration with the Cray compiler wrappers, so please update your build instructions accordingly.

The MPICH project is a widely portable, open-source implementation of the MPI-4.0 standard developed at Argonne National Laboratory.

This wiki is primarily intended for NERSC users who wish to use upstream MPICH on Perlmutter.


Currently the mpich module is built with the GNU compiler suite available with PrgEnv-gnu (the default PE). It supports C, C++, and Fortran applications. The mpich module is built with CUDA support, where the CUDA version matches the default cudatoolkit available with PrgEnv-gnu.

On Perlmutter, the following command loads the default mpich module. The default module is the latest release of MPICH, i.e., version 4.1.1 at the time of writing.

module load mpich/4.1.1


Load the mpich module to get access to the compilers and library paths. The Cray compiler wrappers cc, CC, and ftn cannot currently be used with the mpich module; compile instead with mpicc, mpicxx, and mpifort. To include the MPI headers, use the environment variable MPICH_INCLUDE_PATH. The installation path is available in the environment variable MPICH_ROOT.

A small example of compiling a C++ application:

mpicxx -I$MPICH_INCLUDE_PATH foo.cpp -o foo.ex -lmpi
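For reference, the foo.cpp in the compile command above could be a minimal MPI program such as the following sketch (illustrative only; any valid MPI source compiles the same way):

```cpp
// foo.cpp -- minimal MPI "hello world" sketch for the compile example above
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this task's rank
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of MPI tasks

    std::printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Each MPI task launched by srun prints one line identifying its rank.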


Use the same srun commands as for cray-mpich to run the compiled executable, for example:

srun -N2 --ntasks-per-node=4 --gpus-per-task=1 ./foo.ex

Known Issues

The following error messages always appear at the end of a run:

Wed May 10 08:28:13 2023: [PE_6]:_pmi2_kvs_get:key [-NONEXIST-KEY] was not found.
Wed May 10 08:28:13 2023: [PE_6]:PMI2_KVS_Get:_pmi2_kvs_get failed

These messages appear even on successful runs; expect two such messages per task.