
Perlmutter Software

Environment

Lmod - modules

Perlmutter uses Lmod, a Lua-based module system, to manage and dynamically change users' environment settings, such as which directories are included in the PATH environment variable.

Detailed Usage Guide for Lmod at NERSC
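For quick orientation, a few common Lmod commands (the module name is illustrative):

module list                  # show currently loaded modules
module spider cudatoolkit    # search for a package and list its versions
module load cudatoolkit      # load the default version
module unload cudatoolkit    # remove it from the environment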

NERSC Defaults

NERSC provides gpu and cpu modules that load recommended defaults for compiling on GPUs and CPUs, respectively. We may update these modules as new information or recommendations are made. You can use module show <gpu/cpu> to see what each module does. You can only have one of these modules loaded at a time and loading one unloads the other. Currently, the gpu module is loaded by default.
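For example, to inspect the gpu module's contents and then switch to the cpu defaults:

module show gpu
module load cpu    # automatically unloads the gpu module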

Older Environments

Generally we recommend that you use the most recent programming environment installed on Perlmutter. However, sometimes it is convenient to have access to previous programming environments to check things like compile options and libraries. You can use module load cpe/YY.XX to load the previous programming environment from year YY and month XX. We will remove cpe modules for environments that no longer work on our system due to changes in underlying dependencies like network libraries.
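For example (the version shown is illustrative; use module spider cpe to list what is actually installed):

module spider cpe
module load cpe/22.12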

Please keep in mind that these cpe modules are offered as a convenience. If you require reproducibility across environments, we encourage you to investigate container-based options like Shifter.

Spack

NERSC provides the Spack package manager via a modulefile. This spack instance is preconfigured by NERSC to integrate with the Perlmutter software environment.

In order to use the default Spack instance

module load spack

In order to use the E4S instance

module load e4s

The Spack documentation provides an overview of how to use Spack, and Spack at NERSC provides details of the NERSC-provided instance along with additional recommendations for how to use Spack effectively at NERSC.
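A minimal sketch of using the default instance (the package name zlib is purely illustrative):

module load spack
spack spec zlib       # preview how Spack would build the package
spack install zlib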

Finding Software

NERSC provides a wide array of software: compilers, libraries, frameworks, profilers, debuggers, utilities and applications.

Typically these are available via the module system.

Tip

Always use module spider <search term> instead of module avail <search term> on Perlmutter to search for software.
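For example (package name illustrative):

module spider hdf5           # list all hdf5 versions known to the module system
module spider hdf5/1.12.2    # show what must be loaded before a specific version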

See finding software for further details about finding pre-installed software.

Packaging

Containers

Containers provide a stable, fixed environment for your software that greatly reduces the dependency on any details of NERSC's software stack while retaining, and in some cases improving, performance.

If your application depends on MPI then NERSC recommends installing MPICH inside the container from source (i.e. not from apt-get) with shared libraries. This allows your software to use the ABI-compatible Cray MPICH, which has been optimized for the Perlmutter network.
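A minimal Dockerfile sketch of this approach (the base image, MPICH version, and configure options are illustrative assumptions, not a NERSC-provided recipe):

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y build-essential gfortran wget ca-certificates

# Build MPICH from source with shared libraries so the ABI-compatible
# Cray MPICH can replace it at run time.
RUN wget https://www.mpich.org/static/downloads/4.1.2/mpich-4.1.2.tar.gz \
    && tar xzf mpich-4.1.2.tar.gz \
    && cd mpich-4.1.2 \
    && ./configure --enable-shared --prefix=/usr/local \
    && make -j4 install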

Best Practice

When installing software on NERSC's filesystems it is recommended to use the /global/common/software/<your NERSC project> directories. This filesystem is optimized for sharing software.

Compilers

Cray, AMD, NVIDIA, LLVM and GNU compilers are available through modules on Perlmutter.

AMD and LLVM compilers not currently covered

Content related to these compilers is under development and will be added when ready.

Cray provides PrgEnv-<compiler vendor> modules that load the components of a specific toolchain, including MPI and LibSci (man intro_libsci). Included with these modules are compiler wrappers (similar to mpicc) that add additional link options and compiler flags:

  • cc for C
  • CC for C++
  • ftn for Fortran

Tip

Use the Cray compiler wrappers (cc, CC and ftn) whenever possible.
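For example (file names illustrative):

cc  -O2 -o app_c   main.c      # C, using whichever compiler the loaded PrgEnv selects
CC  -O2 -o app_cxx main.cpp    # C++
ftn -O2 -o app_f   main.f90    # Fortran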

| Vendor | PrgEnv        | Module | Language | Wrapper | Compiler                  |
|--------|---------------|--------|----------|---------|---------------------------|
| GNU    | PrgEnv-gnu    | gcc    | C        | cc      | ${GCC_PATH}/bin/gcc       |
| GNU    | PrgEnv-gnu    | gcc    | C++      | CC      | ${GCC_PATH}/bin/g++       |
| GNU    | PrgEnv-gnu    | gcc    | Fortran  | ftn     | ${GCC_PATH}/bin/gfortran  |
| NVIDIA | PrgEnv-nvidia | nvidia | C        | cc      | nvc                       |
| NVIDIA | PrgEnv-nvidia | nvidia | C++      | CC      | nvc++                     |
| NVIDIA | PrgEnv-nvidia | nvidia | Fortran  | ftn     | nvfortran                 |
| HPE    | PrgEnv-cray   | cce    | C        | cc      | craycc                    |
| HPE    | PrgEnv-cray   | cce    | C++      | CC      | crayCC                    |
| HPE    | PrgEnv-cray   | cce    | Fortran  | ftn     | crayftn                   |

Linking

Warning

Static compilation is not officially supported by NERSC; dynamic linking is the default.

Languages

The tables below list the recommended compilers and programming models for targeting GPUs. Further details can be found in the NERSC Perlmutter Readiness Guide.

C

| PrgEnv        | Programming Model |
|---------------|-------------------|
| PrgEnv-gnu    | CUDA              |
| PrgEnv-nvidia | OpenMP            |
| PrgEnv-nvidia | CUDA              |

C++

| PrgEnv        | Programming Model |
|---------------|-------------------|
| PrgEnv-gnu    | CUDA              |
| PrgEnv-gnu    | Kokkos            |
| PrgEnv-gnu    | HPX               |
| PrgEnv-nvidia | OpenMP            |
| PrgEnv-nvidia | CUDA              |
| PrgEnv-nvidia | stdpar            |

Fortran

| PrgEnv        | Programming Model |
|---------------|-------------------|
| PrgEnv-gnu    | CUDA              |
| PrgEnv-gnu    | Kokkos            |
| PrgEnv-nvidia | OpenMP            |
| PrgEnv-nvidia | OpenACC           |
| PrgEnv-nvidia | stdpar            |

Python

NERSC provides a Python installation based on Anaconda Python. See the NERSC page on using Python on Perlmutter for details about how to manage environments and target the Perlmutter GPUs.
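As a minimal sketch, a custom conda environment might be created on top of the NERSC module like this (package list illustrative; see the linked page for current recommendations):

module load python
conda create -n myenv python numpy scipy -y
conda activate myenv    # or 'source activate myenv', depending on shell setup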

MPI

Cray MPICH is a CUDA-aware MPI implementation, allowing programmers to pass pointers to buffers in GPU memory directly to MPI API routines.
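The following minimal sketch illustrates the idea, passing a device pointer directly to MPI (error handling omitted; this is illustrative, not the official NERSC example):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *d_buf;  /* buffer allocated in GPU memory */
    cudaMalloc((void **)&d_buf, 1024 * sizeof(double));

    /* With CUDA-aware MPI, the device pointer is passed directly;
       no staging copy to host memory is required. */
    if (rank == 0)
        MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}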

To use CUDA-aware MPI, the feature must be enabled at both compile time and run time:

At compile time the HPE GPU Transport Layer (GTL) libraries must be linked. To enable this, take at least one of the following actions:

  1. Keep the default gpu module loaded
  2. Load the cudatoolkit and craype-accel-nvidia80 modules
  3. Set the environment variable CRAY_ACCEL_TARGET=nvidia80
  4. Pass the compiler flag -target-accel=nvidia80

At run time, MPICH_GPU_SUPPORT_ENABLED=1 must be set. If it is not set, errors similar to the following will occur:

MPIDI_CRAY_init: GPU_SUPPORT_ENABLED is requested, but GTL library is not linked
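Putting both requirements together, a typical compile-and-run sequence might look like this (sketch; file name and srun options illustrative):

cc -o gpu_ping gpu_ping.c            # with the default gpu module loaded, GTL is linked
export MPICH_GPU_SUPPORT_ENABLED=1
srun -n 2 ./gpu_ping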

For further details about Cray MPICH, see the manpage on Perlmutter: man intro_mpi.

Programming Models

OpenMP

| Vendor | PrgEnv        | Language(s)   | OpenMP flag |
|--------|---------------|---------------|-------------|
| GNU    | PrgEnv-gnu    | C/C++/Fortran | -fopenmp    |
| NVIDIA | PrgEnv-nvidia | C/C++/Fortran | -mp         |
| HPE    | PrgEnv-cray   | C/C++         | -fopenmp    |
| HPE    | PrgEnv-cray   | Fortran       | -homp       |

See the overview of OpenMP at NERSC for a summary of key features, links to NERSC training and other useful references.

GPU Offload

GPU target must be set

Either load the cudatoolkit and craype-accel-nvidia80 modules or use the "target" flags below.

PrgEnv-nvidia

Use the -mp=gpu option.
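For example (file name illustrative):

cc -mp=gpu -o omp_offload omp_offload.c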

Details of OpenMP support and programming recommendations are available in the OpenMP section of the Perlmutter Readiness Guide.

PrgEnv-cray
| Language(s) | OpenMP flag | Target flag(s)                                                |
|-------------|-------------|---------------------------------------------------------------|
| C/C++       | -fopenmp    | -fopenmp-targets=nvptx64 -Xopenmp-target=nvptx64 -march=sm_80 |
| Fortran     | -homp       | -hnoacc -haccel=nvidia80                                      |

linker warning

The following linker warning can be safely ignored:

warning: linking module '@': linking module flags 'SDK Version': IDs have conflicting values ('[2 x i32] [i32 11, i32 1]' from /opt/cray/pe/cce/14.0.1/cce-clang/x86_64/lib/libcrayomptarget-nvptx-sm_80.bc with '[2 x i32] [i32 11, i32 5]' from ../benchmarks/babelStream/main.cpp) [-Wlinker-warnings]

Target selection error with Fortran

For PrgEnv-cray Fortran applications with OpenMP offload, target selection errors can be avoided by

  1. unsetting the CRAYPE_ACCEL_TARGET variable
  2. manually specifying the target architecture: -homp -hnoacc -haccel=nvidia80
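A sketch of the workaround (file names illustrative):

unset CRAYPE_ACCEL_TARGET
ftn -homp -hnoacc -haccel=nvidia80 -o app app.f90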

mixed C/C++ and Fortran applications

Use -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target=nvptx64 -march=sm_80 to manually specify the target architecture for C/C++ code.

PrgEnv-gnu

PrgEnv-gnu is not recommended for OpenMP offloading.

To access a gcc build with offload support:

ml use /global/cfs/cdirs/m1759/wwei/Modules/perlmutter/modulefiles/
ml gcc/13.1.0

An example of compiling your code:

gcc -Ofast -fopenmp -foffload=nvptx-none="-Ofast -lm -latomic -misa=sm_80"  main.c -o main

OpenACC

| Vendor | PrgEnv        | Language(s)   | OpenACC flag |
|--------|---------------|---------------|--------------|
| NVIDIA | PrgEnv-nvidia | C/C++/Fortran | -acc         |

GPU Offload

GPU target must be set

Either load the cudatoolkit and craype-accel-nvidia80 modules or use the "target" flags below.

PrgEnv-nvidia

Use the -acc=gpu option.
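For example (file name illustrative):

ftn -acc=gpu -o acc_app acc_app.f90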

CUDA

On Perlmutter, CUDA is available via the cudatoolkit modules. The toolkit modules contain GPU-accelerated libraries, profiling tools (Nsight Compute and Nsight Systems), debugging tools (cuda-gdb and cuda-memcheck), a runtime library, and the nvcc CUDA compiler.

NVIDIA maintains extensive documentation for CUDA toolkits.

PrgEnv-nvidia

The host compilers nvc/nvc++ (accessible through the cc/CC wrappers) in the NVIDIA SDK have opt-in CUDA support. To compile single-source C/C++ code (host and device code in the same source file) with the Cray wrappers, you must add the -cuda flag to the compilation step, which instructs nvc/nvc++ to accept CUDA runtime APIs. Omitting the -cuda flag will result in your application compiling without any of the CUDA API calls and will generate an executable with undefined behavior.
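For example, compiling a single-source C++ file that calls the CUDA runtime (file name illustrative):

CC -cuda -o single_source single_source.cpp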

PrgEnv-gnu

When using the PrgEnv-gnu environment in conjunction with the cudatoolkit module (i.e., when compiling any application for both host and device), note that not every version of gcc is compatible with every version of nvcc; consult the list of supported host compilers for each nvcc installation.
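A common pattern is to compile device code with nvcc and link with the Cray wrapper (sketch; file names and flags illustrative, and the CUDA library path may need to be supplied explicitly):

module load PrgEnv-gnu cudatoolkit
nvcc -arch=sm_80 -c kernels.cu           # gcc serves as nvcc's host compiler
cc -o app main.c kernels.o -lcudart      # link with the wrapper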

Versions

NERSC generally aims to make the latest versions of cudatoolkit available. In some cases a specific version other than what is installed is needed.

In this situation one should first check whether the needed version is compatible. Generally, CUDA is forward compatible: for example, code written for CUDA 11.3 should work with CUDA 11.7.

See the CUDA Compatibility Document which describes the details.

If this is not an option, consider using containers through Shifter to obtain the specific desired CUDA version.
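For example, pulling an NVIDIA CUDA image with Shifter (image tag illustrative):

shifterimg pull docker:nvidia/cuda:11.7.1-devel-ubuntu22.04
shifter --image=docker:nvidia/cuda:11.7.1-devel-ubuntu22.04 nvcc --version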

UPC++

On Perlmutter the UPC++ PGAS library is available as user-managed software, contributed by the UPC++ library maintainers at LBNL.

Also see UPC++ documentation for Perlmutter for site-specific details and recommendations regarding use of UPC++ at NERSC.