WRF and WPS modules

Selected versions of the WRF and WPS executables are available as modules on Perlmutter. These modules can be loaded by first loading the contrib module, e.g.,

module load contrib
module load wrf/4.5.2
module load wps/4.5
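
Once contrib is loaded, the contributed WRF and WPS versions currently provided can be listed with module avail (a quick check; the versions shown may differ over time):

module load contrib
module avail wrf
module avail wps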

WPS module

The current WPS module is based on WPS version 4.5. After loading the WPS module,

module load contrib
module load wps/4.5

the executables geogrid.exe, ungrib.exe, and metgrid.exe become available to the user. They are built with the GNU compilers for serial processing (suitable only for light workloads, i.e., not very high-resolution, large-domain cases) with the following configuration:

  • Configure option 1. Linux x86_64, gfortran (serial)
  • GRIB2 option
  • The netcdf and other dependent libraries are based on the default modules as of January 2024.

The WPS executables require input files (e.g., Vtable) and shell scripts (e.g., link_grib.csh) provided with the WPS code. These are available on the Community File System (CFS) under the directory given by the environment variable $WPSSRC_DIR, which is set when the WPS module is loaded. The following common input files can be copied from there:

  • geogrid tables: $WPSSRC_DIR/geogrid/GEOGRID.TBL.ARW
  • Variable tables: $WPSSRC_DIR/ungrib/Variable_Tables/Vtable.XXX
  • link_grib.csh: $WPSSRC_DIR/link_grib.csh
  • metgrid tables: $WPSSRC_DIR/metgrid/METGRID.TBL.ARW

Also, most of the static data used by geogrid is available on CFS; its location is given by the environment variable $WPS_STATIC_DATA, which is also set when the wps module is loaded.
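
For example, a WPS working directory could be set up and the serial executables run as follows. This is a minimal sketch: the working directory, the choice of the GFS Vtable, and the GRIB file location are assumptions to be adapted to your case.

#set up a WPS working directory (the path is an example)
module load contrib
module load wps/4.5
mkdir -p $SCRATCH/simulation/WPS && cd $SCRATCH/simulation/WPS

#copy the namelist, tables, and helper script shipped with the WPS source
cp $WPSSRC_DIR/namelist.wps .
mkdir -p geogrid metgrid
cp $WPSSRC_DIR/geogrid/GEOGRID.TBL.ARW geogrid/GEOGRID.TBL
cp $WPSSRC_DIR/metgrid/METGRID.TBL.ARW metgrid/METGRID.TBL
cp $WPSSRC_DIR/ungrib/Variable_Tables/Vtable.GFS Vtable   #pick the Vtable matching your GRIB data
cp $WPSSRC_DIR/link_grib.csh .

#edit namelist.wps: set the domains, dates, and geog_data_path
#(point geog_data_path to the directory given by $WPS_STATIC_DATA)

#run the serial WPS steps in order
geogrid.exe
./link_grib.csh /path/to/grib/files/   #example location of the input GRIB data
ungrib.exe
metgrid.exe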

WRF module

Currently, WRF version 4.5.2 is available as a module. It is built with the GNU compilers for the Perlmutter CPU nodes with the following configuration:

  • The netcdf and other dependent libraries are based on the default modules as of January 2024.
  • debug = false
  • OpenMP and MPI parallelism (the "sm + dm" option 35)
  • parallel I/O supported through the PnetCDF (parallel netCDF) library (not through netCDF-4/HDF5 parallel I/O)
  • no quilt I/O
  • no netCDF compression via the netCDF-4 library
  • only the root MPI rank writes log files (rsl.error.0000 and rsl.out.0000)
  • em_real case
  • basic nesting (option 1)

After loading a WRF module,

module load contrib
module load wrf/4.5.2

the three executables real.exe, wrf.exe, and ndown.exe become available. The dependent libraries/modules (e.g., netCDF) are loaded automatically as well.

The WRF executables require input files (e.g., physics lookup tables, namelist) provided with the WRF code. The WRF source code is stored on CFS and is available to NERSC users; its directory path is provided through the environment variable $WRFSRC_DIR, which is set when the WRF module is loaded. For example, the physics lookup tables and the default namelist (namelist.input) can be copied from $WRFSRC_DIR/run.
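
For example, a run directory could be prepared as follows. This is a minimal sketch: the run-directory and met_em file locations are assumptions, and some physics options require additional data files from $WRFSRC_DIR/run beyond the patterns copied here.

#prepare a WRF run directory (the paths are examples)
module load contrib
module load wrf/4.5.2
mkdir -p $SCRATCH/simulation/WRF/run && cd $SCRATCH/simulation/WRF/run

#copy the default namelist and the physics lookup tables from the WRF source
cp $WRFSRC_DIR/run/namelist.input .
cp $WRFSRC_DIR/run/*.TBL $WRFSRC_DIR/run/*_DATA* .

#link the metgrid output from the WPS step (example location)
ln -sf $SCRATCH/simulation/WPS/met_em.d0* .

#edit namelist.input (domains, dates, physics, numtiles) before running real.exe and wrf.exe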

Using the WRF module, a batch job script to run a simulation (wrf.exe) looks like:

#!/bin/bash 
#SBATCH -N 1
#SBATCH -q debug
#SBATCH -t 00:30:00
#SBATCH -J test
#SBATCH -A <account>   #user needs to change this
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=<email address>  #and this
#SBATCH -L scratch,cfs
#SBATCH -C cpu
#SBATCH --ntasks-per-node=64

#load modules
module load contrib
module load wrf/4.5.2

#directory to run the simulation
rundir="/pscratch/sd/e/elvis/simulation/WRF/run" #user needs to change this

#number of OpenMP threads per MPI task
ntile=1
#the "numtiles" variable in the WRF namelist (namelist.input) needs to be set to the same value

#OpenMP settings:
export OMP_NUM_THREADS=$ntile
export OMP_PLACES=threads
export OMP_PROC_BIND=spread  #can be set to "true" when not using multiple OpenMP threads (i.e., ntile=1)

cd $rundir

#run simulation, assuming all the necessary files are in the directory
# (namelist, physics tables, initial and boundary conditions, etc.)
srun -n 64 -c 4 --cpu_bind=cores wrf.exe

#save copies of the rank-0 out and err files, tagged with the Slurm job ID
cp rsl.error.0000 rsl.error_0_$SLURM_JOBID
cp rsl.out.0000 rsl.out_0_$SLURM_JOBID
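
The job script can then be submitted with sbatch and the run monitored through the rank-0 log files in the run directory, for example (the script file name is an example):

sbatch run_wrf.sh        #submit the batch script above
squeue -u $USER          #check the job status
tail -f rsl.out.0000     #follow the rank-0 log once the job starts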