VASP is a package for performing ab initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set. The approach implemented in VASP is based on a finite-temperature local-density approximation (with the free energy as variational quantity) and an exact evaluation of the instantaneous electronic ground state at each MD step using efficient matrix diagonalization schemes and an efficient Pulay mixing scheme.
## Availability and Supported Architectures at NERSC
VASP is available at NERSC as a provided support level package for users who have an active VASP license. To gain access to the VASP binaries at NERSC through an existing VASP license, please fill out the VASP License Confirmation Request. You can also access this form via the NERSC Help Desk (Open Request -> VASP License Confirmation Request).
If your VASP license was purchased from VASP Software GmbH, the license owner (usually your PI) should have registered you under their license at the VASP Portal before you fill out the form.
It may take several business days from when the form is submitted to when access to NERSC-provided VASP binaries is granted.
When your VASP license is confirmed, NERSC will add you to a unix file group:

- `vasp5` for VASP 5, and
- `vasp6` for VASP 6.

You can check if you have VASP access at NERSC via the `groups` command. If you are in the `vasp5` file group, you can access the VASP 5 binaries provided at NERSC. If you are in the `vasp6` file group, you can access both VASP 5 and VASP 6 binaries.
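The group check above can be sketched as follows. This is a minimal illustration: the `has_vasp_access` helper is ours, not a NERSC-provided command; only the `vasp5`/`vasp6` group names come from the text above.

```shell
# Hypothetical helper: report which VASP file groups appear in a
# space-separated list of group names (as printed by `groups` or `id -nG`).
has_vasp_access() {
    for g in vasp5 vasp6; do
        case " $1 " in
            *" $g "*) echo "member of $g" ;;
        esac
    done
}

# Check the current account's groups:
has_vasp_access "$(id -nG)"
```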
VASP 6 supports GPU execution and is available on both Perlmutter GPU and Perlmutter CPU nodes; VASP 5 is available for Perlmutter CPU nodes only. Use the `module avail vasp` command to see a full list of available sub-versions.
## Application Information, Documentation, and Support
See the developers page for information about VASP, including links to documentation, workshops, tutorials, and other information. Instructions for building the code and preparing input files can be found in the VASP Online Manual. For troubleshooting, see the VASP users forum for technical help and support-related questions; see also the list of known issues. For help with issues specific to the NERSC module, please file a support ticket.
## Using VASP at NERSC
We provide multiple VASP builds for users. Use the `module avail vasp` command to see which versions are available, and `module load vasp/<version>` to load the environment. For example, these were the available modules as of 05/23/2023:
```
perlmutter$ module avail vasp

----------------- /global/common/software/nersc/pm-2022.12.0/extra_modulefiles -----------------
   mvasp/5.4.4-cpu          vasp-tpc/6.3.2-cpu    vasp/5.4.4-cpu (D)    vasp/6.3.2-gpu
   vasp-tpc/5.4.4-cpu (D)   vasp-tpc/6.3.2-gpu    vasp/6.2.1-gpu        vasp/6.4.1-cpu
   vasp-tpc/6.2.1-gpu       vasp/5.4.1-cpu        vasp/6.3.2-cpu        vasp/6.4.1-gpu

  Where:
   D:  Default Module
```
The modules with "6.x.y" in their version strings are official releases of hybrid MPI+OpenMP VASP, which are available to users who have VASP 6 licenses. The `vasp-tpc` ("tpc" stands for Third Party Codes) modules are custom builds incorporating commonly used third-party contributed codes, e.g., VTST from the University of Texas at Austin, Wannier90, BEEF, and VASPSol. On Perlmutter, the "cpu" and "gpu" version strings indicate builds that target Perlmutter's CPU and GPU nodes, respectively. The current default on Perlmutter is `vasp/5.4.4-cpu` (VASP 5.4.4 with the latest patches), which you can load with:
```
perlmutter$ module load vasp
```
To use a non-default module, provide the full module name:

```
perlmutter$ module load mvasp/5.4.4-cpu
```
The `module show` command shows the effect a VASP module has on your environment, e.g.,
```
perlmutter$ module show mvasp/5.4.4-cpu
----------------------------------------------------------------------------------------------------
   /global/common/software/nersc/pm-2022.12.0/extra_modulefiles/mvasp/5.4.4-cpu.lua:
----------------------------------------------------------------------------------------------------
help([[This is an MPI wrapper program for VASP 5.4.4.pl2 (enabled Wannier90 1.2)
to run multiple VASP jobs with a single srun.

VASP modules are available only for the NERSC users who already have an
existing VASP license. In order to gain access to the VASP binaries at NERSC
through an existing VASP license, please fill out the VASP License
Confirmation Request form at https://help.nersc.gov
(Open Request -> VASP License Confirmation Request).
]])
whatis("Name: VASP")
whatis("Version: 5.4.4")
whatis("URL: https://docs.nersc.gov/applications/vasp/")
whatis("Description: MPI wrapper for running many VASP jobs with a single srun")
setenv("PSEUDOPOTENTIAL_DIR","/global/common/software/nersc/pm-stable/sw/vasp/pseudopotentials")
setenv("VDW_KERNAL_DIR","/global/common/software/nersc/pm-stable/sw/vasp/vdw_kernal")
setenv("NO_STOP_MESSAGE","1")
setenv("MPICH_NO_BUFFER_ALIAS_CHECK","1")
prepend_path("LD_LIBRARY_PATH","/opt/nvidia/hpc_sdk/Linux_x86_64/22.5/compilers/extras/qd/lib")
prepend_path("LD_LIBRARY_PATH","/opt/nvidia/hpc_sdk/Linux_x86_64/22.5/compilers/lib")
prepend_path("LD_LIBRARY_PATH","/opt/nvidia/hpc_sdk/Linux_x86_64/22.5/math_libs/11.5/lib64")
prepend_path("LD_LIBRARY_PATH","/global/common/software/nersc/pm-stable/sw/vasp/5.4.4-mpi-wrapper/milan/nvidia-22.5/lib")
prepend_path("PATH","/global/common/software/nersc/pm-stable/sw/vasp/vtstscripts/3.1")
prepend_path("PATH","/global/common/software/nersc/pm-stable/sw/vasp/5.4.4-mpi-wrapper/milan/nvidia-22.5/bin")
```
This vasp module adds the path to the VASP binaries to your search path and sets a few environment variables: `PSEUDOPOTENTIAL_DIR` and `VDW_KERNAL_DIR` point to the locations of the pseudopotential files and the `vdw_kernel.bindat` file used in dispersion calculations, respectively. The OpenMP and MKL environment variables are set for optimal performance.
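A common use of `PSEUDOPOTENTIAL_DIR` is assembling a POTCAR by concatenating one per-element POTCAR per species, in POSCAR order. This is only a sketch: the `make_potcar` helper and the `<set>/<element>/POTCAR` directory layout are assumptions for illustration, not part of the module.

```shell
# Sketch: concatenate per-element POTCAR files into one POTCAR.
# Assumes a layout like $PSEUDOPOTENTIAL_DIR/<set>/<element>/POTCAR (hypothetical).
make_potcar() {
    potdir=$1; shift
    : > POTCAR                      # start with an empty POTCAR
    for el in "$@"; do
        cat "$potdir/$el/POTCAR" >> POTCAR
    done
}

# e.g., for an SiO2 POSCAR listing Si then O:
# make_potcar "$PSEUDOPOTENTIAL_DIR/<set>" Si O
```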
Each VASP module provides three different binaries:

- `vasp_gam` - the gamma-point-only build
- `vasp_ncl` - the non-collinear spin binary
- `vasp_std` - the standard k-point binary

One must choose the appropriate binary for the corresponding job.
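That choice can be illustrated with a small sketch. The `pick_vasp_binary` helper is hypothetical (in practice you read the k-point mesh from your KPOINTS file and the spin treatment from your INCAR):

```shell
# Hypothetical sketch: pick a binary from two facts about the calculation.
# $1 = k-point mesh, e.g. "1 1 1"; $2 = "ncl" for a non-collinear run, else empty.
pick_vasp_binary() {
    if [ "$2" = "ncl" ]; then
        echo vasp_ncl               # non-collinear spin requires vasp_ncl
    elif [ "$1" = "1 1 1" ]; then
        echo vasp_gam               # gamma-point-only mesh: use the faster gamma build
    else
        echo vasp_std               # general k-point sampling
    fi
}
```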
### Sample Job Scripts
To run batch jobs, prepare a job script (see samples below) and submit it to the batch system with the `sbatch` command, e.g., for a job script named `run.slurm`:

```
nersc$ sbatch run.slurm
```
Please check the Queue Policy page for the available QOS settings and their resource limits.
Sample job script for running VASP 6 on Perlmutter GPU nodes
```bash
#!/bin/bash
#SBATCH -J myjob
#SBATCH -A <your account name>   # e.g., m1111
#SBATCH -q regular
#SBATCH -t 6:00:00
#SBATCH -N 2
#SBATCH -C gpu
#SBATCH -G 8
#SBATCH --exclusive
#SBATCH -o %x-%j.out
#SBATCH -e %x-%j.err

module load vasp/6.2.1-gpu

export OMP_NUM_THREADS=1
export OMP_PLACES=threads
export OMP_PROC_BIND=spread

srun -n 8 -c 32 --cpu-bind=cores --gpu-bind=none -G 8 vasp_std
```
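The srun geometry in this script follows from the node layout. Assuming 4 GPUs and 128 hardware threads (64 physical cores with 2 hyperthreads each) per Perlmutter GPU node, and one MPI rank per GPU:

```shell
# Derive the srun flags used above from the node geometry (assumed values).
nodes=2
gpus_per_node=4                      # A100s per Perlmutter GPU node (assumption)
hwthreads_per_node=128               # 64 physical cores x 2 hyperthreads (assumption)

ntasks=$((nodes * gpus_per_node))                      # one MPI rank per GPU -> -n 8
cpus_per_task=$((hwthreads_per_node / gpus_per_node))  # threads per rank -> -c 32
echo "srun -n $ntasks -c $cpus_per_task"
```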
Sample job script for running VASP 5 on Perlmutter CPU nodes
```bash
#!/bin/bash
#SBATCH -N 1
#SBATCH -C cpu
#SBATCH -q regular
#SBATCH -t 01:00:00
#SBATCH -J vasp_job
#SBATCH -o %x-%j.out
#SBATCH -e %x-%j.err

module load vasp/5.4.4-cpu

srun -n 4 -c 64 --cpu-bind=cores vasp_gam
```
To run VASP interactively, request a batch session using the interactive QOS.
- The interactive QOS allocates the requested nodes immediately or cancels your job in about 5 minutes (when no nodes are available). See the Queue Policy page for more info.
- Test your job using the interactive QOS before submitting a long running job.
### Long running VASP jobs
For long VASP jobs (e.g., > 48 hours), you can use the variable-time job script, which allows you to run jobs of any length; see a sample job script at Running Jobs. Variable-time jobs split a long-running job into multiple chunks, so the application must be able to restart from where it left off. Note that not all VASP computations are restartable (e.g., RPA calculations are not), but long-running atomic relaxations and MD simulations are good use cases for the variable-time job script.
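For restartable ionic relaxations and MD runs, the per-chunk restart step is typically just promoting the last geometry to the next input. This is a minimal sketch (the `vasp_restart_step` name is ours; real variable-time scripts may do more bookkeeping):

```shell
# Between chunks: carry the last ionic positions (CONTCAR) forward as the
# new starting positions (POSCAR), if a non-empty CONTCAR exists.
vasp_restart_step() {
    if [ -s CONTCAR ]; then
        cp CONTCAR POSCAR
    fi
}
```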
### Running multiple VASP jobs simultaneously
For running many similar VASP jobs, it may be beneficial to bundle them inside a single job script, as described in Running Jobs. However, the number of jobs you bundle in a job script should be limited, ideally not exceeding ten. This is because the batch system, Slurm (as currently implemented), is serving tens of thousands of other jobs on the system at the same time as yours, and compounding `srun` commands can occupy a great deal of Slurm's resources.
If you want to run many more similar VASP jobs simultaneously, we recommend using the MPI wrapper for VASP that NERSC provides, which enables you to run as many VASP jobs as you wish under a single `srun` invocation. The MPI wrapper for VASP is available via the `mvasp` modules.
For example, consider running 2 VASP jobs simultaneously, each on a single Perlmutter CPU node. Suppose you have prepared 2 sets of input files, each residing in its own directory under a common parent directory. From the parent directory, create a job script like the one below:
run_mvasp.slurm: run 2 VASP jobs simultaneously on Perlmutter CPU nodes
```bash
#!/bin/bash
#SBATCH -C cpu
#SBATCH --qos=debug
#SBATCH --time=0:30:00
#SBATCH --nodes=2
#SBATCH --error=mvasp-%j.err
#SBATCH --output=mvasp-%j.out

module load mvasp/5.4.4-cpu
sbcast --compress=lz4 `which mvasp_std` mvasp_std
srun -n128 -c4 --cpu-bind=cores mvasp_std
```
Then generate a file named `joblist.in`, which contains the number of jobs to run and the VASP run directories (one directory per line). You can use the script `gen_joblist.sh`, available via the `mvasp` modules, to create the `joblist.in` file:

```
module load mvasp
gen_joblist.sh
```

Once the `joblist.in` file is available, submit the job via `sbatch run_mvasp.slurm`.
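The `joblist.in` format described above (a count, then one run directory per line) is simple enough to write by hand. This sketch assumes every immediate subdirectory of the parent directory is a run directory; the `write_joblist` helper is ours, not part of the `mvasp` modules:

```shell
# Write joblist.in: first line is the job count, then one run directory
# per line (the immediate subdirectories of the current directory).
write_joblist() {
    dirs=$(find . -mindepth 1 -maxdepth 1 -type d | sort)
    count=$(printf '%s\n' "$dirs" | grep -c .)
    { echo "$count"; printf '%s\n' "$dirs"; } > joblist.in
}
```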
- Be aware that running too many VASP jobs at once may overwhelm the file system where your job is running. Please do not run jobs in your global home directory.
- In the sample job script above, to reduce the job startup time for large jobs, the executable was copied to the /tmp file system (in memory) of the compute nodes using the `sbcast` command prior to execution.
Users are encouraged to refer to the paper for guidance on running VASP efficiently on Perlmutter.
## Building VASP from Source
Some users may be interested in building VASP themselves. As an example, we outline the process for building the VASP 6.3.2 binaries. First, download `vasp6.3.2.tgz` from the VASP Portal to your cluster (e.g., to your home directory), then unpack the archive and enter the VASP main directory:

```
tar -zxvf vasp6.3.2.tgz
cd vasp.6.3.2
```
You need a `makefile.include` file in order to build the code; samples are available in the `arch` directory of the unpacked archive and are also provided in the installation directories of the NERSC modules. For example, the `makefile.include` file that we used to build the `vasp/6.3.2-gpu` module is located in that module's installation directory; use `module show <a vasp module>` to find the installation directory of a NERSC module. Copy the sample `makefile.include` file from the NERSC installation directory to the root directory of your local VASP 6.3.2 build.
You may be interested in augmenting the functionality of VASP by activating certain plugins. See the developer's list of plugin options for instructions on how to modify `makefile.include` for common supported features. Note that some features can make use of modules installed at NERSC, including `nccl`, so load these modules before compiling, as needed.
For VASP 6 GPU-enabled builds on Perlmutter, the required compiler and GPU modules should be loaded prior to compiling. Once the module environment is ready, run `make std` to build the standard k-point binary, or `make all` to build all binaries (`vasp_std`, `vasp_gam`, and `vasp_ncl`).
For `mvasp` builds, execute the following commands immediately after unpacking the archive to apply the latest patch:

```
cd vasp.5.4.4.pl2
git clone https://github.com/zhengjizhao/mpi_wrapper.git
patch -p0 < mpi_wrapper/patch_vasp.5.4.4.pl2_mpi_wrapper.diff
```

Then proceed with the build instructions described in the previous section.
## User Contributed Information
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.