VASP¶
VASP is a package for performing ab initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane-wave basis set. The approach implemented in VASP is based on a finite-temperature local-density approximation (with the free energy as the variational quantity) and an exact evaluation of the instantaneous electronic ground state at each MD step, using efficient matrix-diagonalization schemes and efficient Pulay mixing.
Availability and Supported Architectures at NERSC¶
VASP is available at NERSC as a provided, support-level package for users who have an active VASP license. To gain access to the VASP binaries at NERSC through an existing VASP license, please fill out the VASP License Confirmation Request form. You can also access this form at the NERSC Help Desk (Open Request -> VASP License Confirmation Request).
Note
If your VASP license was purchased from the VASP Software GmbH, the license owner (usually your PI) should have registered you under his/her license at the VASP Portal before you fill out the form.
It may take several business days from when the form is submitted to when access to NERSC-provided VASP binaries is granted.
When your VASP license is confirmed, NERSC will add you to a unix file group: vasp5 for VASP 5, and vasp6 for VASP 6. You can check whether you have VASP access at NERSC with the groups command. If you are in the vasp5 file group, you can access the VASP 5 binaries provided at NERSC; if you are in the vasp6 file group, you can access both VASP 5 and VASP 6.
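For example, you can check your group membership from any login node (the output below is only illustrative; your actual group list will differ):

nersc$ groups
<your other file groups> vasp6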
VASP 6 supports GPU execution.
Versions Supported¶
| Cori Haswell | Cori KNL | Perlmutter GPU | Perlmutter CPU (Anticipated) |
| --- | --- | --- | --- |
| 5.X, 6.X | 5.X, 6.X | 6.X | (6.X) |
Use the module avail vasp command to see a full list of available sub-versions.
Application Information, Documentation, and Support¶
See the developers page for information about VASP, including links to documentation, workshops, tutorials, and other information. Instructions for building the code and preparing input files can be found in the VASP Online Manual. For troubleshooting, see the VASP users forum for technical help and support-related questions; see also the list of known issues. For help with issues specific to the NERSC module, please file a support ticket.
Using VASP at NERSC¶
We provide multiple VASP builds for users. Use the module avail vasp command to see which versions are available, and module load vasp/<version> to load the corresponding environment. For example, these are the available modules (as of 4/22/2022):
cori$ module avail vasp
----------------------- /global/common/software/nersc/cle7up03/extra_modulefiles -----------------------
vasp/5.4.1-hsw vasp/6.3.0-hsw vasp-tpc/5.4.1-hsw
vasp/5.4.1-knl vasp/6.3.0-knl vasp-tpc/5.4.1-knl
vasp/5.4.4-hsw(default) vasp/20170323_NMAX_DEG=128-hsw vasp-tpc/5.4.4-hsw(default)
vasp/5.4.4-knl vasp/20170323_NMAX_DEG=128-knl vasp-tpc/5.4.4-knl
vasp/6.1.0-hsw vasp/20170629-hsw vasp-tpc/6.2.1-hsw
vasp/6.1.0-knl vasp/20170629-knl vasp-tpc/6.2.1-knl
vasp/6.1.2-hsw vasp/20171017-hsw vasp-tpc/20170629-hsw
vasp/6.1.2-knl vasp/20171017-knl vasp-tpc/20170629-knl
vasp/6.2.1-hsw vasp/20181030-hsw
vasp/6.2.1-knl vasp/20181030-knl
perlmutter$ module avail vasp
--------------------- /global/common/software/nersc/pm-2022.03.1/extra_modulefiles ---------------------
vasp-tpc/6.2.1-gpu vasp/6.2.1-gpu
where the modules with "5.4.4" or "5.4.1" in their version strings are pure MPI builds, and the modules with "2018" or "2017" in their version strings are hybrid MPI+OpenMP builds made available to NERSC VASP 5 users through the VASP beta testing program. The modules with "6" in their version strings (e.g., 6.1.0) are official releases of the hybrid MPI+OpenMP VASP 6 code, available to users who hold a VASP 6 license. The vasp-tpc modules (tpc stands for third-party codes) are custom builds that incorporate commonly used third-party contributed codes, e.g., VTST from the University of Texas at Austin, Wannier90, BEEF, VASPsol, etc. The "knl" and "hsw" suffixes in the version strings indicate builds optimized for Cori KNL and Haswell, respectively. The current default on Cori is vasp/5.4.4-hsw (VASP 5.4.4 with the latest patches), which you can access with
cori$ module load vasp
To use a non-default module, provide the full module name,
cori$ module load vasp/20181030-knl
The module show command displays the effect a VASP module has on your environment, e.g.,
cori$ module show vasp/20181030-knl
-------------------------------------------------------------------
/usr/common/software/modulefiles/vasp/20181030-knl:
module load craype-hugepages2M
module-whatis VASP: Vienna Ab-initio Simulation Package
This is the vasp-knl development version (last commit 10/30/2018). Wannier90 v1.2 was enabled in the build.
setenv PSEUDOPOTENTIAL_DIR /usr/common/software/vasp/pseudopotentials/5.3.5
setenv VDW_KERNAL_DIR /usr/common/software/vasp/vdw_kernal
setenv NO_STOP_MESSAGE 1
setenv MPICH_NO_BUFFER_ALIAS_CHECK 1
setenv MKL_FAST_MEMORY_LIMIT 0
setenv OMP_STACKSIZE 256m
setenv OMP_PROC_BIND spread
setenv OMP_PLACES threads
prepend-path PATH /usr/common/software/vasp/vtstscripts/3.1
prepend-path PATH /global/common/cori/software/vasp/20181030/knl/intel/bin
-------------------------------------------------------------------
This vasp module adds the path to the VASP binaries to your search path and sets a few environment variables, where PSEUDOPOTENTIAL_DIR and VDW_KERNAL_DIR are defined as the locations of the pseudopotential files and the vdw_kernel.bindat file used in dispersion calculations. The OpenMP and MKL environment variables are set for optimal performance.
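For example, these variables can be used to stage input files into a run directory. This is only a sketch; it assumes the vdw_kernel.bindat file sits directly under VDW_KERNAL_DIR, so check the actual directory contents first:

nersc$ cp $VDW_KERNAL_DIR/vdw_kernel.bindat .   # vdW kernel needed for dispersion calculations
nersc$ ls $PSEUDOPOTENTIAL_DIR                  # browse the available pseudopotential sets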
VASP binaries¶
Each VASP module provides three different binaries:

- vasp_gam - gamma-point-only build
- vasp_ncl - non-collinear spin build
- vasp_std - the standard k-point binary

Choose the binary appropriate for your calculation.
Sample Job Scripts¶
To run batch jobs, prepare a job script (see samples below) and submit it to the batch system with the sbatch command, e.g., for a job script named run.slurm:
nersc$ sbatch run.slurm
Please check the Queue Policy page for the available QOS's and their resource limits.
Cori Haswell¶
Sample job script for running the Pure MPI VASP build
#!/bin/bash
#SBATCH -N 1
#SBATCH -C haswell
#SBATCH -q regular
#SBATCH -t 6:00:00
module load vasp
srun -n32 -c2 --cpu_bind=cores vasp_std
Sample job script for running the hybrid MPI+OpenMP VASP build
#!/bin/bash
#SBATCH -N 2
#SBATCH -C haswell
#SBATCH -q regular
#SBATCH -t 6:00:00
module load vasp/20181030-hsw
export OMP_NUM_THREADS=4
# launching 1 task every 4 cores (8 CPUs)
srun -n16 -c8 --cpu_bind=cores vasp_std
Cori KNL¶
Sample job script for running the Pure MPI VASP build
#!/bin/bash
#SBATCH -N 2
#SBATCH -C knl
#SBATCH -q regular
#SBATCH -t 6:00:00
module load vasp/5.4.4-knl
srun -n128 -c4 --cpu_bind=cores vasp_std
Sample job script for running the hybrid MPI+OpenMP VASP build
#!/bin/bash
#SBATCH -N 2
#SBATCH -C knl
#SBATCH -q regular
#SBATCH -t 6:00:00
module load vasp/20181030-knl
export OMP_NUM_THREADS=4
# launching 1 task every 4 cores (16 CPUs)
srun -n32 -c16 --cpu_bind=cores vasp_std
Perlmutter GPUs¶
Sample job script for running VASP 6 on Perlmutter GPU nodes
#!/bin/bash
#SBATCH -J myjob
#SBATCH -A <your project GPU allocation account name> # e.g., m1111_g
#SBATCH -q regular
#SBATCH -t 6:00:00
#SBATCH -N 2
#SBATCH -C gpu
#SBATCH -G 8
#SBATCH --exclusive
#SBATCH -o %x-%j.out
#SBATCH -e %x-%j.err
module load vasp/6.2.1-gpu
srun -n8 -c32 --cpu-bind=cores --gpu-bind=single:1 -G 8 vasp_std
Tips
- For better throughput on Cori, run jobs on KNL nodes.
- Hybrid MPI+OpenMP builds are recommended on Cori KNL for optimal performance.
- More performance tips can be found in a Cray User Group 2017 proceeding.
- Refer to the presentation slides for the VASP user training (6/18/2019).
Running interactively¶
To run VASP interactively, request a batch session using salloc.
Interactive VASP on Cori Haswell
The following command requests one Cori Haswell node for one hour:
cori$ salloc -N 1 -q interactive -C haswell -t 1:00:00
When the batch session returns with a shell prompt, execute the following commands:
cori$ module load vasp
cori$ srun -n32 -c2 --cpu-bind=cores vasp_std
Interactive VASP on Cori KNL
For example, to run on two Cori KNL nodes for four hours, do
cori$ salloc -N 2 -q interactive -C knl -t 4:00:00
When the batch session returns with a shell prompt, execute the following commands:
cori$ module load vasp/20181030-knl
cori$ export OMP_NUM_THREADS=4
cori$ srun -n32 -c16 --cpu-bind=cores vasp_std
Tips
- The interactive QOS either allocates the requested nodes immediately or cancels your job within about 5 minutes if no nodes are available. See the Queue Policy page for more info.
- Test your job using the interactive QOS before submitting a long running job.
Long running VASP jobs¶
For long VASP jobs (e.g., > 48 hours), you can use the variable-time job script, which allows you to run jobs of any length. See a sample job script at Running Jobs. Variable-time jobs split a long-running job into multiple chunks, so the application must be able to restart from where it left off. Note that not all VASP computations are restartable (e.g., RPA); long-running atomic relaxations and MD simulations are good use cases for the variable-time job script.
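For a relaxation or MD run, the usual restart step between chunks is to carry the last ionic configuration forward before the next chunk starts. The variable-time machinery described at Running Jobs automates the resubmission itself; a minimal manual sketch (run.slurm is the job script name used above) looks like this:

# restart from the last completed ionic step of the previous chunk
cp CONTCAR POSCAR
sbatch run.slurm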
Running multiple VASP jobs simultaneously¶
For running many similar VASP jobs, it may be beneficial to bundle them inside a single job script, as described in Running Jobs.
However, the number of jobs you should bundle in a single job script is limited, ideally not exceeding ten. This is because the batch system, Slurm (as currently implemented), serves tens of thousands of other jobs on the system at the same time as yours, and compounding srun commands can occupy a great deal of Slurm's resources.
If you want to run many more similar VASP jobs simultaneously, we recommend using the MPI wrapper for VASP provided by NERSC, which lets you run as many VASP jobs as you wish under a single srun invocation. The MPI wrapper for VASP is available via the mvasp module on Cori.
For example, consider running 512 VASP jobs simultaneously, each on a single KNL node. Assume the 512 input sets have been prepared, each residing in its own directory under a common parent directory. From the parent directory, create a job script like the one below:
run_mvasp.slurm: run 512 VASP jobs simultaneously on Cori KNL
#!/bin/bash
#SBATCH -J test_mvasp
#SBATCH -N 512
#SBATCH -C knl
#SBATCH -q debug
#SBATCH -o %x-%j.out
#SBATCH -t 30:00
module load mvasp/5.4.4-knl
#run 512 VASP jobs simultaneously each running vasp_std with 1 KNL node (64 processes)
sbcast --compress=lz4 `which mvasp_std` /tmp/mvasp_std
srun -n 32768 -c4 --cpu-bind=cores /tmp/mvasp_std
then generate a file named joblist.in, which contains the number of jobs to run followed by the VASP run directories (one directory per line). You can use the gen_joblist.sh script, available via the mvasp modules, to create the joblist.in file:
module load mvasp
gen_joblist.sh
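For illustration, a joblist.in for three jobs might look like the following (the directory names are placeholders):

3
job_001
job_002
job_003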
A sample joblist.in file is available. Then submit the job via sbatch:
sbatch run_mvasp.slurm
Note
- Be aware that running too many VASP jobs at once may overwhelm the file system where your job is running. Please do not run jobs in your global home directory.
- In the sample job script above, to reduce the job startup time for large jobs, the executable was copied to the /tmp file system (memory) of the compute nodes using the sbcast command prior to execution.
Similarly, one can run multiple VASP jobs on Haswell nodes. Here is a sample job script:
run_mvasp.slurm: run 512 VASP jobs simultaneously with 64 Haswell nodes
#!/bin/bash
#SBATCH -J test_mvasp
#SBATCH -N 64
#SBATCH -C haswell
#SBATCH -q debug
#SBATCH -o %x-%j.out
#SBATCH -t 30:00
module load mvasp/5.4.4-hsw
#run 512 VASP jobs simultaneously each running vasp_std with 4 processors (each node runs 8 jobs)
srun -n 2048 -c2 --cpu-bind=cores ./mvasp_std
Building VASP from Source¶
Some users may be interested in building VASP themselves. As an example, we outline the process for building the VASP 5.4.4 binaries. First, download vasp5.4.4.pl2.tgz from the VASP Portal to your cluster and untar it, e.g., in your home directory. Then run the following commands:
cd vasp.5.4.4.pl2
git clone https://github.com/zhengjizhao/mpi_wrapper.git
patch -p0 < mpi_wrapper/patch_vasp.5.4.4.pl2_mpi_wrapper.diff
One needs a makefile.include file in order to build the code; samples are available in the installation directories of the NERSC modules. For example, the makefile.include file that we used to build the vasp/5.4.4-hsw module is located at
/global/common/sw/cray/cnl7/haswell/vasp/5.4.4/intel/18.0.1.163/w5vq7o2/
Type module show <a vasp module> to find the installation directory of a NERSC module. Copy the sample makefile.include file from the NERSC installation directory to the root directory of your local VASP 5.4.4 build, then run
make std
or
make all
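Putting the steps together, a build on Cori Haswell might look like the following. This is only a sketch: the makefile.include path is the module installation directory quoted above, and the compiler/library environment you need to load beforehand may differ on your system.

cd vasp.5.4.4.pl2
cp /global/common/sw/cray/cnl7/haswell/vasp/5.4.4/intel/18.0.1.163/w5vq7o2/makefile.include .
make std    # or 'make all' to build all three binaries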
The resulting VASP binaries are mvasp_std, mvasp_gam, and mvasp_ncl (where the m stands for multiple).
Related Applications¶
User Contributed Information¶
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.