GROMACS
GROMACS is a versatile package for performing molecular dynamics simulations, scaling from a few hundred to millions of particles. It was primarily designed to simulate biochemical molecules such as proteins, lipids, and nucleic acids, which feature many complicated bonded interactions that GROMACS is specifically designed to handle. Because GROMACS also computes non-bonded interactions efficiently, it has been used by research groups to study non-biological systems as well (e.g. polymers).
Availability and Supported Architectures
GROMACS runs at NERSC are currently supported on GPU nodes, but the application can also run on CPU nodes.
Application Information, Documentation and Support
The official GROMACS user guide is available at GROMACS User Guide. GROMACS has a large user base and good user support. Questions related to using GROMACS can be posted to the GROMACS Mailing List. The associated forum also contains an archive of all past mailing-list messages, which can be useful in resolving common user issues.
Tip
If, after checking the above forum, you believe that there is an issue with the NERSC container, please file a ticket with our help desk.
Using GROMACS at NERSC
GROMACS is now supported on Perlmutter using containers. Please note that the GROMACS modules were deprecated on 20 June 2024 and will not be updated.
There are three different GROMACS containers available on Perlmutter:
perlmutter$ shifterimg images | grep 'gromacs'
perlmutter docker READY f480d57ed2 2024-06-11T13:25:32 nersc/gromacs:23.2
perlmutter docker READY 43e1a3e114 2024-06-11T21:01:01 nersc/gromacs_colvars:23.2
perlmutter docker READY 7f966b3079 2024-08-20T18:51:42 nersc/gromacs_plumed:23.2
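If a newer tag is published or an image needs to be refreshed, it can be pulled with shifterimg. The following is a minimal sketch; the tag shown is simply the current one from the listing above.
# Pull (or refresh) the GROMACS image through the Shifter image gateway.
shifterimg pull docker:nersc/gromacs:23.2
# Confirm that the image shows up as READY.
shifterimg images | grep 'gromacs'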
All nersc/gromacs containers are built to run GROMACS simulations on both CPU and GPU nodes. The nersc/gromacs container is built from the GROMACS source without any modifications. The nersc/gromacs_colvars container is built using the source files provided by the COLVARS repository. Similarly, the nersc/gromacs_plumed container is built from the GROMACS source patched with PLUMED. Please note that while all three containers support bonded and non-bonded calculations on the GPU, the current containers do not include GPU acceleration for PME. This is a work in progress and will be addressed in future container builds. Users are encouraged to test the container images for scaling performance before submitting production runs.
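Because PME offload is not available in the current containers, it can help to be explicit about where each interaction class runs while testing. The following is a rough sketch of a short interactive benchmark on a single GPU node; the project name, time limit, and input file (topol.tpr) are placeholders, while -nb, -bonded, -pme, and -resethway are standard gmx mdrun options.
# Request a short interactive allocation on one GPU node (project name is a placeholder).
salloc -C gpu -q interactive -t 00:30:00 -N 1 --ntasks-per-node=4 -c 32 --gpus-per-node=4 \
       -A <your_nersc_project> --image=docker:nersc/gromacs:23.2
# Inside the allocation: run a short benchmark, keeping PME on the CPU explicitly,
# since GPU PME is not built into the current containers. topol.tpr is a placeholder input.
export OMP_NUM_THREADS=32
srun -n 4 --cpu-bind=cores --gpu-bind=none --module cuda-mpich shifter \
     gmx_mpi mdrun -s topol.tpr -nb gpu -bonded gpu -pme cpu -ntomp 32 -nsteps 1000 -resethway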
Using GROMACS on Perlmutter
As stated, GROMACS can be run on both the CPU and GPU nodes of Perlmutter. The following example scripts can be used to submit batch jobs to the GPU nodes; a sketch for CPU nodes is included after the GPU examples.
Sample Batch Script to Run GROMACS on Perlmutter GPU nodes
#!/bin/bash
#SBATCH --image docker:nersc/gromacs:23.2
#SBATCH -C gpu
#SBATCH -t 00:20:00
#SBATCH -J Gromacs_GPU
#SBATCH -o Gromacs_GPU.o%j
#SBATCH -A <your_nersc_project>
#SBATCH -N 4
#SBATCH -c 32
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH -q regular
exe="gmx_mpi mdrun -bonded gpu -nb gpu"
input="-s lignocellulose.tpr -cpt 1000 -maxh 1.0 -nsteps 1000 -ntomp 64"
export GMX_ENABLE_DIRECT_GPU_COMM=true
export OMP_NUM_THREADS=32
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
command="srun --cpu-bind=cores --gpu-bind=none --module cuda-mpich shifter $exe $input"
echo $command
$command
The above script launches a 4-node GPU job with 16 tasks, one per GPU (each Perlmutter GPU node has 4 GPUs).
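To run this example, save the script to a file and submit it with sbatch; the file name below is only an example.
# Submit the batch script and note the job ID that sbatch prints.
sbatch gromacs_gpu.sh
# Check the job's state in the queue.
squeue --me
# After completion, the GROMACS md.log and the Gromacs_GPU.o<jobid> file contain the timing summary.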
Sample Batch Script to Run GROMACS-PLUMED on Perlmutter GPU nodes
#!/bin/bash
#SBATCH --image docker:nersc/gromacs_plumed:23.2
#SBATCH -C gpu
#SBATCH -t 00:20:00
#SBATCH -J Gromacs_GPU
#SBATCH -o Gromacs_GPU.o%j
#SBATCH -A <your_nersc_project>
#SBATCH -N 1
#SBATCH -c 32
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH -q regular
exe="gmx_mpi mdrun -bonded gpu -nb gpu"
input="-deffnm md_carb -plumed metad.dat -nsteps 1000 -ntomp 32"
export GMX_ENABLE_DIRECT_GPU_COMM=true
export OMP_NUM_THREADS=32
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
command="srun --cpu-bind=cores --gpu-bind=none --module cuda-mpich shifter $exe $input"
echo $command
$command
The example above uses 1 GPU node on Perlmutter; each GPU node has 4 GPUs. When changing the number of nodes, modify the line #SBATCH -N 1 to the number of nodes you want to run your simulation on. Additionally, change the 'input=' and 'exe=' lines in accordance with the GROMACS user guide, as appropriate for your simulation.
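For CPU-only runs, a batch script along the following lines can serve as a starting point. This is a sketch rather than a tuned configuration: the task/thread split (16 tasks with 16 OpenMP threads each on a 128-core Perlmutter CPU node) and the input file name topol.tpr are assumptions that should be adjusted for your simulation.
#!/bin/bash
#SBATCH --image docker:nersc/gromacs:23.2
#SBATCH -C cpu
#SBATCH -t 00:20:00
#SBATCH -J Gromacs_CPU
#SBATCH -o Gromacs_CPU.o%j
#SBATCH -A <your_nersc_project>
#SBATCH -N 1
#SBATCH -c 16
#SBATCH --ntasks-per-node=16
#SBATCH -q regular
# Keep all interaction types on the CPU; topol.tpr is a placeholder input file.
exe="gmx_mpi mdrun -nb cpu -bonded cpu -pme cpu"
input="-s topol.tpr -maxh 0.3 -nsteps 1000 -ntomp 16"
export OMP_NUM_THREADS=16
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
# If MPI fails to initialize, requesting shifter's mpich module explicitly
# (srun --module mpich ...) may be needed.
command="srun --cpu-bind=cores shifter $exe $input"
echo $command
$command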
Further details on using docker containers at NERSC with shifter can be found on our shifter page.
Related Applications
User Contributed Information
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.