NAMD¶
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD's interface provides access to hybrid QM/MM simulations in an integrated, comprehensive, customizable, and easy-to-use suite. It is developed through a collaboration between the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois Urbana–Champaign.
Availability and Supported Architectures¶
At NERSC, NAMD runs are currently supported on both the CPU and GPU nodes of Perlmutter.
Application Information, Documentation and Support¶
The official NAMD user guide is available at NAMD User Guide. NAMD has a large user base and good user support. Questions related to using NAMD can be posted to the NAMD Mailing List. The forum also contains an archive of all past mailing list messages, which can be useful in resolving common user issues.
Tip
If, after checking the above forum, you believe that there is an issue with the NERSC container, please file a ticket with our help desk.
Using NAMD at NERSC¶
NAMD is supported on Perlmutter using containers. Please note that the NAMD modules are deprecated as of 20th April 2024 and will not be updated.
There are two different containers of NAMD available on Perlmutter:
perlmutter$ shifterimg images | grep 'namd'
perlmutter docker READY 91c54bf1e5 2024-03-21T13:36:27 nersc/namd:3.0.b5
perlmutter docker READY cd62d5f2a4 2024-03-21T13:09:34 nvcr.io/hpc/namd:3.0-beta5
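If you need an image or tag that is not in the list above, Shifter images can also be pulled directly from a registry with shifterimg pull. The tag shown below is only an example (it is one of the images already listed); substitute the tag you actually need:
perlmutter$ shifterimg pull docker:nersc/namd:3.0.b5
perlmutter$ shifterimg images | grep 'namd'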
The nersc/namd container is built to run NAMD simulations on the CPU nodes, while the nvcr.io/hpc/namd container comes from the NVIDIA container repository. Both containers provide the NAMD 3.0 beta 5 release. The nersc/namd image is built with MPICH support for Charm++ version 7.0.0 and supports multi-node and multi-threaded runs. The nvcr.io/hpc/namd container, designed to run on GPUs, DOES NOT have multi-node support.
Users are encouraged to test the container images for scaling performance before submitting production runs.
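One way to do a quick functional and performance check is a short interactive allocation before queuing a full batch job. The following is a minimal sketch using the CPU container; the project name, input file name, NAMD options, and time limit are placeholders or assumptions that you should adapt to your own setup:
# Request a short interactive CPU allocation with the NAMD image loaded
perlmutter$ salloc -N 1 -C cpu -q interactive -t 00:30:00 -A <your_nersc_project> --image=docker:nersc/namd:3.0.b5
# Inside the allocation, run a short segment of your system to verify the setup
perlmutter$ srun -n 128 --cpu-bind=cores --module mpich shifter namd3 +setcpuaffinity <name of the input file>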
Using NAMD on Perlmutter¶
As stated above, NAMD can be run on both the CPU and GPU nodes of Perlmutter. The following two example scripts can be used to submit a batch job to either type of node.
Sample Batch Script to Run NAMD on Perlmutter CPU nodes
#!/bin/bash
#SBATCH --image docker:nersc/namd:3.0.b5
#SBATCH -C cpu
#SBATCH -t 00:20:00
#SBATCH -J NAMD_CPU
#SBATCH -o NAMD_CPU.o%j
#SBATCH -A <your_nersc_project>
#SBATCH -N 16
#SBATCH --ntasks-per-node=128
#SBATCH -q regular
# NAMD 3 executable and its command-line options; adjust the options to suit your run
exe=namd3
input="+setcpuaffinity ++ppn 4 <please change this as per your requirements>"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
command="srun --cpu-bind=cores --module mpich shifter $exe $input ./<name of the input file>"
echo $command
$command
The above script launches a 16-node CPU job with 128 tasks per node (equal to the number of physical cores available on one CPU node).
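Because options passed on the sbatch command line override the corresponding #SBATCH directives, the same script can be reused for the scaling tests recommended above. A minimal sketch, assuming the CPU script is saved as namd_cpu.sl (a hypothetical file name):
#!/bin/bash
# Submit the CPU batch script at several node counts to compare performance;
# the -N and -J options here override the "#SBATCH -N 16" and "#SBATCH -J NAMD_CPU"
# directives inside namd_cpu.sl.
for nodes in 1 2 4 8 16; do
    sbatch -N "${nodes}" -J "NAMD_CPU_${nodes}N" namd_cpu.sl
done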
Sample Batch Script to Run NAMD on Perlmutter GPU nodes
#!/bin/bash
#SBATCH --image nvcr.io/hpc/namd:3.0-beta5
#SBATCH -C gpu
#SBATCH -t 00:20:00
#SBATCH -J NAMD_GPU
#SBATCH -o NAMD_GPU.o%j
#SBATCH -A <your_nersc_project>
#SBATCH -N 1
#SBATCH -c 16
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=none
#SBATCH -q regular
# NAMD 3 executable and its command-line options (one +devices entry per GPU on the node); adjust to suit your run
exe=namd3
input="+setcpuaffinity ++ppn 16 +devices 0,1,2,3 <please change this as per your requirements>"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export CUDA_VISIBLE_DEVICES=0,1,2,3
export CRAY_CUDA_MPS=1
command="srun --cpu-bind=cores --gpu-bind=none --module mpich,gpu shifter $exe $input ./<name of the input file>"
echo $command
$command
The example above uses 1 GPU node on Perlmutter; each GPU node has 4 GPUs. When changing the number of nodes, please modify the line #SBATCH -N 1 to the number of nodes you want to run your problem on. Additionally, please change the input= line in accordance with the NAMD user guide, as suitable for your problem.
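To compare the performance of different runs (for example, CPU versus GPU, or different node counts), one simple approach is to inspect the benchmark and timing lines that NAMD prints to its log, which in the scripts above is captured in the NAMD_CPU.o<jobid> and NAMD_GPU.o<jobid> output files:
# Print NAMD's own performance summary lines (s/step and days/ns) from the job logs
perlmutter$ grep -E 'Benchmark time|TIMING:' NAMD_CPU.o* NAMD_GPU.o*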
Further details on using Docker containers at NERSC with Shifter can be found on the Shifter page.
Related Applications¶
User Contributed Information¶
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.