LAMMPS
LAMMPS is a large-scale classical molecular dynamics code; the name stands for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
Availability and Supported Architectures
LAMMPS is available at NERSC as a provided support level package. LAMMPS runs are currently supported on both the CPU and GPU nodes of Perlmutter.
Application Information, Documentation and Support
The official LAMMPS documentation is available in the LAMMPS Online Manual. LAMMPS has a large user base and good user support. Questions related to using LAMMPS can be posted to the LAMMPS user forum. The forum also contains an archive of all past mailing list messages, which can be useful for resolving common user issues.
Tip
If, after checking the forum above, you believe that there is an issue with the NERSC module, please file a ticket with our help desk.
Using LAMMPS at NERSC
LAMMPS is now supported on Perlmutter using containers. Please note that the LAMMPS modules are deprecated and will not be updated.
There are three different containers of LAMMPS available on Perlmutter:
perlmutter$ shifterimg images | grep 'nersc/lammps'
perlmutter docker READY 78c9bbb876 2023-10-11T15:35:48 nersc/lammps_all:23.08
perlmutter docker READY 1265e04cff 2023-12-05T12:20:10 nersc/lammps_allegro:23.08
perlmutter docker READY a546b186a4 2023-09-19T12:14:13 nersc/lammps_lite:23.08
The lammps_lite container should be sufficient for simulations using the most popular potentials. It is a lighter build and does not contain user packages except SNAP-ML and ReaxFF. For most other mainstream packages available in LAMMPS, users should instead use the lammps_all container in their batch submission scripts.
Users running the Allegro pair style (pair_allegro) should use the lammps_allegro image.
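If an image is not already present on the system, it can be pulled from the registry with shifterimg. A minimal sketch, assuming you want the lammps_all image with the 23.08 tag shown in the listing above:
shifterimg pull nersc/lammps_all:23.08
shifterimg images | grep 'nersc/lammps_all'
The second command simply verifies that the image is listed as READY before referencing it in a batch script.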
Using LAMMPS on Perlmutter
LAMMPS can be run on both the CPU and GPU nodes of Perlmutter. The following two example scripts can be used to submit a batch job to either type of node.
Sample Batch Script to Run LAMMPS on Perlmutter CPU nodes
#!/bin/bash
#SBATCH --image docker:nersc/lammps_lite:23.08
#SBATCH -C cpu
#SBATCH -t 00:20:00
#SBATCH -J LAMMPS_CPU
#SBATCH -o LAMMPS_CPU.o%j
#SBATCH -A mXXXX
#SBATCH -N 1
#SBATCH --ntasks-per-node=128
# 2 logical CPUs (hyperthreads) per task, so that SLURM_CPUS_PER_TASK is defined below
#SBATCH --cpus-per-task=2
#SBATCH -q regular
exe=lmp
input="<please change this section to what is needed to run your simulation>"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
command="srun --cpu-bind=cores --module mpich shifter lmp $input"
echo $command
$command
The above script launches a one-node CPU job with 128 tasks (equal to the number of physical cores available on one Perlmutter CPU node).
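Assuming the script above is saved as run_lammps_cpu.sh (a hypothetical file name) and the input= line has been filled in for your simulation, the job is submitted and monitored as usual:
sbatch run_lammps_cpu.sh
squeue --me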
Sample Batch Script to Run LAMMPS on Perlmutter GPU nodes
#!/bin/bash -l
#SBATCH --image docker:nersc/lammps_lite:23.08
#SBATCH -C gpu
#SBATCH -t 00:20:00
#SBATCH -J LAMMPS_GPU
#SBATCH -o LAMMPS_GPU.o%j
#SBATCH -A mXXXX
#SBATCH -N 1
#SBATCH -c 32
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=none
#SBATCH -q regular
exe=lmp
input="-k on g 4 -sf kk -pk kokkos newton on neigh half -in in.snap.test -var nsteps 20 -var nx 10 -var ny 10 -var nz 80 -var snapdir 2J14_InflatedFrom_2J10/"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
command="srun --cpu-bind=cores --gpu-bind=none --module mpich,gpu shifter lmp $input"
echo $command
$command
Please change the project number to the one assigned to your project where it says mXXXX. The example above uses 1 GPU node on Perlmutter; each GPU node has 4 GPUs. When changing the number of nodes, modify the line #SBATCH -N 1 to the number of nodes you want to run your problem with; because --ntasks-per-node=4 is set, the total number of MPI tasks (one per GPU) scales automatically with the node count. Please change the input= line in accordance with the LAMMPS GPU use guide, as suitable for your problem.
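For example, a four-node run of the same script (16 GPUs in total) would only require changing the node count; a sketch of the modified directive:
#SBATCH -N 4
# --ntasks-per-node=4 and --gpus-per-task=1 stay unchanged,
# so srun now launches 16 MPI tasks, one per GPU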
If running LAMMPS with the Kokkos package, please review the Kokkos package page to add the appropriate flags to the job submission line, as well as to modify the input script.
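For reference, the Kokkos-related flags already used in the input= line of the GPU example above break down as follows (in.snap.test is the example input file from that script; see the LAMMPS Kokkos documentation for the full set of options):
# -k on g 4  : enable Kokkos with 4 GPUs per node
# -sf kk     : apply the /kk accelerated suffix to all supported styles
# -pk kokkos newton on neigh half : set Kokkos package options
lmp -k on g 4 -sf kk -pk kokkos newton on neigh half -in in.snap.test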
Further details on using Docker containers at NERSC with Shifter can be found on the Shifter page.
Building LAMMPS from source
Some users may be interested in building LAMMPS from source to enable more specific LAMMPS packages. The source files for LAMMPS can be downloaded either as a tar archive or from the LAMMPS GitHub repository.
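For the tar route, a minimal sketch (the stable-release tarball URL below is the one advertised by lammps.org; adjust for a specific version if needed):
wget https://download.lammps.org/tars/lammps-stable.tar.gz
tar -xzf lammps-stable.tar.gz
cd lammps-*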
Building on Perlmutter
The following procedure was used to build LAMMPS on Perlmutter GPU nodes with Kokkos. In the terminal:
module load cudatoolkit
module load craype-accel-nvidia80
git clone https://github.com/lammps/lammps.git
cd lammps
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=$PWD/../install_pm -D CMAKE_BUILD_TYPE=Release \
-D CMAKE_Fortran_COMPILER=ftn -D CMAKE_C_COMPILER=cc -D CMAKE_CXX_COMPILER=CC \
-D MPI_C_COMPILER=cc -D MPI_CXX_COMPILER=CC -D LAMMPS_EXCEPTIONS=ON \
-D BUILD_SHARED_LIBS=ON -D PKG_KOKKOS=yes -D Kokkos_ARCH_AMPERE80=ON -D Kokkos_ENABLE_CUDA=yes \
-D PKG_MANYBODY=ON -D PKG_MOLECULE=ON -D PKG_KSPACE=ON -D PKG_REPLICA=ON -D PKG_ASPHERE=ON \
-D PKG_RIGID=ON -D PKG_MPIIO=ON \
-D CMAKE_POSITION_INDEPENDENT_CODE=ON -D CMAKE_EXE_LINKER_FLAGS="-dynamic" ../cmake
make -j16
make install
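Because the build above uses BUILD_SHARED_LIBS=ON, the runtime linker must be able to find the installed LAMMPS libraries. A minimal sketch of running the resulting binary inside a GPU job, where the install prefix matches the CMAKE_INSTALL_PREFIX used above and in.test is a placeholder input file:
module load cudatoolkit craype-accel-nvidia80
# hypothetical clone location; adjust to where you built LAMMPS
PREFIX=$HOME/lammps/install_pm
export LD_LIBRARY_PATH=$PREFIX/lib64:$LD_LIBRARY_PATH   # or $PREFIX/lib
srun -n 4 -c 32 --gpus-per-task=1 $PREFIX/bin/lmp -k on g 4 -sf kk -in in.test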
Related Applications
User Contributed Information
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.