QMCPACK

QMCPACK is an open-source, production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, 2D nanomaterials, and solids. Its solid-state capabilities cover both metallic and insulating systems. The code implements numerous QMC algorithms, supports highly accurate simulations of up to around 1000 atoms, and can handle complexities such as asymmetries and impurities. QMCPACK uses a hybrid (OpenMP, CUDA)/MPI approach for parallelization and offers standard file formats for data exchange.

Availability and Supported Architectures

QMCPACK is available at NERSC as a provided support level package through containers. Please note that while the package is available, questions requiring domain expertise should be directed to the development team. QMCPACK runs at NERSC are currently supported on both CPU and GPU nodes.

Application Information, Documentation and Support

The official QMCPACK documentation is available at the QMCPACK Online Manual. Questions related to electronic structure and QMC research can be posted to the QMCPACK Google group. The forum also contains an archive of previous user questions, which can help resolve many common issues.

Tip

If, after checking the above forum, you believe that there is an issue with the QMCPACK installation, please file a ticket with our help desk.

Using QMCPACK at NERSC

QMCPACK is supported on Perlmutter using containers. Please note that there are no QMCPACK modules available.

The QMCPACK container image available on Perlmutter:

perlmutter$ shifterimg images | grep 'nersc/qmcpack'
perlmutter docker     READY    05fa20401b   2025-04-30T16:24:50 nersc/qmcpack:4.1.0
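
If a newer tag of the image is published later, it can be pulled into Shifter with the same naming scheme; the command below simply re-pulls the tag listed above:

perlmutter$ shifterimg pull docker:nersc/qmcpack:4.1.0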

Please note that the container uses the NVIDIA-provided CUDA 11.8 base image. CUDA versions 11.3-12.2 have a bug that affects multideterminant calculations in QMCPACK; single-determinant calculations are unaffected.
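
The CUDA toolkit version bundled in the image can be checked from a login node; a minimal example using the image tag shown above:

perlmutter$ shifter --image=docker:nersc/qmcpack:4.1.0 nvcc --version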

Using QMCPACK on Perlmutter

QMCPACK can be run on both the CPU and GPU nodes of Perlmutter. The following two example scripts can be used to submit a batch job to either node type.

Sample Batch Script to Run QMCPACK on Perlmutter CPU nodes
#!/bin/bash
#SBATCH --image docker:nersc/qmcpack:4.1.0
#SBATCH --nodes 2
#SBATCH --ntasks-per-node 128
#SBATCH --constraint cpu
#SBATCH --qos regular
#SBATCH --time 00:30:00
#SBATCH -A mXXXX

input="he_simple.xml"

export MPICH_GPU_SUPPORT_ENABLED=1
export OMP_NUM_THREADS=2   # 2 hardware threads per MPI task (128 tasks per 128-core CPU node)
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

command="srun --cpu-bind=cores --gpu-bind=none --module cuda-mpich shifter qmcpack $input"
echo $command
$command

The above script launches a 2-node CPU job with 128 MPI tasks per node (equal to the number of physical cores on one CPU node).
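
Assuming the script above is saved as qmcpack_cpu.slurm (the file name is only illustrative), it can be submitted and monitored with:

perlmutter$ sbatch qmcpack_cpu.slurm
perlmutter$ squeue --me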

Sample Batch Script to Run QMCPACK on Perlmutter GPU nodes
#!/bin/bash
#SBATCH --image docker:nersc/qmcpack:4.1.0
#SBATCH --nodes 2
#SBATCH --gres=gpu:4
#SBATCH --ntasks-per-node 4
#SBATCH --cpus-per-task 32
#SBATCH --constraint gpu
#SBATCH --qos regular
#SBATCH --time 00:30:00
#SBATCH -A mXXXX

input="he_simple.xml"

export MPICH_GPU_SUPPORT_ENABLED=1
export OMP_NUM_THREADS=16   # 16 physical cores per MPI task (4 tasks per 64-core GPU node)
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

command="srun --cpu-bind=cores --gpu-bind=none --module cuda-mpich shifter qmcpack $input"
echo $command
$command

Please change mXXXX to the project number assigned to your project. The example above uses 2 GPU nodes on Perlmutter, each of which has 4 GPUs. When changing the number of nodes, please modify the #SBATCH --nodes 2 line to the number of nodes you want to run on. Additionally, please change the input= line to point to the input file for your problem.
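
For quick tests, the same image can also be used in an interactive allocation. A minimal sketch for a single GPU node, reusing the srun line from the batch script above (the account mXXXX and the input file are placeholders, and interactive QOS limits apply):

perlmutter$ salloc --nodes 1 --constraint gpu --gpus 4 --qos interactive --time 00:30:00 \
                   --account mXXXX --image docker:nersc/qmcpack:4.1.0
$ export MPICH_GPU_SUPPORT_ENABLED=1
$ srun --ntasks 4 --cpu-bind=cores --gpu-bind=none --module cuda-mpich shifter qmcpack he_simple.xml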

Further details on using Docker containers at NERSC with Shifter can be found in the Shifter documentation.

Building QMCPACK in a container

Some users may be interested in understanding details of the QMCPACK build within the container.

Building inside the container

The following procedure was used to build QMCPACK inside the container. The Containerfile includes:

# Base image: NVIDIA-provided CUDA 11.8 + cuDNN 8 development image on Ubuntu 22.04
FROM nvcr.io/nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04
WORKDIR /opt
ENV DEBIAN_FRONTEND=noninteractive

# Build tools and system libraries needed by QMCPACK and its dependencies
RUN \
    apt-get update        && \
    apt-get install --yes    \
        build-essential autoconf cmake flex bison zlib1g-dev \
        fftw-dev fftw3 apbs libicu-dev libbz2-dev libgmp-dev \
        bc libblas-dev liblapack-dev git libtool swig uuid-dev \
        libfftw3-dev automake lsb-core libxc-dev libgsl-dev \
        unzip libhdf5-serial-dev ffmpeg libcurl4-openssl-dev \
        libedit-dev libyaml-cpp-dev make libquadmath0 gfortran \
        python3-yaml automake pkg-config libc6-dev libzmq3-dev \
        libjansson-dev liblz4-dev libarchive-dev python3-pip \
        libsqlite3-dev lua5.1 liblua5.1-dev lua-posix jq \
        python3-dev python3-cffi python3-ply python3-sphinx \
        aspell aspell-en valgrind libyaml-cpp-dev wget vim \
        make libzmq3-dev python3-yaml time valgrind libeigen3-dev \
        mlocate python3-jsonschema python-is-python3 && \
    apt-get clean all

# Add the Intel oneAPI apt repository and install MKL
RUN apt-get update && apt-get install --yes gpg-agent wget
RUN wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null
RUN echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" | tee /etc/apt/sources.list.d/oneAPI.list
RUN apt-get update
RUN apt-get install --yes intel-oneapi-mkl intel-oneapi-mkl-devel && \
    apt-get clean all


# Build MPICH from source; its ABI compatibility with Cray MPICH lets Shifter
# swap in the host MPI library at run time
WORKDIR /opt
ARG mpich=4.2.2
ARG mpich_prefix=mpich-$mpich
RUN \
    wget https://www.mpich.org/static/downloads/$mpich/$mpich_prefix.tar.gz && \
    tar xvzf $mpich_prefix.tar.gz                                           && \
    cd $mpich_prefix                                                        && \
    ./configure FFLAGS=-fallow-argument-mismatch FCFLAGS=-fallow-argument-mismatch \
    --prefix=/opt/mpich/install                                             && \
    make -j 16                                                              && \
    make install                                                            && \
    make clean                                                              && \
    cd ..                                                                   && \
    rm -rf $mpich_prefix.tar.gz
ENV PATH=$PATH:/opt/mpich/install/bin
ENV PATH=$PATH:/opt/mpich/install/include
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/mpich/install/lib
RUN /sbin/ldconfig

# Expose the CUDA driver stub libraries so that linking succeeds in a build
# environment that has no GPU driver installed
ENV PATH=$PATH:/usr/local/cuda/lib64/stubs
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64/stubs
ENV PATH=$PATH:/usr/local/cuda-11.8/targets/x86_64-linux/lib/stubs
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.8/targets/x86_64-linux/lib/stubs

RUN ln -s /usr/local/cuda-11.8/targets/x86_64-linux/lib/stubs/libnvidia-ml.so /usr/local/cuda-11.8/targets/x86_64-linux/lib/stubs/libnvidia-ml.so.1

# Install Eigen and Blas
WORKDIR /opt
RUN git clone https://gitlab.com/libeigen/eigen.git
RUN cd /opt/eigen                                       && \
    mkdir build                                         && \
    cd /opt/eigen/build                                 && \
    cmake -DCMAKE_INSTALL_PREFIX=/opt/eigen/install ..   && \
    make blas                                           && \
    make lapack                                         && \
    make install    
ENV PATH=$PATH:/opt/eigen/install/bin
ENV PATH=$PATH:/opt/eigen/install/include
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/eigen/install/lib

#Install ScaLAPACK
WORKDIR /opt
RUN git clone https://github.com/scivision/scalapack.git                      && \
    cd /opt/scalapack                                                         && \
    mkdir build                                                               && \
    cd /opt/scalapack/build                                                   && \
    cmake -D CMAKE_INSTALL_PREFIX=/opt/scalapack/install ../                  && \
    make -j 4                                                                 && \
    make install
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/scalapack/install/lib


#Install FFTW
WORKDIR /opt
RUN wget https://fftw.org/pub/fftw/fftw-3.3.10.tar.gz                         && \
    tar -xvzf fftw-3.3.10.tar.gz                                              && \
    mv fftw-3.3.10 fftw                                                       && \
    cd /opt/fftw                                                              && \
    ./configure --prefix=/opt/fftw/install  --enable-shared --enable-static      \
                --enable-threads --enable-sse2 --enable-avx --enable-avx2     && \
    make -j 4                                                                 && \
    make install
ENV PATH=$PATH:/opt/fftw/install/bin
ENV PATH=$PATH:/opt/fftw/install/include
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/fftw/install/lib


#Install LibXC
WORKDIR /opt
RUN git clone -b 6.2.2 https://gitlab.com/libxc/libxc.git                     && \
    cd /opt/libxc                                                             && \
    autoreconf -i                                                             && \
    ./configure --prefix=/opt/libxc/install                                   && \
    make -j 4                                                                 && \
    make check                                                                && \
    make install
ENV PATH=$PATH:/opt/libxc/install/bin
ENV PATH=$PATH:/opt/libxc/install/include
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/libxc/install/lib


#Install HDF5
WORKDIR /opt
RUN git clone -b hdf5_1_14_3 https://github.com/HDFGroup/hdf5.git hdf5
RUN cd hdf5                                                                 && \
    mkdir build                                                             && \
    cd build                                                                && \
    cmake -G "Unix Makefiles" -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx \
        -DCMAKE_Fortran_COMPILER=mpif90 \
        -DCMAKE_BUILD_TYPE:STRING=Release -DBUILD_SHARED_LIBS:BOOL=ON \
        -DBUILD_TESTING:BOOL=ON -DHDF5_BUILD_TOOLS:BOOL=ON -DHDF5_BUILD_FORTRAN:BOOL=ON \
        -DHDF5_ENABLE_PARALLEL=ON -D CMAKE_INSTALL_PREFIX=/opt/hdf5/install ../ && \
    cmake --build . --config Release                                        && \
    cpack -C Release CPackConfig.cmake                                      && \
    make -j 4                                                               && \
    make install
ENV PATH=$PATH:/opt/hdf5/install/bin
ENV PATH=$PATH:/opt/hdf5/install/lib
ENV PATH=$PATH:/opt/hdf5/install/include
ENV PATH=$PATH:/opt/hdf5/install/share
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/hdf5/install/lib

# Build-time environment variables and Python packages used by QMCPACK tools
ENV CUDALIB='/usr/local/cuda/lib64'
ENV HDF5_ROOT='/opt/hdf5/install'
ENV FFTW_HOME='/opt/fftw/install'
ENV FFTW_INC='/opt/fftw/install/include'
RUN pip install --no-cache-dir --upgrade pip setuptools
RUN python -m pip install mpi4py -i https://pypi.anaconda.org/mpi4py/simple
RUN python -m pip install numpy
RUN python -m pip install h5py pandas matplotlib pyscf scipy

# Install Boost from source
WORKDIR /opt
RUN git clone --recursive https://github.com/boostorg/boost.git
RUN cd /opt/boost                                                           && \
    ./bootstrap.sh                                                          && \
    ./b2 --prefix=/opt/boost/install
ENV PATH=$PATH:/opt/boost/bin.v2
ENV PATH=$PATH:/opt/boost
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/boost/lib

# Install QMCPack
WORKDIR /opt
ENV MKL_ROOT='/opt/intel/oneapi/mkl/2025.0'
RUN git clone -b develop https://github.com/QMCPACK/qmcpack.git
RUN git config --global --add safe.directory /opt/qmcpack
RUN cd /opt/qmcpack/build                                                   && \
    cmake -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx                 \
          -DMKL_ROOT=/opt/intel/oneapi/mkl/2025.0/lib                          \
          -DMKL_INCLUDE_DIR=/opt/intel/oneapi/mkl/2025.0/include               \
          -DMKL_LIBRARY=/opt/intel/oneapi/mkl/2025.0/lib                       \
          -DFFTW_HOME=/opt/fftw/install                                        \
          -DCMAKE_INSTALL_PREFIX=/opt/qmcpack/install                          \
          -DQMC_COMPLEX=ON -DQMC_MIXED_PRECISION=ON -DQMC_GPU="cuda"           \
          -DCUDA_HOST_COMPILER=gcc -DQMC_GPU_ARCHS="sm_80" ..                && \
    make -j 8                                                               && \
    make install
ENV PATH=$PATH:/opt/qmcpack/install/bin
ENV PATH=$PATH:/opt/qmcpack/utils/afqmctools/bin
ENV PYTHONPATH=$PYTHONPATH:/opt/qmcpack/utils/afqmctools
ENV PYTHONPATH=$PYTHONPATH:/opt/qmcpack/utils/afqmctools/afqmctools
RUN ln -s -f /opt/qmcpack/install/bin/qmcpack_complex /opt/qmcpack/install/bin/qmcpack
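
To rebuild or customize the image, the Containerfile above can be built with Docker (or Podman) on a workstation, pushed to a registry reachable from NERSC, and then pulled into Shifter. A brief sketch, where myuser/qmcpack:custom is a placeholder repository and tag:

workstation$ docker build -t myuser/qmcpack:custom -f Containerfile .
workstation$ docker push myuser/qmcpack:custom

perlmutter$ shifterimg pull docker:myuser/qmcpack:custom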

User Contributed Information

Please help us improve this page

Users are invited to contribute helpful information and corrections through our GitLab repository.