Example Dockerfiles for Shifter

The easiest way to run your software in a Shifter container is to create a Docker image, push it to Docker Hub, and then pull it into the Shifter ImageGateway, which creates the corresponding Shifter image.
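In outline, the workflow looks like the following sketch, where the image name (myuser/myimage:latest) is a placeholder you should replace with your own:

# build the image locally and push it to Docker Hub
docker build -t myuser/myimage:latest .
docker push myuser/myimage:latest

# then, on a login node, pull it into the Shifter ImageGateway
shifterimg pull myuser/myimage:latest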

Using Python

Python containers are commonly used with Shifter. For detailed examples and best practices, refer to our guide on How to use Python in Shifter.
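As a starting point, a Python image can be quite small; a minimal sketch (the base image and packages here are illustrative only, not a recommendation):

# start from a slim Python base image
FROM python:3.11-slim

# install the scientific Python packages your workflow needs
RUN pip install --no-cache-dir numpy scipy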

Shifter can also be integrated with Jupyter notebooks. To learn how to use a Shifter image as a Jupyter kernel, see our documentation on Using Shifter in Jupyter.
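The general mechanism is to register a Jupyter kernel whose argv launches Python inside the container. A sketch, assuming a hypothetical image name (myuser/mypython:latest) and that ipykernel is installed inside the image; see the linked documentation for the exact NERSC setup:

# register a user-level Jupyter kernel that runs inside a Shifter container
mkdir -p ~/.local/share/jupyter/kernels/shifter-python
cat > ~/.local/share/jupyter/kernels/shifter-python/kernel.json <<'EOF'
{
  "display_name": "Python (Shifter)",
  "language": "python",
  "argv": [
    "shifter", "--image=myuser/mypython:latest",
    "python", "-m", "ipykernel_launcher", "-f", "{connection_file}"
  ]
}
EOF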

Using Open MPI - example ORCA chemistry package

Some applications are hard coded to require a certain version of Open MPI. One example is the ORCA chemistry package. These applications can be run on our system in a Shifter image.

Here's an example Dockerfile to build an image with Open MPI. It downloads the Open MPI tarball, installs it in /usr, and configures MPI to communicate via ssh. Note that ORCA requires the --disable-builtin-atomics flag.

FROM ubuntu:20.04

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update && apt-get install -y build-essential apt-utils ssh gfortran wget

RUN cd / && wget https://www.open-mpi.org/software/ompi/v4.1/downloads/openmpi-4.1.1.tar.bz2 \
    && tar xvjf openmpi-4.1.1.tar.bz2 && cd openmpi-4.1.1 \
    && ./configure --prefix=/usr --disable-builtin-atomics && make && make install \
    && cd / && rm -rf /openmpi-4.1.1 /openmpi-4.1.1.tar.bz2

RUN echo "--mca plm ^slurm" > /usr/etc/openmpi-mca-params.conf

You can build and upload this image yourself, or you can use our copy at stephey/orca:3.0.

We will demonstrate how to use this image to run ORCA.

The ORCA binaries are precompiled and released as several tarballs. We typically suggest installing your software stack inside your image, but ORCA is an exception: since the total package is so large (~40 GB), we suggest installing it on our filesystems rather than inside the image itself. To make sure your ORCA binaries aren't purged, install them on our /global/common/software filesystem. The default quotas on this filesystem can be small, so you may need to request a quota increase first; we almost always grant these requests, so please don't hesitate to open a ticket to ask for more space.
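Installing ORCA is then just a matter of unpacking the release tarballs into that directory. A minimal sketch, where the tarball name is a placeholder for whichever ORCA release you downloaded:

# unpack the ORCA release into /global/common/software so it is not purged
mkdir -p /global/common/software/<your project>/<your username>/orca
tar xf orca_<version>.tar.xz \
    -C /global/common/software/<your project>/<your username>/orca --strip-components 1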

Here is an example jobscript that runs the ORCA package installed on our /global/common/software filesystem inside our Open MPI Shifter image. Note that you'll need to adjust it to include your project directory and your username.

#!/bin/bash
#SBATCH -q debug
#SBATCH -N 2
#SBATCH -t 00:10:00
#SBATCH -C cpu
#SBATCH --image=stephey/orca:3.0

# populate the node list (hostfile) for ORCA's parallel launcher
scontrol show hostnames $SLURM_JOB_NODELIST > parent_salen.nodes
shifter /global/common/software/<your project>/<your username>/orca/orca parent_salen.inp
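Before submitting, you can sanity check the image interactively; for example, to confirm the Open MPI version inside the container:

# verify that the Open MPI build inside the image is the expected version
shifter --image=stephey/orca:3.0 mpirun --version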

HEP/HENP Software Stacks

DESI

DESI jobs use community-standard, publicly available software, independent of the Linux distro flavor. This example is built with an Ubuntu base.

The build involves the following steps:

  1. Start with an Ubuntu base image
  2. Install all the needed standard packages from the Ubuntu repositories
  3. Compile Astrometry.net, Tractor, TMV, and GalSim
  4. Set up the needed environment variables along the way

Build the image:

# Build DESI software environment on top of an Ubuntu base
FROM ubuntu:16.04
MAINTAINER Mustafa Mustafa <mmustafa@lbl.gov>

# install astrometry and tractor dependencies
RUN apt-get update && \
    apt-get install -y wget make git python python-dev python-matplotlib \
                       gcc swig python-numpy libgsl2 gsl-bin pkg-config \
                       zlib1g-dev libcairo2-dev libnetpbm10-dev netpbm \
                       libpng12-dev libjpeg-dev python-pyfits zlib1g-dev \
                       libbz2-dev libcfitsio3-dev python-photutils python-pip && \
    pip install fitsio

ENV PYTHONPATH /desi_software/astrometry_net/lib/python:$PYTHONPATH
ENV PATH /desi_software/astrometry_net/lib/python/astrometry/util:$PATH
ENV PATH /desi_software/astrometry_net/lib/python/astrometry/blind:$PATH

# ------- install astrometry
RUN mkdir -p /desi_software/astrometry_net && \
         git clone https://github.com/dstndstn/astrometry.net.git && \
         cd astrometry.net && \
         make install INSTALL_DIR=/desi_software/astrometry_net &&\
         cd / && \
         rm -rf astrometry.net

# ------- install tractor
RUN mkdir -p /desi_software/tractor && \
         git clone https://github.com/dstndstn/tractor.git && \
         cd tractor && \
         make && \
         python setup.py install --prefix=/desi_software/tractor/

ENV PYTHONPATH /desi_software/tractor/lib/python2.7/site-packages:$PYTHONPATH

# ------- install missing GalSim dependencies (others have been installed above)
RUN apt-get install -y python-future python-yaml python-pandas scons fftw3-dev libboost-all-dev

# ------- install TMV
RUN wget https://github.com/rmjarvis/tmv/archive/v0.73.tar.gz -O tmv.tar.gz && \
         gunzip tmv.tar.gz && \
         mkdir tmv && tar xf tmv.tar -C tmv --strip-components 1 && \
         cd tmv && \
         scons && \
         scons install && \
         cd / && \
         rm -rf tmv.tar tmv

# ------- install GalSim
RUN wget https://github.com/GalSim-developers/GalSim/archive/v1.4.2.tar.gz -O GalSim.tar.gz && \
         gunzip GalSim.tar.gz && \
         mkdir GalSim && tar xf GalSim.tar -C GalSim --strip-components 1 && \
         cd GalSim && \
         scons && \
         scons install && \
         cd / && \
         rm -rf GalSim.tar GalSim

STAR

The STAR experiment software stack is typically built and run on Scientific Linux.

There are two ways to build the STAR image: compile all of the stack components one by one, or copy precompiled libraries into the image. We chose the latter in this example.

We use a publicly available SL6.4 Docker base image, install the needed RPMs, extract precompiled binary tarballs into the image, and finally install some additional software needed to run STAR jobs.

# Example Dockerfile to show how to build the STAR
# environment image from binary tarballs. Not necessarily
# the one currently used for the STAR docker image build
FROM ringo/scientific:6.4
MAINTAINER Mustafa Mustafa <mmustafa@lbl.gov>

# RPMs
RUN yum -y install libxml2 tcsh libXpm.i686 libc.i686 libXext.i686 \
                   libXrender.i686 libstdc++.i686 fontconfig.i686 \
                   zlib.i686 libgfortran.i686 libSM.i686 mysql-libs.i686 \
                   gcc-c++ gcc-gfortran glibc-devel.i686 xorg-x11-xauth \
                   wget make libxml2.so.2 gdb libXtst.{i686,x86_64} \
                   libXt.{i686,x86_64} glibc glibc-devel gcc-c++

# Dev Tools
RUN wget -O /etc/yum.repos.d/slc6-devtoolset.repo \
     https://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo && \
 yum -y install devtoolset-2-toolchain
COPY enable_scl /usr/local/star/group/templates/

# untar STAR OPT
COPY optstar.sl64_gcc482.tar.gz /opt/star/
COPY installstar /
RUN python installstar SL16c && \
 rm -f installstar &&         \
 rm -f optstar.sl64_gcc482.tar.gz

# untar ROOT
COPY rootdeb-5.34.30.sl64_gcc482.tar.gz /usr/local/star/
COPY installstar /
RUN python installstar SL16c && \
 rm -f installstar && \
 rm -f rootdeb-5.34.30.sl64_gcc482.tar.gz

# untar STAR library
COPY SL16d.tar.gz /usr/local/star/packages/
COPY installstar /
RUN python installstar SL16d && \
 rm -f installstar && \
 rm -f /usr/local/star/packages/SL16d.tar.gz

# DB load balancer
COPY dbLoadBalancerLocalConfig_generic.xml /usr/local/star/packages/SL16d/StDb/servers/

# production pipeline utility macros
COPY Hadd.C /usr/local/star/packages/SL16d/StRoot/macros/
COPY lMuDst.C /usr/local/star/packages/SL16d/StRoot/macros/
COPY checkProduction.C /usr/local/star/packages/SL16d/StRoot/macros/

# Special RPMs for production at NERSC; Open MPI, mysql-server
RUN yum -y install libibverbs.x86_64 environment-modules infinipath-psm-devel.x86_64 \
 librdmacm.x86_64 opensm.x86_64 papi.x86_64 && \
 wget https://mirror.centos.org/centos/6.8/os/x86_64/Packages/openmpi-1.10-1.10.2-2.el6.x86_64.rpm && \
 rpm -i openmpi-1.10-1.10.2-2.el6.x86_64.rpm && \
 rm -f openmpi-1.10-1.10.2-2.el6.x86_64.rpm && \
 yum -y install glibc-devel devtoolset-2-libstdc++-devel.i686 && \
 yum -y install mysql-server mysql

# add open mpi library to LD Path
ENV LD_LIBRARY_PATH /usr/lib64/openmpi-1.10/lib/

STAR MySQL DB

STAR jobs need access to a read-only MySQL server which provides conditions and calibration tables.

We have found that job scalability is not ideal when the MySQL server is outside the system network. Our solution was to run a local MySQL server on each node; this server services all of the threads running on that node (e.g. 32 core threads). We chose to overcommit the cores, i.e. 32 production threads plus 1 MySQL server running on 32 cores.

The DB payload (~30 GB) resides on Lustre. We found that a server accessing the payload directly from the Lustre filesystem does not perform well for this I/O pattern; it takes more than 30 minutes to cache the first few requests. This is where Shifter's XFS per-node write cache capability (perNodeCache) came in handy. As soon as the job starts, we copy the payload from the Lustre filesystem into the XFS file mount and point the DB server at this copy. Copying the 30 GB payload takes 1-3 minutes. The performance gain was stunning: caching time dropped from 30 minutes to less than 1 minute, and it also gave us trivial scalability in the number of concurrent jobs.

Below are the relevant lines from our slurm batch file:

Request a 50 GB perNodeCache and mount it at /mnt in the Shifter image.

#!/bin/bash
#SBATCH --image=mmustafa/sl64_sl16d:v1_pdsf6
#SBATCH --volume=/global/cscratch1/sd/mustafa/:/mnt:perNodeCache=size=50G

Launch the Shifter container:

shifter /bin/csh <<EOF

Copy the payload to /mnt, then launch the DB server:

# Prepare DB...
cd /mnt
cp -r -p /global/cscratch1/sd/mustafa/mysql51VaultStar6/ .
/usr/bin/mysqld_safe --defaults-file=/mnt/mysql51VaultStar6/my.cnf --skip-grant-tables &
sleep 30
# ... the production commands follow here, then the heredoc is closed:
EOF

Additional Examples

NERSC maintains a repository of official container images that serve as excellent examples and starting points for your own Shifter containers. You can find these images in the NERSC Official Images Repository.
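You can also browse what is already available on the system directly:

# list the Shifter images currently available on the system
shifterimg images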

Suggestions for useful images are welcome at our helpdesk.