PyTorch
PyTorch is a high-productivity Deep Learning framework based on dynamic computation graphs and automatic differentiation. It is designed to be as close to native Python as possible for maximum flexibility and expressivity.
Using PyTorch at NERSC
There are multiple ways to use and run PyTorch on NERSC systems like Cori and Cori-GPU.
Using NERSC PyTorch modules
The first approach is to use our provided PyTorch modules. This is the easiest and fastest way to get PyTorch with all the features supported by the system. The CPU versions for running on Haswell and KNL are named like `pytorch/{version}`. These are built from source with MPI support for distributed training. The GPU versions for running on Cori-GPU are named like `pytorch/{version}-gpu`. These are built with CUDA and NCCL support for GPU-accelerated distributed training. You can see which PyTorch versions are available with `module avail pytorch`. We generally recommend using the latest version to get all the latest PyTorch features.
As an example, to load PyTorch 1.7.1 for running on CPU (Haswell or KNL), you should do:
module load pytorch/1.7.1
To load the equivalent version for running on Cori-GPU, do:
module load cgpu pytorch/1.7.1-gpu
You can customize these module environments by installing your own python packages on top. Simply do a user install with pip:
pip install --user ...
The modulefiles automatically set the `$PYTHONUSERBASE` environment variable for you, so that you will always have your custom packages every time you load that module.
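For example, to add a package on top of the `pytorch/1.7.1` module, a minimal sketch (using `torchinfo` purely as an illustrative package) could look like:

```bash
# Load the NERSC PyTorch module first so $PYTHONUSERBASE points at the
# per-module user install location
module load pytorch/1.7.1

# Install an extra package into your user area (torchinfo is just an example)
pip install --user torchinfo

# The package will now be importable whenever this module is loaded
python -c "import torchinfo"
```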
Installing PyTorch yourself
Alternatively, you can install PyTorch into your own software environments. This allows you to have full control over the included packages and versions. It is recommended to use conda as described in our Python documentation. Follow the appropriate installation instructions at: https://pytorch.org/get-started/locally/.
Note that if you install PyTorch via conda, it will not have MPI support. However, you can install PyTorch with GPU and NCCL support via conda.
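For example, a minimal sketch of a conda-based install (the environment name is arbitrary, and the exact package/channel spec should be taken from the pytorch.org selector for the version and CUDA toolkit you need) might look like:

```bash
# Start from the NERSC-provided Python/conda installation
module load python

# Create and activate a dedicated environment
conda create -n my-pytorch-env python=3.8 -y
source activate my-pytorch-env

# Install PyTorch with GPU support; adjust versions and channels per pytorch.org
conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia -y
```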
If you need to build PyTorch from source, you can refer to our build scripts for PyTorch in the nersc-pytorch-build repository. If you need assistance, please open a support ticket at http://help.nersc.gov/.
Containers
It is also possible to use your own docker containers with PyTorch on Cori with shifter. Refer to the NERSC shifter documentation for help deploying your own containers.
On Cori-GPU and Perlmutter, we provide NVIDIA GPU Cloud (NGC) containers. They are named like `nersc/pytorch:ngc-20.09-v0`. Note that on Perlmutter, best performance for multi-node distributed training using containers is achieved by using the [nccl-2.15 shifter module](../shifter/how-to-use.md#shifter-nccl-2.15-module), along with the default `gpu` shifter module.
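As an illustration, a shifter batch script on Perlmutter could look roughly like the following sketch; the image tag, shifter modules, resource requests, and `train.py` are placeholders to adapt to your own job:

```bash
#!/bin/bash
#SBATCH -C gpu
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4
#SBATCH --image=nersc/pytorch:ngc-20.09-v0
#SBATCH --module=gpu,nccl-2.15

# Launch one training process per GPU inside the container
srun shifter python train.py
```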
Running PyTorch on Perlmutter
Running PyTorch on Perlmutter is currently much the same as running on Cori-GPU.
As of this writing, we have one module available with PyTorch 1.9 built from source with NCCL 2.9.8. Note that on Perlmutter we use Lmod for modules, but the syntax is familiar for basic usage:
module load pytorch/1.9.0
You can also use custom conda environments and shifter containers on Perlmutter.
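For example, a quick interactive sanity check on a Perlmutter GPU node might look like this (the account name and time limit are placeholders):

```bash
module load pytorch/1.9.0
salloc -C gpu -N 1 --gpus-per-node=4 -t 30 -q interactive -A <account>
srun -n 1 python -c "import torch; print(torch.cuda.is_available())"
```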
Please refer to Perlmutter known issues for additional problems and suggested workarounds.
Distributed training
PyTorch makes it fairly easy to get up and running with multi-gpu and multi-node training via its distributed package. For an overview, refer to the PyTorch distributed documentation.
On Perlmutter, best performance for multi-node distributed training using containers is achieved by using the `nccl-2.15` shifter module, along with the default `gpu` shifter module.
See below for some complete examples for PyTorch distributed training at NERSC.
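To illustrate the core pattern (this is a generic sketch, not NERSC-specific), a minimal data-parallel training script might look like the following; the `RANK`, `WORLD_SIZE`, `MASTER_ADDR`, and `MASTER_PORT` environment variables are assumed to be set by your launcher, e.g. exported from SLURM variables in a batch script:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # RANK / WORLD_SIZE (and MASTER_ADDR / MASTER_PORT for the default
    # env:// rendezvous) are assumed to be set by the launcher, e.g. from
    # SLURM_PROCID / SLURM_NTASKS in your batch script.
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])

    # Use the NCCL backend for GPU training ("gloo" works for CPU-only runs).
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)

    # Pin each process to one local GPU.
    local_rank = rank % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    # Wrap an ordinary model so gradients are averaged across all workers.
    model = torch.nn.Linear(32, 4).cuda()
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    # Toy training loop with random data, just to show the structure.
    for step in range(10):
        x = torch.randn(64, 32, device="cuda")
        y = torch.randn(64, 4, device="cuda")
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()       # gradients are all-reduced here by DDP
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```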
Performance optimization
To optimize performance of PyTorch model training workloads on NVIDIA GPUs, we refer you to our Deep Learning at Scale Tutorial material from SC22, which includes guidelines for optimizing performance on a single NVIDIA GPU as well as best practices for scaling up model training across many GPUs and nodes.
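As one example of a common single-GPU optimization, automatic mixed precision can substantially speed up training on NVIDIA GPUs; a minimal sketch of the usual `torch.cuda.amp` pattern (the model, data, and hyperparameters here are placeholders) is:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = GradScaler()  # scales the loss to avoid fp16 gradient underflow

for step in range(100):
    x = torch.randn(256, 1024, device="cuda")
    target = torch.randn(256, 1024, device="cuda")

    optimizer.zero_grad()
    with autocast():  # run the forward pass in mixed precision
        loss = torch.nn.functional.mse_loss(model(x), target)

    scaler.scale(loss).backward()  # backprop on the scaled loss
    scaler.step(optimizer)         # unscale gradients and update weights
    scaler.update()                # adjust the scale factor for the next step
```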
Examples and tutorials
There is a set of example problems, datasets, models, and training code in this repository: https://github.com/NERSC/pytorch-examples
This repository can serve as a template for your research projects, with a flexible, organized layout and code structure. It also demonstrates how you can launch data-parallel distributed training jobs on our systems. The examples include MNIST image classification with a simple CNN and CIFAR10 image classification with a ResNet50 model.
For a general introduction to coding in PyTorch, you can check out the tutorial by Evann Courdier from the DL4Sci school at Berkeley Lab in 2020.
Additionally, for an example focused on performance and scaling, we have the material and code example from our Deep Learning at Scale tutorial at SC22.
Finally, PyTorch has a nice set of official tutorials you can learn from as well.