Machine Learning benchmarking at NERSC

NERSC uses both standard framework-oriented benchmarks and scientific benchmarks from research projects to characterize our systems for scientific Deep Learning.

Framework benchmarks


We ran a version of the tf_cnn_benchmarks repository as well as a DCGAN model on Cori.

Training results


We have a repository of benchmarks with standard computer vision models, an LSTM, and 3D convolutional models here:

We compare PyTorch software installations and hardware, and we analyze scaling performance using the PyTorch distributed library with an MPI backend. See the notebooks linked below for numbers and plots.
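The distributed setup mentioned above can be sketched as follows. This is a minimal illustration, not the benchmark repository's actual code: the helper name, the placeholder model, and the launcher environment check are assumptions. With the MPI backend, rank and world size come from the MPI launcher (e.g. srun or mpirun), so no explicit rank arguments are passed.

```python
import os
import torch
import torch.distributed as dist

def init_distributed(backend="mpi"):
    # With the MPI backend, rank and world size are discovered from the
    # MPI launcher environment, so no rank/world_size arguments are needed.
    dist.init_process_group(backend=backend)
    return dist.get_rank(), dist.get_world_size()

# Only attempt initialization when actually launched under MPI with an
# MPI-enabled PyTorch build; otherwise this just defines the helper above.
# (The OMPI_COMM_WORLD_SIZE check is an Open MPI-specific assumption.)
if dist.is_mpi_available() and "OMPI_COMM_WORLD_SIZE" in os.environ:
    rank, world_size = init_distributed()
    # Wrap a placeholder model for synchronous data-parallel training.
    model = torch.nn.parallel.DistributedDataParallel(torch.nn.Linear(64, 10))
```

In this pattern each rank processes a different shard of the data, and DistributedDataParallel averages gradients across ranks during the backward pass.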

Software versions

Results for a handful of software versions that were available on the Cori system are in this notebook:

Training throughput results: Training results

Hardware comparisons

Results comparing training throughput on Cori Haswell, KNL, and GPU are here:

Scaling analysis

Throughput scaling results on Cori Haswell with Intel PyTorch v1.0.0 are available here:
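The scaling analysis above boils down to comparing per-node throughput at each node count against the single-node baseline. A minimal sketch of that calculation, using made-up placeholder numbers rather than measured Cori results:

```python
# Hedged sketch: weak-scaling efficiency from aggregate training
# throughput. The sample numbers are hypothetical, not measured results.
def scaling_efficiency(throughputs, base_nodes=1):
    """throughputs: dict mapping node count -> aggregate samples/sec.

    Returns per-node throughput at each scale relative to the baseline.
    """
    base = throughputs[base_nodes] / base_nodes
    return {n: (t / n) / base for n, t in sorted(throughputs.items())}

example = {1: 100.0, 2: 190.0, 4: 360.0}  # hypothetical samples/sec
eff = scaling_efficiency(example)
# Efficiency is 1.0 at the baseline and typically decays with node count
# as communication overhead grows.
```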

Scientific Deep Learning Benchmarks


The HEP-CNN benchmark trains a simple Convolutional Neural Network to classify LHC collision detector images as signal or background.
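The kind of model HEP-CNN trains can be sketched as a small convolutional classifier producing a signal-vs-background logit. This is an illustrative stand-in, assuming arbitrary layer sizes and input shape; it is not the benchmark's actual architecture.

```python
import torch
import torch.nn as nn

# Hedged sketch of a simple CNN classifier in the spirit of HEP-CNN.
# Channel counts and the 64x64 input size are assumptions for illustration.
class SimpleHEPCNN(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims
        )
        self.classifier = nn.Linear(32, 1)  # single signal/background logit

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SimpleHEPCNN()
logits = model(torch.randn(4, 3, 64, 64))  # batch of 4 detector "images"
```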


The CosmoFlow benchmark trains a 3D Convolutional Neural Network to predict cosmological parameters from simulated universe volumes.
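The corresponding model shape for CosmoFlow is a 3D convolutional network that regresses a small vector of parameters from a volumetric input. The sketch below is an assumption-laden miniature, not the benchmark's real network; the 32³ volume and four-parameter output are illustrative.

```python
import torch
import torch.nn as nn

# Hedged sketch of a 3D CNN regressor in the spirit of CosmoFlow.
# Layer sizes, volume size, and n_params are illustrative assumptions.
class Tiny3DCNN(nn.Module):
    def __init__(self, n_params=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse the 3D spatial dims
        )
        self.head = nn.Linear(16, n_params)  # cosmological parameter vector

    def forward(self, x):
        return self.head(self.conv(x).flatten(1))

model = Tiny3DCNN()
params = model(torch.randn(2, 1, 32, 32, 32))  # two simulated volumes
```

Unlike the HEP-CNN classifier, this is a regression head: training would typically use a mean-squared-error loss against the true parameters.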


Deep Learning Climate Segmentation