
Machine Learning benchmarking at NERSC

NERSC uses both standard framework-oriented benchmarks and scientific benchmarks from research projects to characterize our systems for scientific Deep Learning.

Framework benchmarks

TensorFlow

We ran a version of the tf_cnn_benchmarks suite, as well as a DCGAN model, on Cori.

[Training results plots]
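For context, here is a minimal sketch of the kind of measurement these benchmarks make: timing training steps of a standard CNN on synthetic data and reporting images/sec. This is not the tf_cnn_benchmarks code itself; the model choice (ResNet-50), input shape, and step count are illustrative assumptions, and it uses the Keras API rather than the benchmark's own training loop.

```python
import time
import tensorflow as tf

# Synthetic ImageNet-like batch; using synthetic data isolates compute from I/O.
batch_size = 64
images = tf.random.uniform((batch_size, 224, 224, 3))
labels = tf.random.uniform((batch_size,), maxval=1000, dtype=tf.int32)

model = tf.keras.applications.ResNet50(weights=None)
model.compile(optimizer="sgd",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False))

# Warm up once, then time a fixed number of steps and report throughput.
model.train_on_batch(images, labels)
n_steps = 20
start = time.time()
for _ in range(n_steps):
    model.train_on_batch(images, labels)
elapsed = time.time() - start
print(f"{n_steps * batch_size / elapsed:.1f} images/sec")
```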

PyTorch

We have a repository of benchmarks with standard computer vision models, an LSTM, and 3D convolutional models here: https://github.com/sparticlesteve/pytorch-benchmarks

We compare PyTorch software installations and hardware, and we analyze scaling performance using the PyTorch distributed library with MPI. See the notebooks linked below for numbers and plots.
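As an illustration of this setup, below is a minimal sketch of data-parallel training with torch.distributed using the MPI backend. It assumes a PyTorch build with MPI support and a launch under an MPI launcher such as srun or mpirun; the tiny linear model and batch shapes are placeholders, not the benchmark models.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# The MPI backend picks up rank and world size from the MPI launcher
# (e.g. srun or mpirun), so no explicit rank/world_size arguments are needed.
dist.init_process_group(backend="mpi")

model = torch.nn.Linear(128, 10)  # stand-in for a benchmark model
model = DistributedDataParallel(model)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One synthetic training step; DistributedDataParallel all-reduces
# gradients across ranks during backward().
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
loss = torch.nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```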

Software versions

Results for a handful of software versions that were available on the Cori system are in this notebook:

https://github.com/sparticlesteve/pytorch-benchmarks/blob/master/notebooks/SoftwareAnalysis.ipynb

[Training throughput results plot]

Hardware comparisons

Results comparing training throughput on Cori Haswell, KNL, and GPU nodes are in this notebook:

https://github.com/sparticlesteve/pytorch-benchmarks/blob/master/notebooks/HardwareAnalysis.ipynb

Scaling analysis

Throughput scaling results on Cori Haswell with Intel PyTorch v1.0.0 are available here:

https://github.com/sparticlesteve/pytorch-benchmarks/blob/master/notebooks/ScalingAnalysis.ipynb

Scientific Deep Learning Benchmarks

HEP-CNN

The HEP-CNN benchmark trains a simple Convolutional Neural Network to classify LHC collision detector images as signal or background.
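As a rough illustration (not the benchmark's actual architecture), the sketch below shows a simple PyTorch CNN producing a single signal-vs-background logit from image-like input. The channel counts and the 64x64 input size are invented for the example.

```python
import torch
import torch.nn as nn

class SimpleHEPCNN(nn.Module):
    """Toy stand-in for a detector-image classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # One logit: signal vs. background.
        self.classifier = nn.Linear(32 * 16 * 16, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SimpleHEPCNN()
x = torch.randn(8, 3, 64, 64)            # batch of synthetic "detector images"
labels = torch.randint(0, 2, (8, 1)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(x), labels)
```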

CosmoFlow

The CosmoFlow benchmark trains a 3D Convolutional Neural Network to predict cosmological parameters from simulated universe volumes.
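The shape of that task can be sketched as a small 3D CNN regressing a handful of parameters from a single-channel volume, as below. The 64^3 volume, channel counts, and four-parameter target are illustrative assumptions, not the benchmark's actual configuration.

```python
import torch
import torch.nn as nn

class Simple3DRegressor(nn.Module):
    """Toy stand-in for a 3D-CNN cosmology-parameter regressor."""
    def __init__(self, n_params=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.regressor = nn.Linear(32 * 16 * 16 * 16, n_params)

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

model = Simple3DRegressor()
volumes = torch.randn(2, 1, 64, 64, 64)  # synthetic universe volumes
targets = torch.randn(2, 4)              # synthetic cosmological parameters
loss = nn.functional.mse_loss(model(volumes), targets)
```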

CosmoGAN

The CosmoGAN benchmark trains a generative adversarial network to produce realistic weak-lensing convergence maps.

Deep Learning Climate Segmentation

The Deep Learning Climate Segmentation benchmark trains a deep neural network to segment extreme weather phenomena, such as atmospheric rivers and tropical cyclones, in climate simulation data.