Perlmutter Timeline

This page records a brief timeline of significant events and user environment changes on Perlmutter. Please see our current known issues page for a list of known issues on Perlmutter.

March 15, 2023

  • An SS11 feature for performant GPU-RDMA has been temporarily disabled to mitigate a critical issue leading to node failures. This is expected to substantially affect the performance of applications that use GPU-RDMA capabilities for inter-node communication (such as CUDA-aware MPI or GASNet), but will allow jobs that were previously crashing to run. We expect to remove this mitigation during the next scheduled maintenance on March 22, 2023.

March 8, 2023

  • The programming environment has been updated to CPE 23.02. A full list of changes is available in the HPE changelog, but notable changes include:
    • Cray MPICH 8.1.24 from 8.1.22
    • Cray PMI 6.1.9 from 6.1.7
    • HDF5 1.12.2.3 from 1.12.2.1
    • Parallel NetCDF 1.12.3.3 from 1.12.3.1
  • Moved to using the open NVIDIA driver instead of the proprietary driver (keeping the version the same at 515.65.01) to include new functionality that will enable sharing GPU nodes.
  • Moved the gateway nodes (nodes used for communication with external resources like NGF) to the “stock” SLES kernel to leverage the kernel fastpath feature for packet forwarding. This is intended to improve read and write rates to NGF file systems.
  • General software and cabling updates to increase system stability and performance.
    • Recabled the internal network to minimize the impact of a switch failure on the high-speed network. By ensuring that the fabric manager (the software that controls Perlmutter’s interior network) maintains connectivity to each high-speed network group, this provides additional resiliency system-wide.
    • Upgraded the Slingshot software to add improvements in the retry handler algorithm (which governs how missed packets are handled), to reduce amplification of messages about inaccessible nodes, and to fix a bug that caused certain user codes to crash. This is intended to increase network and compute resiliency.
    • Updated the COS software (a software layer that contains the underlying SLES OS as well as interfaces to other components like the file system) to pick up an upstream SLES fix addressing a readahead issue that was crashing nodes. This is intended to improve system stability.
    • Updated Neo, the file system control software, to 6.2-015. This is intended to increase file system stability and performance.

February 23, 2023

  • Numerous changes intended to improve network stability and file system access
    • Further changes to the TCP queue discipline to more fairly allocate bandwidth on our gateway nodes (nodes used for communication with external resources like NGF) between all the TCP streams from the computes while keeping latencies at optimal levels. This is intended to improve access to file systems like CFS, global common and global homes for batch jobs.
    • Connected gateway nodes to new switches. This is intended to improve network performance and resiliency.
    • Multiple cable adjustments to bring system into compliance with its cabling plan. This is intended to simplify maintenance and improve resiliency.

February 19, 2023

  • Multi-connection over TCP set back to 1 (from 2) for CFS to mitigate a bug.

February 15, 2023

  • Numerous changes intended to improve network stability and file system access.
    • General Network Issues:
      • Added static ARP entries for the compute nodes. This is intended to fix the issue where a large number of jobs either failed to launch or started in Slurm but produced no output.
      • Increased sweep frequency for the fabric manager, the software that controls Perlmutter’s interior network, and added shorter timeouts throughout the system. These changes are intended to improve general network robustness and shorten recovery times for component failures.
      • Replaced a defective network switch.
      • Added code to filter bad unicast traffic from the network. This is intended to improve network stability.
    • Filesystem hangs and slowness:
      • Adjusted port policy for Lustre file system and debugged long recovery times for single component failure in Lustre. This is intended to inform and simplify future work to improve Lustre reliability and responsiveness.

February 9, 2023

  • Numerous changes intended to improve network stability and file system access.
    • General Network Issues:
      • Changed the default TCP queue discipline to fq_codel to address a missing patch in SLES 15SP4. This is intended to address many of the communication failures larger-scale jobs are experiencing.
      • Recabled (replaced defective equipment and corrected misconnected cables). This is a lengthy physical process and is intended to increase network stability.
      • Updated firmware to improve network link stability and reliability of key I/O nodes.
    • Filesystem hangs, slowness, and I/O errors:
      • Numerous network tuning changes (increased multi-connection over TCP to 2 to maximize connectivity to Spectrum Scale servers, ipoib parameter changes, etc.). This is intended to reduce stale file handles and job failures from nodes getting expelled from the Spectrum Scale cluster.
      • Components that use DVS (an I/O forwarding service within Perlmutter) have been converted to using other delivery methods. This is intended to reduce the number of software components in use in order to simplify debugging of the network issues. Most users will not be affected by this change.
        • Users using read-only mounts of CFS may see slower or more-variable performance while the system is in this configuration.
        • cvmfs is now delivered using native clients and loop mounted file systems for caching following the recommended HPC recipe.
  • Podman deployed on the system.

February 1, 2023

  • Network updates intended to improve stability. This maintenance was appended to the unscheduled outage to minimize disruption to users.
    • Automatically reboot compute nodes that are in a particular fail mode (softlockup). These nodes cause instability in our Spectrum Scale file systems (CFS, homes, and global common) and rebooting them is intended to reduce file system hangs and outages.
    • Changed protocol for communication to the Spectrum Scale cluster to RDMA on Perlmutter login nodes. This is intended to reduce issues accessing the file system.

January 25, 2023

  • Updates intended to improve network stability and access of cvmfs.

January 19, 2023

  • Network update intended to improve stability and address Lustre performance issues.
  • Preemptible jobs on Perlmutter are free until February 19, 2023.
  • Darshan module removed from list of default modules loaded at start up.

December 21, 2022

  • Major hardware issues that were impacting network performance have been addressed, and Perlmutter has undergone full-scale stress testing.
  • The Slingshot software stack has been updated to a new version that is expected to be more robust.
  • 256 GPU nodes with double the GPU-attached memory have been added to the system. Please see our jobs policy page for instructions on how to access them.
  • 1536 new CPU nodes have been added to the system.
  • A new NVIDIA NCCL plugin has been installed that uses the Slingshot network more efficiently. This has been integrated into NERSC’s machine learning and vasp modules, so if you use these modules no further action is needed. However, other workflows will need some adjusting:
    • You will now need to use the --module=nccl-2.15 flag to access the new NCCL plugin in Shifter. Please see our shifter documentation for instructions.
    • If you install your own software that depends on NCCL, please use the new nccl module to get access to the new NCCL plugin libraries.
    • Some machine learning workloads running in older NGC containers (versions from before 2022) may encounter performance variability. These issues can be fixed by upgrading the container to a more recent version. Please see our machine learning known issues page for details.
  • The OS has been updated to SLES SP4 and the programming environment has been updated to CPE 22.11.
    • Due to changes in the SLES SP4 system libraries, changes may be required for conda environments built or invoked without using the NERSC provided python module. Users may see errors like ImportError: /usr/lib64/libssh.so.4: undefined symbol: EVP_KDF_CTX_new_id, version OPENSSL_1_1_1d. Please see our Perlmutter python documentation for more information.
    • The default version of the NVIDIA HPC SDK compiler was upgraded to 22.7 (from 22.5)
    • Cray MPICH upgraded to 8.1.22 (from 8.1.17)
    • GCC v12 now available
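The Shifter flag from the NCCL entry above can be sketched as a launch line. This is a dry run (the command is printed, not executed, since srun and shifter exist only on the cluster), and the container image name is a placeholder, not a recommendation:

```shell
# Dry-run sketch: build and print the launch line rather than executing it.
# --module=nccl-2.15 is the flag named in the entry above; the image name
# and script are placeholders.
launch="srun shifter --module=nccl-2.15 --image=nvcr.io/nvidia/pytorch:22.12-py3 python train.py"
echo "$launch"
```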

October 28, 2022

Charging for all jobs began.

October 26, 2022

  • Slurm updated to 22.05
  • The 128-node limit on the regular QOS for GPU nodes has been removed. Regular can now accept jobs of all sizes.
  • The early_science QOS has been removed. Please use regular instead. All queued jobs in the early_science QOS have been moved to regular.
  • Numerous updates intended to improve system stability and networking

October 11, 2022

  • Major changes to the internal network and file system to get Perlmutter into its final configuration. Some tuning and changes are still required and will be applied over the next few weeks.

September 15, 2022

  • Perlmutter scratch is now available, but it is still undergoing physical maintenance. We expect scratch performance to be degraded, and single-component failures could cause the filesystem to become unavailable during this physical maintenance; we estimate a 20% chance of this occurring in the next month. If you have jobs with scratch licenses that you do not want to run, please hold them with scontrol hold <jobid> by noon on Friday (9/16).
  • Numerous updates intended to improve system stability and Community and Home File System access.

September 7, 2022

  • The software environment has been retooled to better focus on GPU usage. These changes should be transparent to the vast majority of both GPU and CPU codes and will help remove the toil of reloading the same modules for every script for GPU-based codes. As our experience with the system grows, we expect to be adding more settings that are expected to be globally beneficial.
    • New gpu module added as a default module loaded at login. It includes:
      • module load cudatoolkit
      • module load craype-accel-nvidia80
      • Sets MPICH_GPU_SUPPORT_ENABLED=1 to enable access to CUDA-aware Cray MPICH at runtime
    • A companion cpu module
      • This module is mutually exclusive to the gpu module; if one is loaded, the other will be unloaded
      • In the future we may add any modules or environment settings we find to be generally beneficial to CPU codes, but for now it is empty
      • Given the current contents, CPU users should be able to run their codes with the gpu module. But if there are any problems, users can module load cpu to revert the gpu module
    • Shifter users who want CUDA-aware Cray MPICH at runtime will need to use the cuda-mpich shifter module
  • Long-lived scrontab capabilities added to better support workflows
  • A number of performance counters (e.g., CPU, Memory) that are used by NERSC supported performance profiling tools have been re-enabled on the system
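As a minimal sketch of the runtime setting the gpu module provides, a job script can verify the variable before launching a CUDA-aware MPI run. The export here just mirrors, for illustration, what the module does at login:

```shell
# The gpu module sets this variable at login; exporting it by hand mirrors
# that behavior so the check below can run anywhere.
export MPICH_GPU_SUPPORT_ENABLED=1

# A job script might sanity-check the setting before an srun launch.
if [ "${MPICH_GPU_SUPPORT_ENABLED}" = "1" ]; then
  echo "CUDA-aware Cray MPICH enabled at runtime"
fi
```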

August 24, 2022

  • Perlmutter Scratch file system unmounted for upgrading. All data on Perlmutter Scratch will be unavailable. Jobs already in the queue that were submitted from Perlmutter Scratch will be automatically held. If you submitted a job that depends on scratch from another file system, you can add a scratch license with scontrol update job=<job id> Licenses=scratch[,<other existing licenses>...] to have your job held until scratch is available.
  • Numerous internal updates to the software and network for the Phase-2 integration of Perlmutter.
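The scontrol invocation above can be sketched as a small helper that composes the command for a job's existing license list. The job id and license names are made up, and the helper only prints the command; run the printed line on a login node to actually update the job:

```shell
# Hedged helper: compose (but do not run) the scontrol command that adds
# a scratch license ahead of a job's existing licenses.
add_scratch_license() {
  jobid="$1"; existing="$2"
  if [ -n "$existing" ]; then
    echo "scontrol update job=${jobid} Licenses=scratch,${existing}"
  else
    echo "scontrol update job=${jobid} Licenses=scratch"
  fi
}

# Job id 123456 and license "cfs" are illustrative placeholders.
add_scratch_license 123456 cfs
# → scontrol update job=123456 Licenses=scratch,cfs
```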

August 15, 2022

  • All Slingshot10 GPU nodes are removed from the system along with their corresponding QOSes (e.g., regular_ss10)
    • Any queued jobs in the Slingshot10 QOSes were moved to their corresponding Slingshot11 QOSes
  • Numerous internal updates to the software and network for the Phase-2 integration of Perlmutter.

August 8, 2022

  • Added NVIDIA HPC SDK Version 22.7
    • To use: module load PrgEnv-nvidia nvidia/22.7
  • Numerous internal updates to the software and network for the Phase-2 integration of Perlmutter.

August 1, 2022

  • Default switched to Slingshot11 for GPU nodes.
    • Default QOS switched from GPU nodes using the Slingshot10 interconnect to nodes using the Slingshot11 interconnect. If you still wish to run on the Slingshot10 GPU nodes, you can add _ss10 to the QOS on your job submission line (e.g., -q regular_ss10 -C gpu). All queued jobs will run in the QOS that was active when they were submitted.
    • Use squeue --me -O JobID,Name,QOS,Partition to check which QOS and partition your jobs are in.
    • Login nodes now use the Slingshot11 interconnect.
  • CUDA driver upgraded to version 515.48.07
  • NVIDIA HPC SDK (PrgEnv-nvidia) and CUDA Toolkit (cudatoolkit) module defaults upgraded to 22.5 and 11.7 respectively. The previous versions are still available.
    • CUDA compatibility libraries are no longer needed, so any workarounds you employed to remove them can be dropped.
  • Numerous internal updates to the software and network for the Phase-2 integration of Perlmutter.

July 18, 2022

  • The second set of GPU nodes has been upgraded to Slingshot11 and added to the regular_ss11 QOS (see the discussion under July 11, 2022).
    • We expect the number of Slingshot11 GPU nodes to change over the next few weeks, so we recommend you use sinfo to track the number of nodes in each partition. You can use sinfo --format="%.15b %.8D" for a concise summary of nodes or sinfo -o "%.20P %.5a %.10D %.16F" for more verbose output.

July 11, 2022

  • The first GPU nodes have been upgraded to use the Slingshot11 interconnect. These nodes have upgraded software and 4x25GB/s NICs (previously they had 2x12.5GB/s NICs). Jobs will need to explicitly request these nodes by adding _ss11 to the QOS, e.g., -C gpu -q regular_ss11.
    • There are currently 256 nodes converted to Slingshot11. We expect this number to change over the next few weeks, so we recommend you use sinfo to track the number of nodes in each partition. You can use sinfo --format="%.15b %.8D" for a concise summary of nodes or sinfo -o "%.20P %.5a %.10D %.16F" for more verbose output.
  • CPE default updated to 22.06.
  • NVIDIA compiler version 22.5 and cudatoolkit SDK version 11.7 now available on the system. These will become the defaults soon.
  • Shared QOS now available on the CPU nodes
  • Numerous internal updates to the software and network to prepare the Phase-2 integration of Perlmutter and make cvmfs more stable

June 20, 2022

Default striping of all user scratch directories set to stripe across a single OST because of a bug in the Progressive File Layout striping schema. If you are reading or writing files larger than 1 GB, please see our recommendations for Lustre file striping.

June 6, 2022

  • Changes to the batch system
    • Users can now use just -A <account name> (i.e., the extra _g is no longer needed) for jobs requesting GPU resources.
    • Xfer queue added for data transfers
    • Debug QOS now the default
  • The cuda compatibility libraries were removed from the PrgEnv-nvidia module (specifically the nvidia module). The cuda compatibility libraries are now exclusively in the cudatoolkit module and users are reminded to load this module if they are compiling code for the GPUs.
  • Second set of CPU nodes are now available to users.
  • Numerous internal updates to the software and network to prepare the Phase-2 integration of Perlmutter

June, 2022

Achieving 70.9 Pflop/s (FP64 Tensor Core) using 1,520 compute nodes, Perlmutter was ranked 7th on the Top500 list.

May 25, 2022

  • Maximum job walltime for regular (CPU and GPU nodes) and early_science (GPU nodes) QOSes increased to 12 hours

May 17, 2022

  • Perlmutter opened to all NERSC Users!
  • The default Programming Environment is changed to PrgEnv-gnu
  • Shifter MPI now working on CPU nodes
  • PrgEnv-aocc now working
  • Numerous internal updates to the software and network to prepare the Phase-2 integration of Perlmutter

May 11, 2022

April 29, 2022

  • CPE default updated to 22.04. You may choose to load an older CPE but the behavior is not guaranteed.
    • Notable changes: Cray MPICH upgraded to 8.1.15
  • Nvidia driver has been updated to 470.103.01
  • Removed nvidia/21.9 (nvhpc sdk 21.9) from the system
  • Numerous internal upgrades (software and network stack) to prepare the Phase-2 integration of Perlmutter
    • A recompile is not needed, but if you are having issues, please try recompiling your application first.

April 21, 2022

April 7, 2022

March 25, 2022

  • Numerous internal updates to improve configuration, reliability, and performance

March 10, 2022

  • Nvidia HPC SDK v21.11 now default
  • Older cudatoolkit modules removed
  • Slurm upgraded to 21.08; codes that use GPU binding will need to be reworked
  • CPE 21.11 has been retired
    • There will be no support for gcc/9.3
    • nvcc v11.0 (cudatoolkit/11.0) retired and will no longer be supported
  • Numerous internal updates to improve configuration, reliability, and performance

February 24, 2022

  • Cudatoolkit modules simplified
    • New modules with shorter names point to the most recent releases available
    • Old modules will remain on the system for a short time to allow time to switch over
  • Nvidia HPC SDK v21.11 now available
    • Default will remain 21.9 for a short time to allow time for testing
    • nvidia/21.9 does not support Milan, so the Cray compiler wrappers will build for Rome instead. We recommend that users switch to nvidia/21.11.
  • Upgraded to CPE 22.02. Major changes include:
    • MPICH 8.1.12 to 8.1.13
    • PMI 6.0.15 to 6.0.17
    • hdf5 1.12.0.7 to 1.12.1.1
    • netcdf 4.7.4 to 4.8.1.1
  • Change to sshproxy to support broader kinds of logins
  • Realtime qos functionality added
  • Numerous internal updates to improve configuration, reliability, and performance

February 10, 2022

  • Node limit for all jobs temporarily lowered to 128 nodes
  • QOS priority modified to encourage wider job variety

January 25, 2022

January 11, 2022

  • Upgraded to CPE 21.12. Major changes include:
    • MPICH upgraded to v8.1.12 (from 8.1.11)
  • The previous programming environment can now be accessed using the cpe module.
  • Numerous internal upgrades to improve configuration and performance.

December 21, 2021

  • GPUs are back in "Default" mode (fixes Known Issue "GPUs are in "Exclusive_Process" instead of "Default" mode")
  • User access to hardware counters restored (fixes Known Issue "Nsight Compute or any performance profiling tool requesting access to h/w counters will not work")
  • Cuda 11.5 compatibility libraries installed and incorporated into Shifter
  • QOS priority modified to encourage wider job variety
  • Numerous internal upgrades

December 6, 2021

  • Major changes to the user environment. All users should recompile their code following our compile instructions
  • The cuda, cray-pmi, and cray-pmi-lib modules have been removed from the default environment
  • The darshan v3.3.1 module has been added to the default environment
  • Default NVIDIA compiler upgraded to v21.9
    • Users must load a cudatoolkit module to compile GPU codes
  • Upgraded to CPE 21.11
    • MPICH upgraded to v8.1.11 (from 8.1.10)
    • PMI upgraded to v6.0.16 (from 6.0.14)
    • FFTW upgraded to 3.3.8.12 (from 3.3.8.11)
    • Python upgraded to 3.9 (from 3.8)
  • Upgrade to SLES15sp2 OS
  • Numerous internal upgrades

November 30, 2021

  • Upgraded Slingshot (internal high speed network) to v1.6
  • Upgraded Lustre server
  • Internal configuration upgrades

November 16, 2021

This was a rolling update where the whole system was updated with minimal interruptions to users.

  • Set MPICH_ALLGATHERV_PIPELINE_MSG_SIZE=0 to improve MPI communication speed for large buffer size.
  • Added gpu and cuda-mpich Shifter modules to better support Shifter GPU jobs
  • Deployed fix for CUDA Unknown Error errors that occasionally happen for Shifter jobs using the GPUs
  • Changed ssh settings to reduce frequency of dropped ssh connections
  • Internal configuration updates

November, 2021

Perlmutter achieved 70.9 Pflop/s (FP64 Tensor Core) using 1,520 compute nodes, putting the system at No. 5 in the Top500 list.

November 2, 2021

  • Updated to CPE 21.10. A recompile is recommended but not required. See the documentation of CPE changes from HPE for a full list of changes. Major changes of note include:
    • Upgrade MPICH to 8.1.10 (from 8.1.9)
    • Upgrade DSMML to 0.2.2 (from 0.2.1)
    • Upgraded PMI to 6.0.14 (from 6.0.13)
  • Adjusted QOS configurations to facilitate Jupyter notebook job scheduling.
  • Added preempt QOS. Jobs submitted to this QOS may get preempted after two hours, but may start more quickly. Please see our instructions for running preemptible jobs for details.
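A submission to the new preempt QOS might look like the dry-run sketch below; only -q preempt comes from the entry above, and the other flags and script name are placeholders:

```shell
# Dry run: the command is printed, not executed, since sbatch exists only
# on the cluster. Everything except -q preempt is a placeholder.
submit="sbatch -C gpu -q preempt -t 04:00:00 train.sh"
echo "$submit"
```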

October 20, 2021

External ssh access enabled for Perlmutter login nodes.

October 18, 2021

  • Updated slurm job priorities to more efficiently utilize the system and improve the diversity of running jobs.

October 14, 2021

  • Updated NVIDIA driver (to 450.162). This is not expected to have any user impact.
  • Upgraded internal management framework.

October 9, 2021

  • Screen and tmux installed
  • Installed boost v1.66
  • Upgraded nv_peer_mem driver to 1.2 (not expected to have any user impact)

October 5, 2021

Deployed sparewarmer QOS to assist with node-level testing. This is not expected to have any user impact.

October 4, 2021

Limited the wall time of batch jobs to 6 hours to allow a variety of jobs to run during testing. If you need to run jobs for longer than 6 hours, please open a ticket.

September 29, 2021

  • Numerous internal network and management upgrades.

New batch system structure deployed

  • Users will need to specify a QOS (with -q regular, debug, interactive, etc.) as well as a project GPU allocation account name which ends in _g (e.g., -A m9999_g)
  • Please see our Running Jobs Section for examples and an explanation of new queue policies
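Under this structure, a GPU job script might look like the sketch below. The account name m9999_g is the example from above; the node count, time limit, and binary are placeholders. The here-doc only writes the file; submit it with sbatch on a login node:

```shell
# Write a hypothetical job script to disk; nothing here talks to Slurm.
cat > gpu_job.sh <<'EOF'
#!/bin/bash
#SBATCH -C gpu
#SBATCH -q regular
#SBATCH -A m9999_g
#SBATCH -N 1
#SBATCH -t 00:30:00
srun ./my_gpu_app
EOF
# Submit with: sbatch gpu_job.sh
```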

September 24, 2021

  • Upgraded internal management software
  • Upgraded system I/O forwarding software and moved it to a more performant network
  • Fixed csh environment
  • Performance profiling tools that request access to hardware counters (such as Nsight Compute) should now work

September 16, 2021

  • Deployed numerous network upgrades and changes intended to increase responsiveness and performance
  • Increased robustness for login node load balancing

September 10, 2021

  • Updated to CPE 21.09. A recompile is recommended but not required. Major changes of note include:
    • Upgrade MPICH to 8.1.9 (from 8.1.8)
    • Upgrade DSMML to 0.2.1 (from 0.2.0)
    • Upgrade PALS to 1.0.17 (from 1.0.14)
    • Upgrade OpenSHMEMX to 11.3.3 (from 11.3.2)
    • Upgrade craype to 2.7.10 (from 2.7.9)
    • Upgrade CCE to 12.0.3 (from 12.0.2)
    • Upgrade HDF5 to 1.12.0.7 (from 1.12.0.6)
    • GCC 11.2.0 added
  • Added cuda module to the list of default modules loaded at startup
  • Set BASH_ENV to Lmod setup file
  • Deployed numerous network upgrades and changes intended to increase responsiveness and performance
  • Performed kernel upgrades to login nodes for better fail over support
  • Added the latest CMake release as cmake/git-20210830 and set it as the default cmake on the system

September 2, 2021

  • Updated NVIDIA driver (to nvidia-gfxG04-kmp-default-450.142.00_k4.12.14_150.47-0.x86_64). This is not expected to have any user impact.

August 30, 2021

Numerous changes to the NVIDIA programming environment

  • Changed default NVIDIA compiler from 20.9 to 21.7
  • Installed needed CUDA compatibility libraries
  • Added support for multi-CUDA HPC SDK
  • Removed the cudatoolkit and craype-accel-nvidia80 modules from default

Tips for users:

  • Please use module load cuda to get the CUDA Toolkit, including the CUDA C compiler nvcc and associated libraries and tools; module av cuda lists the available versions.
  • CMake may have trouble picking up the correct mpich include files. If it does, you can use set(CMAKE_CUDA_FLAGS "-I/opt/cray/pe/mpich/8.1.8/ofi/nvidia/20.7/include") to force it to pick up the correct one.
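The same include path can also be passed on the cmake command line instead of inside CMakeLists.txt. A dry-run sketch (the source and build directory paths are placeholders, and the command is printed rather than run):

```shell
# Build the cmake invocation and print it rather than executing it, since
# this only makes sense inside an actual CUDA project checkout.
inc="/opt/cray/pe/mpich/8.1.8/ofi/nvidia/20.7/include"
cmd="cmake -S . -B build -DCMAKE_CUDA_FLAGS=-I${inc}"
echo "$cmd"
```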

June, 2021

Perlmutter achieved 64.6 Pflop/s (FP64 Tensor Core) using 1,424 compute nodes, putting it at No. 5 in the Top500 list.

May 27, 2021

Perlmutter supercomputer dedication.

November, 2020 - March, 2021

Perlmutter Phase 1 delivered.