Access

Perlmutter is not yet available for general user access.

Perlmutter will be available to users in several stages, in the following order:

  1. The NESAP (NERSC Exascale Science Applications Program) tier 1 and ECP (Exascale Computing Project) teams
  2. The NESAP tier 2 and Superfacility teams
  3. Selected general users running GPU applications
  4. Remaining general users running GPU applications
  5. Remaining users

Connecting to Perlmutter

To connect to Perlmutter, first connect to Cori or a Data Transfer Node (DTN), then connect to Perlmutter from there:

ssh perlmutter
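As a concrete sketch of the two-hop connection (the Cori hostname follows NERSC conventions; the username is a placeholder):

```shell
# From your local machine, first log in to Cori (or a DTN).
ssh your_username@cori.nersc.gov

# Then, from the Cori login node, hop to Perlmutter.
ssh perlmutter
```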

Transferring Data to / from Perlmutter Scratch

Perlmutter scratch is accessible only from Perlmutter login or compute nodes. To move data onto Perlmutter scratch, the recommended approach is a two-step transfer: first copy the data to the Community File System (which is mounted on Perlmutter), using Globus or cp/rsync on a Data Transfer Node; then, from a Perlmutter login node, use cp or rsync to copy the data from the Community File System to Perlmutter scratch. Alternatively, you can scp or rsync the data directly to Perlmutter scratch over the network, but such transfers are easily interrupted and currently slower than the route through the Community File System.
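The recommended two-step transfer can be sketched as follows. All paths here are hypothetical placeholders; substitute your own project directory and data locations.

```shell
# Step 1 (run on a Data Transfer Node):
# stage the data on the Community File System, which Perlmutter can see.
rsync -av /path/to/local/data/ /global/cfs/cdirs/your_project/staging/

# Step 2 (run on a Perlmutter login node):
# copy from the Community File System into Perlmutter scratch.
cp -r /global/cfs/cdirs/your_project/staging $SCRATCH/
```

The intermediate copy on the Community File System also serves as a restart point: if either step is interrupted, rerunning rsync resumes from what has already been transferred.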

Preparing for Perlmutter

Please check the Transitioning Applications to Perlmutter webpage for detailed guidance on preparing your applications for Perlmutter.

Compiling/Building Software

Below you can find information on setting up the proper programming environment and compiling your code on Perlmutter:
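As a minimal sketch of a typical build session, assuming the Cray programming environment on Perlmutter (the exact module and wrapper names below follow Cray PE conventions and should be checked against the environment pages):

```shell
# Select a compiler suite via the Cray programming environment modules.
module load PrgEnv-gnu       # assumed module name; other suites exist
module load cudatoolkit      # assumed module for GPU (CUDA) support

# The Cray compiler wrappers add the right flags and libraries automatically.
cc  -o hello_c   hello.c     # C
CC  -o hello_cpp hello.cpp   # C++
ftn -o hello_f   hello.f90   # Fortran
```

Using the wrappers (cc/CC/ftn) rather than invoking compilers directly is the usual Cray-system convention, since the wrappers pick up MPI and other system libraries for you.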

Running Jobs

Perlmutter uses Slurm for batch job scheduling. Below you can find information on queue policies and on submitting and monitoring jobs with Slurm:

During Allocation Year 2021, jobs run on Perlmutter are free of charge.
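A hypothetical batch script sketch is shown below. The account, QOS, and constraint values are placeholders and assumptions; check the queue policy pages for the values that actually apply to Perlmutter.

```shell
#!/bin/bash
#SBATCH -A your_project      # placeholder: your allocation account
#SBATCH -C gpu               # assumed constraint selecting GPU nodes
#SBATCH -q regular           # assumed QOS name; verify against queue policies
#SBATCH -N 1                 # number of nodes
#SBATCH -t 00:30:00          # walltime limit

# Launch the application with srun, Slurm's parallel job launcher.
srun ./my_gpu_app
```

Submit with `sbatch myscript.sh` and monitor with `squeue --me`.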

Current Known Issues

Known Issues on Perlmutter