Perlmutter is not yet available for general user access. Access will be rolled out in several stages, in the following order:
- The NESAP (NERSC Exascale Science Applications Program) tier 1 and ECP (Exascale Computing Project) teams
- The NESAP tier 2 and Superfacility teams
- Selected general users running GPU applications
- Remaining general users running GPU applications
- Remaining users
## Connecting to Perlmutter

### Transferring Data to / from Perlmutter Scratch
Perlmutter scratch is accessible only from Perlmutter login and compute nodes. To get data onto Perlmutter scratch, we recommend first transferring it to the Community File System (which is mounted on Perlmutter), either with Globus or with `rsync` on a Data Transfer Node. Once the data is on the Community File System, use `rsync` on a Perlmutter login node to copy it to Perlmutter scratch. Alternatively, you could `rsync` the data directly to Perlmutter scratch from a remote system, but such transfers are easily interrupted and are currently slower than staging through the Community File System.
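The staged transfer described above might look like the following; the local directory, project name (`myproject`), and DTN hostname are placeholders for illustration:

```shell
# Step 1: from your local machine, stage the data onto the Community
# File System via a Data Transfer Node ("myproject" is a placeholder).
rsync -avh ./mydata/ user@dtn01.nersc.gov:/global/cfs/cdirs/myproject/mydata/

# Step 2: on a Perlmutter login node, copy from the Community File
# System into your Perlmutter scratch directory ($SCRATCH).
rsync -avh /global/cfs/cdirs/myproject/mydata/ "$SCRATCH/mydata/"
```

The `-a` flag preserves permissions and timestamps, and `rsync` can resume an interrupted transfer without re-copying completed files.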
## Preparing for Perlmutter
Please check the Transitioning Applications to Perlmutter webpage for a wealth of useful information on how to transition your applications to Perlmutter.
The pages below describe how to set up the programming environment and compile your code on Perlmutter:
- Compilers at NERSC
- Using Python on Perlmutter
- Lmod, a Lua-based module system used on Perlmutter
- Finding and using software on Perlmutter
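As a sketch of working with Lmod on a Perlmutter login node (the module names shown are typical of the HPE Cray environment; run `module avail` to see what is actually installed):

```shell
module list                # show currently loaded modules
module avail               # list modules available to load now
module load PrgEnv-gnu     # switch to the GNU programming environment
module load cudatoolkit    # make the CUDA toolkit available

# The Cray compiler wrappers (cc, CC, ftn) pick up whatever
# programming environment modules are currently loaded.
CC -o myapp myapp.cpp

module spider cuda         # search the full module hierarchy by name
```

Unlike `module avail`, `module spider` searches the entire Lmod hierarchy, including modules that only become visible after loading their prerequisites.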
Perlmutter uses Slurm for batch job scheduling. The pages below cover queue policies, submitting jobs with Slurm, monitoring jobs, and more:
- Queue Policies on Perlmutter
- Running Jobs on Perlmutter's GPU nodes
- Monitoring Jobs
- Interactive Jobs
During Allocation Year 2021, jobs run on Perlmutter will be free of charge.
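A minimal GPU batch script for Slurm on Perlmutter might look like the following; the account name `m0000` and the resource choices are illustrative only, so consult the queue policy page for current values:

```shell
#!/bin/bash
#SBATCH -A m0000            # project account to charge (placeholder)
#SBATCH -C gpu              # request GPU nodes
#SBATCH -q regular          # quality of service (queue)
#SBATCH -N 1                # one node
#SBATCH -t 00:30:00         # 30-minute walltime limit
#SBATCH --gpus-per-node=4   # all four GPUs on the node

# Launch one task per GPU; ./my_gpu_app is a placeholder executable.
srun -n 4 --gpus-per-task=1 ./my_gpu_app
```

Submit the script with `sbatch job.sh` and check its status with `squeue --me`.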