About this page
This document will guide you through the basics of using NERSC's supercomputers, storage systems, and services.
Welcome to the National Energy Research Scientific Computing Center (NERSC)! Whether you are a new or an existing user, please see the links to slides and videos for an overview of NERSC.
Perlmutter is an HPE Cray EX supercomputer that is currently being built. Its Phase 1 system has over 1,500 GPU-accelerated compute nodes.
Cori is a Cray XC40 supercomputer with approximately 12,000 compute nodes.
File systems are configured for different purposes. Each machine has access to at least three different file systems with different levels of performance, permanence and available space.
- NERSC File System Overview
- NERSC File Systems Slides
- NERSC File Systems Video
- I/O Best Practices Slides
- I/O Best Practices Video
Community File System (CFS)¶
The Community File System (CFS) is a global file system available on all NERSC computational systems. It allows sharing of data between users, systems, and the "outside world".
High Performance Storage System (HPSS) Archival Storage¶
The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. HPSS is intended for long term storage of data that is not frequently accessed.
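As a hedged sketch of typical archival usage with the standard HPSS clients `hsi` and `htar` (the directory and archive names below are hypothetical):

```bash
# Bundle a results directory into a single archive stored in HPSS
htar -cvf my_results.tar ./results

# List the contents of your HPSS home directory
hsi ls -l

# Later, extract the archive back onto a NERSC file system
htar -xvf my_results.tar
```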
In order to use the NERSC facilities, you need to obtain a NERSC account, and your account must be tied to a NERSC allocation in order to run jobs. Please see our password rules if you need help creating an account or recovering a password. Once you have an account, you can log in to Iris to manage account details.
For more details on account support, see the links below:
With Iris you can
- Check allocation balances
- Change passwords
- Run reports
- Update contact information
- Clear login failures
- Change login shell
- and more!
Connecting to NERSC¶
MFA is required for NERSC users
- Multi-Factor Authentication (MFA)
- Login nodes
- Troubleshooting connection problems
- NERSC Live status
- Connecting to NERSC Slides
- Connecting to NERSC Video
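As a minimal login sketch (assuming the `cori.nersc.gov` login host name and a placeholder username; with MFA enabled you will be prompted for your password followed by your one-time password):

```bash
# Connect to a Cori login node; replace "elvis" with your NERSC username
ssh elvis@cori.nersc.gov
```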
NERSC and its vendors supply a rich set of compilers, HPC utilities, programming libraries, development tools, debuggers and profilers, and data and visualization tools.
We provide a list of application-specific documentation built by NERSC staff. The NERSC software stack will be provided via the Spack package manager, which helps automate software builds for popular HPC tools. To learn more about our setup, see NERSC Spack Setup.
NERSC will provide a software stack via the Extreme-Scale Scientific Software Stack (E4S), a collection of open-source software packages for HPC systems built with Spack. NERSC will provide periodic (quarterly or semi-annual) updates to the E4S stacks, which can be accessed via `module load e4s`.
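For illustration, a typical module workflow might look like the following sketch; only `e4s` is taken from the text above, and the exact module names and versions available on a given system may differ:

```bash
# See which software modules are available on the system
module avail

# Load the E4S software stack
module load e4s

# Confirm what is loaded in your current environment
module list
```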
Please see our Software Policy outlining our software support model for the NERSC-provided software stack.
If there is something missing that you would like to have on our systems, please submit a request and we will evaluate it for appropriateness, cost, effort, and benefit to the community.
The Python we provide at NERSC is Anaconda Python.
NERSC supports a variety of software for Machine Learning and Deep Learning on our systems.
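A minimal sketch of working with the NERSC-provided Anaconda Python, assuming the module is simply named `python`; the environment name and package list are hypothetical:

```bash
# Load the NERSC-provided Anaconda Python
module load python

# Create and activate a personal conda environment (name and packages are placeholders)
conda create --name myenv python=3.9 numpy scipy
source activate myenv
```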
$HOME directories are shared across all NERSC systems (except HPSS)
Compiling/building software¶
- Compilers at NERSC
- Shell Environment
- Programming Environment and Compilers Slides
- Programming Environment and Compilers Video
Typical usage of the system involves submitting scripts (also referred to as "jobs") to a batch system such as Slurm; a minimal example batch script is shown after the list below.
- Overview of jobs at NERSC
- Rich set of example jobs
- Running Jobs Slides
- Running Jobs Video
- Workflows Slides
- Workflows Video
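As a hedged illustration of the batch workflow referenced above, here is a minimal Slurm script; the QOS, constraint, account, and executable names are placeholders that should be replaced with values valid for your allocation and target system:

```bash
#!/bin/bash
#SBATCH --qos=regular          # queue/QOS (placeholder; check current documentation)
#SBATCH --constraint=haswell   # node type (placeholder; system dependent)
#SBATCH --nodes=1
#SBATCH --time=00:30:00
#SBATCH --account=m0000        # your NERSC project/allocation (placeholder)

# Launch the application with srun (executable name is a placeholder)
srun -n 32 ./my_application
```

You would submit such a script with `sbatch job.sh` and check its status with `squeue -u $USER`.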
Debugging and Profiling¶
NERSC provides many popular debugging and profiling tools.
NERSC Production Data Stack includes support for Data Transfer + Access, Workflows, Data Management Tools, Data Analytics, and Data Visualization.
Security and Data Integrity¶
Sharing data with other users must be done carefully. Permissions should be set to the minimum necessary to achieve the desired access. For instance, consider carefully whether it's really necessary before sharing write permissions on data. Be sure to have archived backups of any critical shared data. It is also important to ensure that private login secrets (like SSH private keys or apache htaccess files) are not shared with other users (either intentionally or accidentally). Good practice is to keep things like this in a separate directory that is as locked down as possible.
Sharing with Other Members of Your Project¶
NERSC's Community file system is set up with group read and write permissions and is ideal for sharing with other members of your project. There is a directory for every active project at NERSC and all members of that project should have access to it by default.
Sharing with NERSC Users Outside of Your Project¶
You can share files and directories with NERSC users outside of your project by adjusting the unix file permissions. We have an extensive write-up of unix file permissions and how they work.
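As a sketch using standard unix tools (the project path and collaborator username are hypothetical), you might grant another NERSC user read access like this:

```bash
# Let others traverse the parent directory, then grant read access to the data
chmod o+x /global/cfs/cdirs/myproject
chmod -R o+rX /global/cfs/cdirs/myproject/shared_data

# Alternatively, grant read access to one specific user via an ACL (where supported)
setfacl -R -m u:collaborator:rx /global/cfs/cdirs/myproject/shared_data
```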
NERSC provides two commands, `give` and `take`, which are useful for sharing small amounts of data between users.

To send a file or path to `<receiving_username>`: `give -u <receiving_username> <file or directory>`

To receive a file sent by `<sending_username>`: `take -u <sending_username> <filename>`

To take all files from `<sending_username>`: `take -a -u <sending_username>`

To see what files `<sending_username>` has sent to you: `take -u <sending_username>`

For a full list of options, see the command's help output.
Files that remain untaken 12 weeks after being given will be purged from the staging area.
Sharing Data outside of NERSC¶
You can easily and quickly share data over the web using our Science Gateways framework.
You can also share large volumes of data externally by setting up a Globus Sharing Endpoint.
NERSC partners with ESNet to provide a high speed connection to the outside world. NERSC also provides several tools and systems optimized for data transfer.
External Data Transfer¶
NERSC recommends transferring data to and from NERSC using Globus
Globus is a web-based service that solves many of the challenges encountered when moving data between systems. It provides the most comprehensive, efficient, and easy-to-use service for most NERSC users.
However, there are other tools available to transfer data between NERSC and other sites:
- scp: standard Linux utility suitable for smaller files (<1 GB); see the example after this list
- GridFTP: parallel transfer software for large files
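For example, a small file could be copied to NERSC with `scp` through a data transfer node; the DTN host name and paths below are illustrative and should be replaced with a current DTN host and your own directories:

```bash
# Copy a local file to your NERSC scratch space via a data transfer node
scp ./input_data.tar.gz elvis@dtn01.nersc.gov:/global/cscratch1/sd/elvis/
```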
Transferring Data Within NERSC¶
"Do you need to transfer at all?" If your data is on NERSC Global File Systems (
/global/cscratch), data transfer may not be necessary because these file systems are mounted on almost all NERSC systems. However, if you are doing a lot of I/O with these files, you will benefit from staging them on the most performant file system. Usually that's the local scratch file system or the Burst Buffer.
- Use the the unix command
rsyncto copy files within the same computational system. For large amounts of data use Globus to leverage the automatic retry functionality
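A minimal `rsync` sketch for staging a dataset from the Community File System onto scratch before running a job (the paths are illustrative):

```bash
# Stage data from CFS to scratch for better I/O performance during a job
rsync -av /global/cfs/cdirs/myproject/dataset/ $SCRATCH/dataset/
```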
Data Transfer Nodes¶
The Data Transfer Nodes (DTNs) are servers dedicated for data transfer based upon the ESnet Science DMZ model. DTNs are tuned to transfer data efficiently, optimized for bandwidth and have direct access to most of the NERSC file systems. These transfer nodes are configured within Globus as managed endpoints available to all NERSC users.
NERSC FTP Upload Service¶
NERSC maintains an FTP upload service designed for external collaborators to be able to send data to NERSC staff and users.
NERSC places a very strong emphasis on enabling science and providing user-oriented systems and services. If you require additional support we encourage you to search our documentation for a solution before opening a ticket.
Account support is available 8-5 Pacific Time on business days.
The online help desk is the preferred method for contacting NERSC.
Before you open a ticket
How to file a good ticket
NERSC Consultants handle thousands of support requests per year. To ensure efficient and timely resolution of issues, include as much of the following as possible when making a request:
- error messages
- location of relevant files
- job scripts
- source code
- output of
- any steps you have tried
- steps to reproduce
Please copy and paste any text directly into the ticket, and only include screenshots as attachments when the graphical output is the subject of the support request.
You can make code snippets, shell output, and similar text in your ticket much more readable by inserting a line consisting of three backticks before the snippet, and another such line after it. While these are the most useful options, other ways to improve formatting can be found in the full list of formatting options.
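For example, a pasted error message would be wrapped like this (the error text is made up for illustration):

````
```
srun: error: Unable to allocate resources: Invalid qos specification
```
````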
Access to the online help system requires logging in with your NERSC username, password, and one-time password. If you are an existing user unable to log in, you can send an email to email@example.com for support.
If you are not a NERSC user, you can reach NERSC with your queries at
For critical system issues only.
- Please check the Online Status Page before calling 1-800-666-3772 (USA only) or 1-510-486-8600, Option 1
Consulting and account-support phone services have been suspended.
To report an urgent system issue, you may call NERSC at 1-800-66-NERSC (USA) or 510-486-8600 (local and international).
Please see the following links for common issues that can be addressed. If you are still having issues, please create a ticket in the help desk.
Appointments with NERSC User-Support Staff¶
NERSC provides 25-minute appointments with NERSC expert staff. Before you schedule your appointment consult the list of available topics described below.
To make the most use of an appointment, we strongly encourage you to try some things on your own and share them with NERSC staff ahead of time using the appointment intake form.
This category is good for basic questions when you could not find the answer in our documentation, or when you just don't know where to start.
Advice on how to optimize code and compiler settings to make use of the KNL compute nodes on Cori; a sketch of a KNL-style batch script is shown after the list below. Possible discussion topics include:
- Compiling code
- Thread affinity
- Batch script setup
- Profiling your code
- Refactoring your code
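As a hedged sketch of batch-script setup and thread affinity for Cori KNL (the QOS and executable names are placeholders, and the rank/thread counts are only illustrative; consult the Cori documentation for values tuned to your code):

```bash
#!/bin/bash
#SBATCH --qos=regular       # placeholder QOS
#SBATCH --constraint=knl    # request KNL nodes
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Illustrative hybrid MPI/OpenMP setup: 16 ranks with 4 threads each on one node
export OMP_NUM_THREADS=4
export OMP_PLACES=threads
export OMP_PROC_BIND=spread

srun -n 16 -c 16 --cpu-bind=cores ./my_knl_application
```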
Containers at NERSC¶
Advice on deploying containerized workflows at NERSC using Shifter. We recommend that you share your Dockerfile and the image name (after downloading it to Cori using `shifterimg`) before the appointment if possible.
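A hedged sketch of the basic Shifter workflow; the image here is an arbitrary public Docker image used purely for illustration:

```bash
# Pull a Docker image into the Shifter image gateway on the system
shifterimg pull docker:ubuntu:20.04

# Run a command inside the image
shifter --image=docker:ubuntu:20.04 cat /etc/os-release
```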
Advice on I/O optimization and Filesystems at NERSC. Possible discussion topics include:
- Optimal file system choices
- Quota and file-permission issues
- I/O profiling
- Refactoring your code
Advice on programming GPUs for users that are new to the topic. This category is good for when you have started developing your GPU code, but are encountering problems.
Using GPUs in Python¶
Advice on how to use GPUs from Python.
Checkpoint/Restart using MANA¶
Advice on how to use MANA to enable automatic checkpoint/restart in MPI applications.