Welcome to NERSC¶
Welcome to the National Energy Research Scientific Computing Center (NERSC)!
About this page
This document will guide you through the basics of using NERSC's supercomputers, storage systems, and services.
What is NERSC?¶
NERSC provides High Performance Computing and Storage facilities and support for research sponsored by, and of interest to, the U.S. Department of Energy (DOE) Office of Science (SC). NERSC has the unique programmatic role of supporting all six Office of Science program offices: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, and Nuclear Physics.
Scientists who have been awarded research funding by any of the offices are eligible to apply for an allocation of NERSC time. Additional awards may be given to non-DOE funded project teams whose research is aligned with the Office of Science's mission. Allocations of time and storage are made by DOE.
NERSC is a national center, organizationally part of Lawrence Berkeley National Laboratory in Berkeley, CA. NERSC staff and facilities are primarily located at Berkeley Lab's Shyh Wang Hall on the Berkeley Lab campus.
Computing & Storage Resources¶
Cori is a Cray XC40 supercomputer with approximately 12,000 compute nodes.
Community File System (CFS)¶
The Community File System (CFS) is a global file system available on all NERSC computational systems. It allows sharing of data between users, systems, and the "outside world".
HPSS (High Performance Storage System) Archival Storage¶
The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. HPSS is intended for long term storage of data that is not frequently accessed.
In order to use the NERSC facilities, you need:
- Access to an allocation of computational or storage resources as a member of a project
- A user account with an associated user login name (also called a username)
- NERSC Allocations
- Iris: Account and allocation management web interface
- Password rules
With Iris you can:
- check allocation balances
- change passwords
- run reports
- update contact information
- clear login failures
- change login shell
- and more!
Connecting to NERSC¶
MFA is required for NERSC users
- Multi-Factor Authentication (MFA)
- Cori login nodes
- Troubleshooting connection problems
- Live status
NERSC Users Group (NUG)¶
Join the NERSC Users Group: an independent organization of users of NERSC resources.
NUG maintains a Slack workspace that all users are welcome to join.
NERSC and its vendors supply a rich set of HPC utilities, applications, and programming libraries.
- NERSC Supported Software Status List.
- Application specific documentation on this site.
- Available modules: log in to Cori and run `module avail`
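As an illustrative sketch (the module name shown is a placeholder; available modules vary by system), a typical Environment Modules session looks like:

nersc$ module avail          # list all available modules
nersc$ module load gcc       # load a module into your environment
nersc$ module list           # show currently loaded modules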
If there is something missing that you would like to have on our systems, please submit a request and we will evaluate it for appropriateness, cost, effort, and benefit to the community.
$HOME directories are shared across all NERSC systems (except HPSS)
Compiling/building software¶
Running jobs¶
Typical usage of the system involves submitting scripts (also referred to as "jobs") to a batch system such as Slurm.
NERSC also supports interactive computing.
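As a hedged sketch (the node count, time limit, QOS name, and program name are all placeholders, not prescribed values), a minimal Slurm batch script might look like:

#!/bin/bash
#SBATCH --nodes=1            # number of nodes to allocate
#SBATCH --time=00:30:00      # wall-clock time limit
#SBATCH --qos=regular        # quality of service (illustrative name)

srun ./my_program            # my_program is a placeholder executable

and would be submitted with:

nersc$ sbatch my_job.sh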
Security and Data Integrity¶
Sharing data with other users must be done carefully. Permissions should be set to the minimum necessary to achieve the desired access. For instance, consider carefully whether it's really necessary before sharing write permissions on data. Be sure to have archived backups of any critical shared data. It is also important to ensure that private login secrets (like SSH private keys or apache htaccess files) are not shared with other users (either intentionally or accidentally). Good practice is to keep things like this in a separate directory that is as locked down as possible.
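A minimal sketch of locking down a directory for private login secrets, using standard POSIX permissions. The directory name here is illustrative; in practice this would be something like ~/.ssh.

```shell
# Create a directory that only the owner can enter, list, or modify.
mkdir -p secrets_demo
chmod 700 secrets_demo             # owner-only: no group or world permissions

# Placeholder private key file; real keys should get the same treatment.
touch secrets_demo/id_ed25519
chmod 600 secrets_demo/id_ed25519  # owner may read/write; nobody else may access
```

With these modes, other users cannot read the key even if they can guess its path.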
Sharing with Other Members of Your Project¶
NERSC's Community file system is set up with group read and write permissions and is ideal for sharing with other members of your project. There is a directory for every active project at NERSC and all members of that project should have access to it by default.
Sharing with NERSC Users Outside of Your Project¶
You can share files and directories with NERSC users outside of your project by adjusting the Unix file permissions. See our extensive write-up of Unix file permissions and how they work.
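As a minimal sketch of granting read-only access with standard POSIX permissions (the directory and file names are illustrative, not actual NERSC paths):

```shell
# Allow other users to enter and list the directory, but not modify it.
mkdir -p shared_results
chmod 755 shared_results

# Allow other users to read the file, but not modify it.
echo "data" > shared_results/output.txt
chmod 644 shared_results/output.txt
```

This follows the minimum-necessary-permissions advice above: read access is granted, write access is not.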
NERSC provides two commands, give and take, which are useful for sharing small amounts of data between users.
To send a file or directory to <receiving_username>:
nersc$ give -u <receiving_username> <file or directory>
To receive a file sent by <sending_username>:
nersc$ take -u <sending_username> <filename>
To take all files from <sending_username>:
nersc$ take -a -u <sending_username>
To see what files <sending_username> has sent to you:
nersc$ take -u <sending_username>
For a full list of options, consult the help output of each command.
Files that remain untaken 12 weeks after being given will be purged from the staging area.
Sharing Data outside of NERSC¶
You can easily and quickly share data over the web using our Science Gateways framework.
You can also share large volumes of data externally by setting up a Globus Sharing Endpoint.
NERSC partners with ESNet to provide a high speed connection to the outside world. NERSC also provides several tools and systems optimized for data transfer.
External Data Transfer¶
NERSC recommends transferring data to and from NERSC using Globus
Globus is a web-based service that solves many of the challenges encountered moving data between systems. Globus provides the most comprehensive, efficient, and easy to use service for most NERSC users.
However, there are other tools available to transfer data between NERSC and other sites:
- scp: a standard Linux utility suitable for smaller files (<1 GB)
- GridFTP: parallel transfer software for large files
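As an illustrative example of an scp transfer (the file name, username, and destination path are placeholders; dtn01.nersc.gov is one of the Data Transfer Nodes described below):

nersc$ scp my_results.tar.gz <nersc_username>@dtn01.nersc.gov:<destination_path>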
Transferring Data Within NERSC¶
"Do you need to transfer at all?" If your data is on NERSC Global File Systems (
/global/cscratch), data transfer may not be necessary because these file systems are mounted on almost all NERSC systems. However, if you are doing a lot of IO with these files, you will benefit from staging them on the most performant file system. Usually that's the local scratch file system or the Burst Buffer.
- Use the Unix command rsync to copy files within the same computational system. For large amounts of data, use Globus to leverage its automatic retry functionality.
Data Transfer Nodes¶
The Data Transfer Nodes (DTNs) are servers dedicated for data transfer based upon the ESnet Science DMZ model. DTNs are tuned to transfer data efficiently, optimized for bandwidth and have direct access to most of the NERSC file systems. These transfer nodes are configured within Globus as managed endpoints available to all NERSC users.
NERSC FTP Upload Service¶
NERSC maintains an FTP upload service designed to let external collaborators send data to NERSC staff and users.
NERSC places a very strong emphasis on enabling science and providing user-oriented systems and services.
NERSC maintains extensive documentation.
NERSC welcomes your contributions.
New User Training Materials¶
The NERSC New User Training covers the basics of our computational systems; accounts and allocations; the programming environment, tools, and best practices; and the data ecosystem. Slides and recordings are good references for specific topics.
Account support is available 8-5 Pacific Time on business days.
NERSC's consultants are HPC experts and can answer just about all of your technical questions.
For critical system issues only:
- Please check the Online Status Page before calling 1-800-666-3772 (USA only) or 1-510-486-8600, Option 1