
File System overview

Storage System Usage and Characteristics

Summary

File systems are configured for different purposes. Each machine has access to at least three file systems that differ in performance, data persistence, and available capacity. Each file system is designed to be accessed either by an individual user or by an entire project, as reported in the "Access" column below.

Warning

NERSC storage systems are architected to cover different needs for performance, capacity and data persistence, and not as disaster-proof data storage. Other than home backups, we do not keep multiple copies of data. Do not store the only copy of your data at NERSC: make sure you have at least one copy at another institution or cloud service.

File System          Snapshots   Backup   Purging   Access
Home                 yes         yes      no        user
Common               no          no       no        project
Community            yes         no       no        project
Perlmutter scratch   no          no       yes       user
HPSS                 no          no       no        user

See quotas for detailed information about inode quotas, space quotas, and file system purge policies.

Note

Files in the Community and Common File Systems are charged to the project quota, while files on the Home and Scratch File Systems are charged to the file owner's quota: your files in another user's directory will still be charged to your quota. If shared access is needed on the Home or Scratch file systems, consider using a Collaboration account.

Directories on the Common and Community File Systems (CFS) are designed to be used by all members of a project, and have the setgid bit set by default, which makes all directories and files inherit the group ID and allows other members of the same group to read and write those files. Some groups may prefer to disable this behavior, which can be done by removing the setgid bit on the desired directory.
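
If a group prefers to disable this inheritance, the setgid bit can be removed (and later restored) with chmod; a minimal sketch, using a placeholder <project_dir> for the directory path:

nersc$ chmod g-s <project_dir>    # stop new files and directories from inheriting the group ID
nersc$ chmod g+s <project_dir>    # restore the default behavior if needed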

If desired, multiple top-level directories can be created on CFS for each project. Each top-level CFS directory comes with its own quota (drawn out of the total quota allocated for the entire project) and can be set to have different group ownership. For instance, if a project m9999 at NERSC wanted to have separate directories for their alpha and beta groups, they could request two directories (e.g. m9999_alpha and m9999_beta). The quotas for each top-level directory can be allocated in Iris by the project's PI. Additionally, if the PI wanted to limit access to these directories to only subsets of their users, they could also adjust the owning groups (e.g. m9999_alpha is owned by group alpha etc.). The PI for m9999 could then add users in their projects to the appropriate groups to allow them access to each directory as desired.
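
For example, members of project m9999 could check which group owns each top-level directory and confirm their own group membership; the /global/cfs/cdirs paths below are assumed for illustration:

nersc$ ls -ld /global/cfs/cdirs/m9999_alpha /global/cfs/cdirs/m9999_beta    # show the owning group of each directory
nersc$ groups                                                               # list the groups your account belongs to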

Global storage

Home

Permanent, relatively small storage for data such as source code and shell scripts that you want to keep. This file system is not tuned for high performance on parallel jobs: use the more optimized Common file system to store applications that need to be sourced by more than a dozen nodes at a time, or applications composed of several packages and small files, such as conda environments.
Referenced by the environment variable $HOME.

Common

A performant platform to install software stacks and compile code. Mounted read-only on Perlmutter compute nodes.

Community

Large, permanent, medium-performance file system. Community directories are intended for sharing data within a group of researchers and for storing data that will be accessed in the medium term (i.e., 1-2 years).

The PI toolbox can help PIs and PI Proxies fix permissions in their Community project directories.

Scratch

Perlmutter has a dedicated, large, local, parallel scratch file system based on Lustre. The scratch file system is intended for temporary uses such as storage of checkpoints or application input and output during jobs. More details are available on the Perlmutter scratch page.

Archive (HPSS)

A high capacity tape archive intended for long term storage of inactive and important data. Accessible from all systems at NERSC. Space quotas are allocation dependent.

The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. HPSS is intended for long term storage of data that is not frequently accessed.
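
Data is typically moved to and from HPSS with the hsi and htar clients. The sketch below uses placeholder file and directory names; adjust it to your own data and see the HPSS documentation for the full set of options:

nersc$ htar -cvf my_results.tar my_results/    # bundle a directory into a tar file stored directly in HPSS
nersc$ htar -tvf my_results.tar                # list the contents of that archived tar file
nersc$ hsi put big_output.h5                   # copy a single file into the archive
nersc$ hsi get big_output.h5                   # retrieve it later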

Local storage

The following file systems provide high I/O performance, but generally do not preserve data across jobs; they are meant to be used as scratch space, and any data produced must be staged out before the end of the computation.

Access is always per-user: these file systems (XFS and in-RAM file systems) are only accessible within a single Slurm job, and Slurm purges their contents when the job ends.

Temporary per-node Shifter file system

Shifter users can access a fast, per-node xfs file system to improve I/O.

Local temporary file system

Compute nodes have a small amount of temporary local storage that can be used to improve I/O.
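
A common pattern is to write intermediate files to the node-local temporary space and stage the results out to a persistent file system before the job ends. The batch script below is a rough sketch: it assumes /tmp as the local temporary location, uses a placeholder application name, and omits site-specific options such as account and constraint:

#!/bin/bash
#SBATCH -N 1
#SBATCH -t 00:30:00

LOCAL_TMP=/tmp/$SLURM_JOB_ID        # node-local temporary space (path assumed for illustration)
mkdir -p "$LOCAL_TMP"
cd "$LOCAL_TMP"

srun ./my_app --output results.dat  # placeholder application writing its output locally

cp results.dat "$PSCRATCH/"         # stage results out to scratch; local storage is purged after the job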

Data sharing

Sharing data with other users must be done carefully. Permissions should be set to the minimum necessary to achieve the desired access. For instance, consider carefully whether write permissions are really necessary before sharing them; often read permissions are enough. Be sure to have archived backups of any critical shared data. It is also important to ensure that private login secrets (like SSH private keys or Apache htaccess files) are not shared with other users, either intentionally or accidentally. Good practice is to keep things like this in a separate directory that is as locked down as possible, e.g. by removing group and other permissions with chmod g-rwx,o-rwx <directory> (please see our permissions page for a longer discussion on file permissions).
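
For example, private material such as SSH keys could live in a directory readable only by its owner; the directory name below is just an illustration:

nersc$ mkdir -p $HOME/private                # a directory for secrets (name chosen for illustration)
nersc$ chmod g-rwx,o-rwx $HOME/private       # remove all group and other access
nersc$ ls -ld $HOME/private                  # verify: only the owner should have permissions (drwx------)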

Also take a look at the NERSC Data Management policy.

Sharing Data Inside NERSC

Sharing Data Within Your Project

The easiest way to share data within your project at NERSC is to use the Community File System (CFS). Permissions on CFS directories are set up to be group readable and writable by default, and any permissions drift can be corrected by the PIs using the PI toolbox.

PIs can also request an HPSS Project Directory to share HPSS data within their project.

Sharing Data Outside Your Project

Sharing One Time

If you want to share just a few files a single time, you can use NERSC's give/take utility.
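
A rough sketch of how give/take is typically invoked is shown below; the usernames and file name are placeholders, and the exact flags should be checked against the give/take documentation:

nersc$ give -u adele results.tar.gz    # run by elvis: offer the file to user adele
nersc$ take -u elvis results.tar.gz    # run by adele: retrieve the file offered by elvis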

Sharing Indefinitely

If you have a large volume of data you'd like to share with several NERSC users outside your project, you may want to consider creating a dedicated top-level CFS directory that is shared between projects. Project PIs can request a new CFS directory and can also request that the directory be owned by a Linux group made up of users from different projects.

If you only want to share with one or two users for an indefinite period, you might want to consider setting the Linux permissions so that the data is accessible to those users. Generally it's better to use ACLs to grant access rather than to make your directory world-readable. The example below shows how user elvis could grant user adele access to their scratch directory:

nersc$ setfacl -m u:adele:rx /pscratch/sd/e/elvis 
nersc$ setfacl -m u:adele:rx /pscratch/sd/e/elvis/shared_directory

Note that anyone reading lower directories must have execute (aka x) permissions on the higher directories so they can traverse them, which is why adele must have x permissions on elvis's top level directory.
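
To review or revoke that access later, the ACL entries can be listed with getfacl and removed with setfacl -x (paths mirror the example above):

nersc$ getfacl /pscratch/sd/e/elvis/shared_directory           # show the current ACL entries
nersc$ setfacl -x u:adele /pscratch/sd/e/elvis/shared_directory    # remove adele's entry from the shared directory
nersc$ setfacl -x u:adele /pscratch/sd/e/elvis                     # and from the top-level directory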

Don't Use ACLs If You Want to Use These Directories in Batch Jobs

The Community File System is served by DVS on NERSC compute nodes. Adding an ACL will slow down reading from this directory during batch jobs. Please see our DVS page for more information.

Sharing Data Outside of NERSC

Data on the Community File System can also be shared with users outside of NERSC through Globus Guest Collections.

Data can also be shared via Science Gateways.