# NERSC Usage Charging Policy

NERSC allocates time on compute nodes and space on its file systems and HPSS system.

## Computer Usage Charging

When a job runs on a NERSC supercomputer, charges accrue against one of the user's projects (repos). The unit of accounting for these charges is the "NERSC Hour". The total number of NERSC hours a job costs is a function of the number of nodes and the walltime used by the job, the QOS of the job, and the "charge factor" for the system upon which the job was run. Charge factors are set by NERSC to account for the relative power of the architecture and the scarcity of the resource.

### Charge Factors for AY2020

| Architecture | Charge Factor |
|--------------|---------------|
| Cori Haswell | 140           |
| Cori KNL     | 80            |

### Calculating Charges

Job charges are based on the footprint of the job: the space (in terms of nodes) and time (in terms of wallclock hours) that the job occupies on NERSC resources.

Specifically, charges reflect the number of nodes that the job took away from the pool of available resources: a job that was allocated 100 nodes but ran on only one of them will still be charged for the use of all 100 nodes.

Likewise, job charges are based on the actual amount of time (to the nearest second) that the job occupied resources, not the requested walltime or the amount of time spent doing computations. So a job that requested 12 hours but ran for only 3 hours and 47 minutes would be charged for 3 hours and 47 minutes, and a job that computed for three minutes and spent the remainder of its 12-hour walltime in an infinite loop would be charged for the full 12 hours.

**Note**

Because a reservation takes up space and time that could be otherwise used by other users' jobs, users are charged for the entirety of any reservation they request, including any time spent rebooting nodes and any gaps in which no jobs are running in the reservation.

#### Computing Charges

The cost of a job is computed in the following manner: $$\text{walltime in hours} \times \text{number of nodes} \times \text{QOS factor} \times \text{charge factor}$$.

**Note**

Jobs run in the shared QOS are charged based on the fraction of the node utilized by that job. For all other QOS, the number of nodes is a positive integer.

**Example**

The charge for a job that runs for 40 minutes on 3 Haswell nodes in the Premium QOS (QOS factor of 2) would be calculated as $$\frac{40\ \text{min}}{60\ \text{min/hr}} \times 3\ \text{nodes} \times 2 \times 140\ \text{NERSC-hours/node-hour} = \frac{2}{3} \times 3 \times 2 \times 140 = 560\ \text{NERSC-hours}.$$
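As an illustrative sketch (not an official NERSC tool), the charge formula can be expressed in Python. The function name and the charge-factor table below are assumptions for demonstration, using the AY2020 factors listed above:

```python
# Illustrative sketch of the NERSC-hour charge formula; not an official tool.
# Charge factors for AY2020 as listed in the table above.
CHARGE_FACTORS = {"haswell": 140, "knl": 80}

def job_charge(walltime_hours, num_nodes, architecture, qos_factor=1.0):
    """Return the cost of a job in NERSC hours.

    walltime_hours: actual wallclock time occupied (not the requested time)
    num_nodes: nodes allocated (fractional only in the shared QOS)
    """
    return walltime_hours * num_nodes * qos_factor * CHARGE_FACTORS[architecture]

# The worked example: 40 minutes on 3 Haswell nodes in Premium QOS (factor 2)
print(job_charge(40 / 60, 3, "haswell", qos_factor=2))  # 560.0
```

Note that for the shared QOS, `num_nodes` would be the fraction of the node the job occupied, per the note above.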

### Assigning Charges

Users who are members of more than one project (repo) can select which one should be charged for their jobs by default. In Iris, under the "Compute" tab in the user view, select the project you wish to make default.

To charge to a non-default project, use the `-A projectname` flag in Slurm, either in the Slurm directives preamble of your script, e.g.,

```slurm
#SBATCH -A myproject
```

or on the command line when you submit your job, e.g., `sbatch -A myproject ./myscript.sl`.

## File System Allocations

Each user has a personal quota in their home directory and on the scratch file system, and each project has a shared quota on the Community File System. NERSC imposes quotas on space utilization as well as inodes (number of files). For more information about these quotas please see the file system quotas page.

## HPSS Charges

HPSS charging is based on allocations of space in gigabytes (GB), which are awarded into accounts called HPSS repos. If a login name belongs to only one HPSS repo, all its usage is charged to that repo. If a login name belongs to multiple HPSS repos, its daily charge is apportioned among the repos using the project percents for that login name. Default project percents are assigned by Iris based on the size of each project's storage allocation.
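As a rough sketch of how such apportionment works, a daily charge can be split among repos in proportion to the project percents. The function name and the example repos and percents here are illustrative assumptions, not Iris internals:

```python
# Illustrative sketch of apportioning a daily HPSS charge among repos
# by project percents; names and values are hypothetical, not Iris internals.
def apportion_charge(daily_charge_gb, project_percents):
    """Split a daily HPSS charge among repos by project percents.

    project_percents: mapping of repo name -> percent (should sum to 100)
    """
    return {repo: daily_charge_gb * pct / 100.0
            for repo, pct in project_percents.items()}

# e.g. a user charging 500 GB/day across two repos at 75% / 25%
print(apportion_charge(500, {"repo_a": 75, "repo_b": 25}))
# {'repo_a': 375.0, 'repo_b': 125.0}
```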

Users can view their project percents on the "Storage" tab in the user view in Iris. To change your project percents, change the numbers in the "% Charge to Project" column.

For more detailed information about HPSS charging please see HPSS charging.

## Running out of Allocation

Accounting information for the previous day is finalized in Iris once daily (in the early morning, Pacific Time). At this time, actions are taken if a project's or user's balance is negative.

If a project runs out of time (or space in HPSS), all login names which are not associated with another active repository are restricted:

* On computational machines, restricted users are able to log in, but cannot submit batch jobs or run parallel jobs, except to the "overrun" partition.
* For HPSS, restricted users are able to read data from HPSS and delete files, but cannot write any data to HPSS.

Login names that are associated with more than one project (for a given resource -- compute or HPSS) are checked to see if the user has a positive balance in any of their projects for that resource. If they do, they will not be restricted, and the following will happen:

* On computational machines, the user will not be able to charge to the restricted project. If the restricted project had been the user's default project, they will need to change their default project through Iris, specify a different project with sufficient allocation when submitting a job, or run jobs in overrun only.

Likewise, when a user goes over their individual user quota in a given project, that user is restricted if they have no other project to charge to. A PI or Project Manager can change the user's quota.

## Usage Reports

In Iris, users can view graphs of their own compute and storage usage under the "Jobs" and "Storage" tabs in the user view, respectively. Likewise a user can view the compute and storage usage of their projects under the same tabs in the project view in Iris.

In addition, there is a "Reports" menu at the top of the page from which users can create reports of interest. For more information please see the Iris Users Guide.