Running Your App in Spin

Log into https://rancher2.spin.nersc.gov/, select the cluster you want (development or production) and navigate to your NERSC project. In Spin, a project is the highest level of encapsulation and corresponds to the NERSC projects you have access to in Iris. If you don't see your project in Spin, please contact NERSC staff by filing a ticket.

Choose a namespace in Spin

A Spin project contains namespaces, which are the next level of encapsulation for your Spin applications. Typically, a Spin application will run in a single namespace. Before you can create your first workload, you will need to choose an existing namespace or create a new one within your project. This is where your application will live in Spin.

Start by navigating to Cluster > Projects/Namespaces in the left sidebar. Existing namespaces within the projects you belong to are listed there. If you are using an existing namespace, proceed to the next section. To create a new namespace, make sure you are in the Group by Project view (use the toggle icons at the upper right of the namespace list), then click the Create Namespace button on the project where you would like to add the new namespace. Give your namespace a unique, descriptive name and click the Create button in the bottom left.

Deploy a workload in Spin

Assuming you have created your container images and pushed them to a registry, you can now deploy your app using the Rancher UI. Start by navigating to the Workload menu in the left sidebar, then click the Create button in the upper right. Next, select the type of workload you want to create. For most applications, the Deployment type is the best choice.

Provide a unique, descriptive name for your workload. The Container Image name should be the URL of your container image in the registry, for example:

registry.nersc.gov/<myproject>/<myimage>:<mytag>

Warning

Do not change the default value in the Container Name field from container-0. Changing this field triggers a UI bug, and you will have to start the workload creation process over.

Fill in any other settings your application needs, such as the entrypoint or environment variables.
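Under the hood, the Rancher form fields map onto standard Kubernetes container spec fields. A minimal sketch of the resulting container spec fragment (the image path, entrypoint, and variable names below are placeholders, not part of any real project):

```yaml
# Hypothetical fragment of the container spec that the Rancher
# workload form generates; all concrete values are placeholders.
containers:
  - name: container-0                                   # keep the default name
    image: registry.nersc.gov/myproject/myimage:mytag   # Container Image field
    command: ["/app/start.sh"]                          # entrypoint override
    env:
      - name: APP_ENV                                   # example env var
        value: production
```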

Linux Capability Requirements

Spin has special rules about running apps in order to allow multiple users to use shared resources safely. Spin will not deploy workloads unless they comply with these rules.

To apply these rules to your workload, while you are still in the workload configuration menu, click the Security Context menu under the container-0 tab of your workload.

Scroll to the bottom of the menu and add ALL in the Drop Capabilities drop-down. Add back any capabilities your workload needs in the Add Capabilities drop-down. Spin only allows the following capabilities to be added back: CHOWN, KILL, SETGID, SETUID, NET_BIND_SERVICE, DAC_OVERRIDE, FOWNER.
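In the underlying Kubernetes workload spec, these drop/add selections correspond to the container's securityContext. A sketch of what a compliant configuration looks like (which capabilities you add back depends on your application; the three shown are just examples from the allowed set):

```yaml
# Hypothetical securityContext fragment matching Spin's rules.
securityContext:
  capabilities:
    drop:
      - ALL              # required by Spin for all workloads
    add:
      - CHOWN            # add back only what the app needs,
      - SETGID           # chosen from Spin's allowed set
      - SETUID
```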

Tip

If you mount a NERSC Global File System (NGF) path as a volume, you can only add the NET_BIND_SERVICE capability. Additionally, you will need to specify a NERSC UID and GID for the container to run as so it will be able to access the volume's contents. Note that some images may require additional modification to run as an alternate UID or GID; for example, they may not have write access to files or directories. For more details, see NERSC Global File Systems under Attaching Storage.
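For an NGF-backed workload, the equivalent securityContext sketch combines the restricted capability set with an explicit user and group. The UID/GID values below are placeholders; substitute your own NERSC IDs:

```yaml
# Hypothetical securityContext fragment for a workload mounting NGF.
securityContext:
  runAsUser: 12345       # placeholder: your NERSC UID
  runAsGroup: 67890      # placeholder: your NERSC GID
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE # the only capability allowed with NGF mounts
```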

Verify the deployment

If everything went well, Rancher will deploy your workload, which will create a pod. A pod is a set of containers, where a container is a running instance of a container image. For a typical microservices application, a pod could, for example, be an nginx web server running in a container.
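The nginx case above corresponds to a pod with a single container, roughly like this sketch (illustrative only; Rancher creates pods like this for you via the Deployment, and the name and port are placeholders):

```yaml
# Hypothetical pod produced by a single-container Deployment.
apiVersion: v1
kind: Pod
metadata:
  name: my-webapp          # placeholder name
spec:
  containers:
    - name: container-0
      image: nginx:1.25    # one running container instance
      ports:
        - containerPort: 8080
```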

Click the pod name to view the running containers inside the pod. Next to the container name on the right-hand side is a drop-down menu (click the three dots). Select Execute Shell, which will open a shell inside the container. For example, if you deployed a database image, you can run the database client from this shell to make sure everything worked.

If something failed, you will see the error in red above the pod name, or in the namespace or pod logs.