
Attaching Storage

Containerizing an application ensures that it always runs consistently, and Rancher is great for making sure that application is always available. But what happens to your files and data if a container crashes? By default, the filesystem of a running container is ephemeral, so any changes would be lost and Rancher would start a new container from a fresh copy of the image.

Storage Volumes

Many applications require stateful information that persists beyond the lifetime of a container. To support long-lived data, Kubernetes provides the Volume, a long-lived storage object that can be used to persist data beyond the lifetime of your containers and even to share data between them. Some volumes are simple mounts shared from the host system into the container, while others require the user to make a claim against a piece of a Persistent Volume provisioned by the cluster administrator before attaching the volume to a container.

In Rancher, you can create and attach different volumes to a workload from the edit configuration screen, which you reach by selecting Edit Config from the three dots menu on the workload. Volumes must first be added to the pod via the Pod > Storage tab. Once the volume has been attached to the pod, it will appear in the drop down selection menu for individual containers e.g. in the container-0 > Storage tab. Instructions for specific storage types are provided in more detail below.

Types of Storage in Spin

There are various types of storage volumes depending on the needs of your application or service. Each type of storage has characteristics that make it more or less suitable for different use cases. Some of these characteristics and uses are summarized in the following table. The various types of storage available in Spin are discussed in greater detail below.

Persistent Volume Type   | Characteristics | Access                       | Example Usage
NFS Client               | fast            | local to Spin                | application logs, application state database
NERSC Global File System | large-scale     | Spin and other NERSC systems | large scientific dataset to serve via app
Secrets                  | encrypted       | local to Spin                | database root password

NFS Client

Persistent Volumes using the NFS Client provider use a high-performance networked storage appliance accessible only to Spin nodes. Storage here is fast and available across nodes and restarts. It is a great choice for general application state storage, for example, a database.

To attach NFS client storage, start from the edit configuration menu on a workload:

  1. Under the Pod > Storage tab, scroll to the bottom of the screen and select Create Persistent Volume Claim from the Add Volume drop down.
  2. Give the new Persistent Volume Claim a name, select nfs-client under Storage Class, and give it a capacity of 1 GiB.
  3. Optionally, replace the prefilled Volume Name with something more descriptive of what the volume will be used for.
  4. Don't save yet; switch to the container-0 > Storage tab.
  5. Click the Select Volume drop down to select your newly created volume and attach it to the container.
  6. Specify a Mount Point where the storage will be accessible inside your container. The mount point should be an absolute path (e.g. /my_app_data).
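For reference, the UI steps above correspond roughly to the following Kubernetes manifest. This is an illustrative sketch, not the exact objects Rancher generates; the names, image, and capacity are placeholders.

```yaml
# Hypothetical PVC matching the UI steps above; names are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  storageClassName: nfs-client   # Storage Class from step 2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # capacity from step 2
---
# Attaching the claim to a container in a Deployment's pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: container-0
          image: registry.example.com/my-app:latest   # placeholder image
          volumeMounts:
            - name: my-app-data
              mountPath: /my_app_data   # the Mount Point from step 6
      volumes:
        - name: my-app-data
          persistentVolumeClaim:
            claimName: my-app-data
```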

NERSC Global File Systems

Mounting a NERSC Global File System (NGF) in Spin, such as the Community File System (CFS), allows you to access your files both in your Spin app and on other NERSC systems. This storage option is suitable for large data files but is not suitable for heavy transactional IO load.

To configure an NGF mount, start from the edit configuration menu on a workload:

  1. Under the Pod > Storage tab, choose Bind-Mount from the Add Volume drop down.
  2. Under Path on Node, specify the absolute path to the NGF resource you want to add, e.g. /global/cfs/cdirs/myproj/myapp.
  3. Under The Path on the Node must be, select An existing directory.
  4. As before, switch to the container-0 > Storage tab, attach the new volume to the container, and specify a mount point.
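In pod-spec terms, a Bind-Mount of this kind is a hostPath volume. The fragment below is a sketch of what the steps above produce; the volume name and mount point are placeholders.

```yaml
# Hypothetical pod-spec fragment equivalent to the Bind-Mount steps above.
volumes:
  - name: cfs-data
    hostPath:
      path: /global/cfs/cdirs/myproj/myapp   # Path on Node (step 2)
      type: Directory                        # "An existing directory" (step 3)
containers:
  - name: container-0
    image: registry.example.com/my-app:latest   # placeholder image
    volumeMounts:
      - name: cfs-data
        mountPath: /data   # mount point inside the container (step 4)
```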

Additionally, because NGF contains data owned by other users and projects, workloads with NGF mounts must run as the user and group of their owners and with more limited capabilities.

  1. In the container-0 > Security Context tab, in the Run as User ID box, enter your numeric NERSC user id or the id of a collab user you can become.
  2. In the Add Capabilities drop down, remove everything except NET_BIND_SERVICE (if present).
  3. In the Pod > Security Context tab, in the Filesystem Group box, enter the id of a group you belong to.
  4. Additionally, make sure every directory in GPFS from the root path down to the mount point has o+x permissions.

These settings will cause Rancher to run your container image as the user specified, ensuring that access to data is secured.
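The security settings above map onto the pod and container securityContext fields. This is a sketch only; 12345 and 67890 are placeholder user and group IDs.

```yaml
# Hypothetical securityContext settings matching the steps above.
spec:
  securityContext:            # Pod > Security Context
    fsGroup: 67890            # Filesystem Group: a group you belong to
  containers:
    - name: container-0
      securityContext:        # container-0 > Security Context
        runAsUser: 12345      # your NERSC UID or a collab user's UID
        capabilities:
          drop: ["ALL"]
          add: ["NET_BIND_SERVICE"]   # keep only if your app binds a low port
```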

The following Global File Systems are available in Spin:

Path on Spin node | Access type
/cvmfs            | read-only
/global/cfs/      | read/write
/global/common/   | read-only
/global/dna/      | read-only

When creating an external volume mount, the Path on Node value must start with one of the paths in the above table. Set the The Path on the Node must be drop-down to An existing directory. The mount point in the container can be any absolute path.

Note

When the UID and GID are specified as required with an NGF mount, the process(es) within the container will inherit them and will no longer correspond to users or belong to groups defined locally within the container image. This may affect the behavior of applications. In this situation, we recommend modifying the image by a) using the groupmod command in the Dockerfile to add the UID to any groups required for the application and/or modifying their GID to match one you belong to, and b) using the chown -fR command to update file ownership as needed.
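As a sketch, the image modifications mentioned in the note might look like the following Dockerfile fragment. The group name, path, UID, and GID here are all placeholders; adapt them to your image.

```dockerfile
# Hypothetical fixups; appgroup, /app, 12345, and 67890 are placeholders.
# Remap the application group's GID to one the Spin user belongs to.
RUN groupmod -g 67890 appgroup
# Update file ownership so the remapped UID/GID can access application files.
RUN chown -fR 12345:67890 /app
```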

Secrets

Another option in the Add Volume drop-down is to use a Secret. A secret is a piece of information you want your running container to have, but don't want others looking at your Spin configuration to see.
For example, you could specify the database password as an environment variable, but then anyone inspecting the configuration would see it.

A better way is to create a secret. This can be done in the Rancher UI before you deploy your application:

  1. In the left sidebar navigate to Storage > Secrets.
  2. Click Create in the upper right corner of the Secrets panel.
  3. Select the default secret type Opaque.
  4. Give your secret a descriptive name, fill in a key value pair, and click Create in the bottom right to save the new secret.

Now the secret may be attached to as many workloads as you require.
From the edit configuration menu of a workload:

  1. Under the Pod > Storage tab, select Secret from the Add Volume drop down.
  2. Select your newly created secret. Optionally give the volume a descriptive name.
  3. Under the container-0 > Storage tab, use the Select Volume drop down to attach your new volume containing your secret, and specify a mount point as before.
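The same result can be sketched in manifest form. The secret name, key, and mount point below are placeholders; when mounted as a volume, each key in the secret appears as a file under the mount point.

```yaml
# Hypothetical Secret equivalent to the UI steps above.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: "change-me"        # key/value pair from step 4
---
# Pod-spec fragment: mounting the secret into a container.
volumes:
  - name: db-credentials
    secret:
      secretName: db-credentials
containers:
  - name: container-0
    volumeMounts:
      - name: db-credentials
        mountPath: /secrets    # the key appears as the file /secrets/password
```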

Config Maps

Config Maps are very similar to Secrets, except they are not hidden or encrypted. Config Maps are commonly used in place of specifying environment variables in the container configuration. There are several benefits to using Config Maps:

  • Settings in a Config Map persist even if a deployment is deleted.
  • Config Maps may easily be shared between several deployments.
  • Versioning Config Maps provides a snapshot in time of application settings, making it easy to roll back or switch settings.

These benefits increase for users taking advantage of version control and CI/CD to manage their Kubernetes deployments.

The process for creating and attaching a Config Map is the same as with Secrets, but using the Storage > ConfigMaps menu item in the left navigation sidebar.
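A Config Map is declared much like a Secret, but with plain-text data. The name and keys below are illustrative placeholders.

```yaml
# Hypothetical ConfigMap; mount it as a volume the same way as a Secret,
# or expose its keys to a container as environment variables via envFrom.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
```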