
Connecting Your Kubernetes Cluster to Your Qumulo Cluster by Using the Qumulo Container Storage Interface (CSI) Driver

This article introduces the Qumulo Container Storage Interface (CSI) driver and explains how you can connect your Kubernetes cluster to your Qumulo cluster by using the Qumulo CSI driver.

To automate container storage, enable dynamic volumes, and help you scale your application container images based on usage and workflows, Qumulo uses its CSI driver to connect the Kubernetes orchestrator to Qumulo persistent storage. (In comparison, for example, the NFS CSI Driver for Kubernetes requires unprivileged NFS access for dynamic volumes and doesn't support volume sizing and expansion.)

For general driver information, see Container Storage Interface (CSI) Specification.

SUPPORTED FEATURES

The Qumulo CSI driver supports static and dynamic provisioning, including volume expansion, over NFSv3.

Currently, the driver doesn't support the following features:

REQUIREMENTS

  • Qumulo cluster
  • Kubernetes 1.19 (or higher)

CONNECTING YOUR QUMULO CLUSTER TO KUBERNETES

This section explains how you can configure, provision, and mount Qumulo storage for each Pod (a logical wrapper for a container) on Kubernetes by using dynamic provisioning. This gives you more control over persistent volume capacity.

Step 1: Install the Qumulo CSI Driver

  1. Log in to a machine that has kubectl and can access your Kubernetes cluster.
  2. Download the .zip file manually or by using one of the following commands.
    • S3
      aws s3 cp s3://csi-driver-qumulo/deploy_v1.0.0.zip ./
    • HTTP
      wget https://csi-driver-qumulo.s3.us-west-2.amazonaws.com/deploy_v1.0.0.zip
  3. Extract the contents of the .zip file.
  4. Run the installation script, specifying the current release version, for example:
    • On Linux
      cd deploy_v1.0.0
      chmod +x install-driver.sh
      ./install-driver.sh
    • On Windows
      cd deploy_v1.0.0
      install-driver.bat
    The script configures Qumulo's prebuilt Elastic Container Registry (ECR) image (from public.ecr.aws/qumulo/csi-driver-qumulo:v1.0.0) and installs it on your Kubernetes system.
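To confirm that the driver installed correctly, you can query your Kubernetes cluster. (The following is a sketch; the `csi-qumulo` pod names are an assumption based on the default deployment scripts and may differ in your installation.)

```shell
# Check that the CSI driver object is registered with the cluster
kubectl get csidriver qumulo.csi.k8s.io

# Check that the driver's controller and node pods are running
# (pod name prefix is an assumption based on the default deployment)
kubectl get pods -n kube-system | grep csi-qumulo
```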

Step 2: Configure Volume and NFS Export Paths

To prepare your Qumulo cluster for connecting to your Kubernetes cluster, you must first configure your volume and NFS export paths on your Qumulo cluster by setting the following parameters for each storage class that you define.

Note: You will need the paths that you configure for the following YAML keys in the storageclass-qumulo.yaml file in Step 5.

  1. For storeRealPath, from the root of the Qumulo file system, create a directory for storing volumes on your Qumulo cluster, for example /csi/volumes1.
  2. For storeExportPath, create the NFS export for hosting the persistent volume.
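If you manage your Qumulo cluster with the qq CLI, the two paths can be created roughly as follows. This is a sketch, not the documented procedure: the paths, export name, and directory layout are examples, and you should check your qq version for the exact flags.

```shell
# Create the directory that stores volumes (storeRealPath), for example /csi/volumes1
qq fs_create_dir --path /csi --name volumes1

# Create the NFS export that hosts the persistent volumes (storeExportPath);
# this example exposes the volumes directory directly
qq nfs_add_export --export-path /some/export --fs-path /csi/volumes1 \
  --description "Kubernetes CSI export"
```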

Step 3: Configure Credentials

To connect your Kubernetes cluster to your Qumulo cluster, you must either use an existing account or create a new account for the CSI driver to communicate with the Qumulo API.

  1. Configure a username and password for a user on your Qumulo cluster.
  2. Ensure that the user has the following permissions:

    • Lookup on storeRealPath
    • Directory creation in storeRealPath
    • Creating and modifying quotas:
      • PRIVILEGE_QUOTA_READ
      • PRIVILEGE_QUOTA_WRITE
    • Reading NFS exports: PRIVILEGE_NFS_EXPORT_READ
    • TreeDelete for volume directories: PRIVILEGE_FS_DELETE_TREE_WRITE

For more information, see Role-Based Access Control (RBAC) with Qumulo Core.
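For example, if you have created a custom role that carries the privileges listed above, you can assign it to the CSI user with the qq CLI. The role name below is hypothetical, and you should verify the exact command syntax for your version of Qumulo Core.

```shell
# Assign a custom role (hypothetical name: CsiAccess) that carries the
# quota, NFS export, and tree-delete privileges to the CSI user
qq auth_assign_role --role CsiAccess --trustee bill
```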

Step 4: Create and Configure Secrets

To allow the CSI driver to operate with your Qumulo cluster, you must create and configure Secrets.

  1. Configure a Secret for the username, for example:
    kubectl create secret generic cluster1-login \
    --type="kubernetes.io/basic-auth" \
    --from-literal=username=bill \
    --from-literal=password=SuperSecret \
    --namespace=kube-system
  2. Give the CSI driver access to the Secrets, for example:
    kubectl create role access-secrets \
    --verb=get,list,watch \
    --resource=secrets \
    --namespace kube-system
    kubectl create rolebinding default-to-secrets \
    --role=access-secrets \
    --serviceaccount=kube-system:csi-qumulo-controller-sa \
    --namespace kube-system
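Alternatively, you can manage the login Secret as a declarative manifest. The following sketch is equivalent to the kubectl create secret command above; it uses stringData so that the example values stay human-readable (replace the credentials with your own before applying).

```yaml
# cluster1-login.yaml: declarative equivalent of the Secret created above
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-login
  namespace: kube-system
type: kubernetes.io/basic-auth
stringData:
  username: bill
  password: SuperSecret
```

Apply it with kubectl apply -f cluster1-login.yaml.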

Step 5: Create a Storage Class

To link your Kubernetes cluster to your Qumulo cluster, you must create a storage class on your Kubernetes cluster.

  1. Start from the example Qumulo storage class configuration:
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cluster1
    provisioner: qumulo.csi.k8s.io
    parameters:
      server: 203.0.113.0
      storeRealPath: "/regions/4234/volumes"
      storeExportPath: "/some/export"
      csi.storage.k8s.io/provisioner-secret-name: cluster1-login
      csi.storage.k8s.io/provisioner-secret-namespace: kube-system
      csi.storage.k8s.io/controller-expand-secret-name: cluster1-login
      csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    mountOptions:
      - nolock
      - intr
      - proto=tcp
    allowVolumeExpansion: true
  2. Edit the configuration for your Qumulo cluster:
    1. Name your storage class.
    2. Specify server and storeRealPath.
    3. Specify storeExportPath.
    4. Configure the following parameters to point to the Secrets (that you created in Step 4) in the namespace in which you installed the CSI driver.
      • provisioner-secret-name
      • provisioner-secret-namespace
      • controller-expand-secret-name
      • controller-expand-secret-namespace
    5. Specify the NFS mountOptions, for example:
      mountOptions:
      - nolock
      - intr
      - proto=tcp
      - vers=3
    6. To create the class, apply the configuration, for example:
      kubectl create -f storageclass-qumulo.yaml
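After you create the class, you can confirm that Kubernetes accepted it and review its parameters (the class name matches the example configuration above):

```shell
# Confirm the storage class exists and lists qumulo.csi.k8s.io as its provisioner
kubectl get storageclass cluster1

# Review the full parameter set, including the Secret references
kubectl describe storageclass cluster1
```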

Step 6: Create a Persistent Volume Claim (PVC) and Apply It to a Pod

To apply a PVC to a Pod dynamically, you must first configure and create the claim.

  1. Start from the example PVC configuration.
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: claim1
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: cluster1
      resources:
        requests:
          storage: 1Gi
  2. Edit the configuration for your PVC:
    1. Name your claim.
    2. Set storageClassName to the name of the storage class that you created in Step 5.
    3. Specify the capacity in spec.resources.requests.storage.
      This parameter lets you create a quota on your Qumulo cluster.
    4. To create the claim, apply the configuration, for example:
      kubectl apply -f dynamic-pvc.yaml
  3. Use the claim in a Pod or a Deployment, for example:
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: claim1-pod
    spec:
      volumes:
        - name: cluster1
          persistentVolumeClaim:
            claimName: claim1
      containers:
        - name: claim1-container
          image: ...
          volumeMounts:
            - mountPath: "/cluster1"
              name: cluster1
    Important: When the PVC is released, a tree-delete is initiated on the Qumulo cluster for the directory that the PVC indicates. To prevent this behavior, set reclaimPolicy to Retain.
  4. You can now launch and use your container image.
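To confirm that the claim bound and that the Pod mounted it, you can check both objects (the names match the examples above):

```shell
# The claim should report a STATUS of Bound
kubectl get pvc claim1

# The Pod should be Running, with the volume mounted at /cluster1
kubectl get pod claim1-pod
kubectl exec claim1-pod -- df -h /cluster1
```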

 
