
Container Service for Kubernetes:Use JindoRuntime to persist storage for the JindoFS master

Last Updated:Mar 26, 2026

When a JindoFS master container restarts or is rescheduled in Kubernetes, all metadata and mount points it maintains are lost, making the JindoFS cluster unavailable. To prevent this, use Fluid JindoRuntime to persist JindoFS master metadata to a Kubernetes persistent volume (PV). The metadata survives pod restarts and rescheduling, keeping your distributed caching cluster available.

How it works

JindoFS uses a master-worker architecture. The master maintains metadata and mount points for cached data; the workers manage the cached data itself. When you containerize the master and workers in a Kubernetes cluster, the master pod can be restarted or rescheduled at any time.

Without persistent storage, the master's metadata lives only in the pod's local filesystem and is lost on restart. By mounting a PersistentVolumeClaim (PVC) to the master pod and setting namespace.meta-dir to the mount path, you direct JindoRuntime to write metadata to the PV. When the pod is recreated on another node, it reads the same metadata from the PV and resumes normal operation.

This guide uses a disk volume for persistent storage. A disk volume persists across pod restarts and rescheduling within the same availability zone. Make sure your cluster has at least two nodes in the same zone so the master pod can be rescheduled to a different node while still accessing the same disk.
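To confirm that layout, you can group nodes by the standard topology.kubernetes.io/zone label. A minimal sketch; the nodes_output sample below stands in for live output of kubectl get nodes -L topology.kubernetes.io/zone --no-headers, and the node and zone names are placeholders:

```shell
# Count nodes per availability zone. In a live cluster, replace the sample
# with:  kubectl get nodes -L topology.kubernetes.io/zone --no-headers
nodes_output='node-a   Ready   <none>   10d   v1.28.3   cn-beijing-a
node-b   Ready   <none>   10d   v1.28.3   cn-beijing-a
node-c   Ready   <none>   10d   v1.28.3   cn-beijing-b'

# The zone label value is the last column; count nodes in each zone.
printf '%s\n' "$nodes_output" | awk '{print $NF}' | sort | uniq -c | sort -rn
```

A zone with a count of at least 2 can host the rescheduled master pod while keeping the disk attachable.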

Prerequisites

Before you begin, make sure you have:

  - An ACK cluster with at least two nodes in the same availability zone, so the master pod can be rescheduled within that zone.
  - Fluid installed in the cluster with JindoRuntime support.
  - An OSS bucket, and a RAM user AccessKey pair with read access to that bucket.

Step 1: Create a PVC for master metadata

Create a file named pvc.yaml with the following content:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-jindo-master-meta
spec:
  accessModes:
    - ReadWriteOnce                        # Disk volumes support single-node read/write
  storageClassName: alicloud-disk-topology-alltype
  resources:
    requests:
      storage: 30Gi

For a full list of PVC parameters, see Use a dynamically provisioned disk volume.

Apply the manifest:

kubectl create -f pvc.yaml

Expected output:

persistentvolumeclaim/demo-jindo-master-meta created

Step 2: Create a Dataset and a JindoRuntime

Create credentials

Create a file named secret.yaml to store the AccessKey ID and AccessKey secret that the RAM user uses to access the OSS bucket:

apiVersion: v1
kind: Secret
metadata:
  name: access-key
stringData:
  fs.oss.accessKeyId: ******     # Replace with your AccessKey ID
  fs.oss.accessKeySecret: ******  # Replace with your AccessKey secret

Apply it:

kubectl create -f secret.yaml

Expected output:

secret/access-key created
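The Secret above uses stringData, which accepts plaintext values; the Kubernetes API server base64-encodes them and stores them under .data. A quick sketch of that equivalence; the key value here is a placeholder, not a real credential:

```shell
# stringData lets you write the plaintext value; Kubernetes stores it
# base64-encoded under .data, equivalent to encoding it yourself:
access_key_id='LTAI****PLACEHOLDER'   # placeholder, not a real AccessKey ID
encoded=$(printf '%s' "$access_key_id" | base64)
echo "$encoded"

# Decoding recovers the original value, which is what consumers of the
# Secret (here, the JindoRuntime) see at runtime.
printf '%s' "$encoded" | base64 -d
```

If you write the data field directly instead of stringData, you must supply the base64-encoded form yourself.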

Create the Dataset and JindoRuntime

Create a file named dataset.yaml with the following content. The file defines both the Dataset and the JindoRuntime in a single manifest.

apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: demo
spec:
  mounts:
    - mountPoint: oss://<OSS_BUCKET>/<BUCKET_DIR>   # Replace with your OSS bucket path
      name: demo
      path: /
      options:
        fs.oss.endpoint: <OSS_BUCKET_ENDPOINT>       # Replace with your OSS endpoint
      encryptOptions:
        - name: fs.oss.accessKeyId
          valueFrom:
            secretKeyRef:
              name: access-key
              key: fs.oss.accessKeyId
        - name: fs.oss.accessKeySecret
          valueFrom:
            secretKeyRef:
              name: access-key
              key: fs.oss.accessKeySecret
---
apiVersion: data.fluid.io/v1alpha1
kind: JindoRuntime
metadata:
  name: demo
spec:
  replicas: 2
  # [ Required for persistent storage ]
  volumes:
    - name: meta-vol
      persistentVolumeClaim:
        claimName: demo-jindo-master-meta  # The PVC created in Step 1
  master:
    # [ Required for persistent storage ]
    volumeMounts:
      - name: meta-vol
        mountPath: /root/jindofsx-meta    # Path where the PV is mounted in the master pod
    properties:
      namespace.meta-dir: "/root/jindofsx-meta"  # Must match mountPath above
  tieredstore:
    levels:
      - mediumtype: MEM
        path: /dev/shm
        volumeType: emptyDir   # Worker cache — not persistent across restarts
        quota: 12Gi
        high: "0.99"
        low: "0.99"
Warning

The tieredstore uses emptyDir, which is NOT persistent. Worker cache data will be LOST when the worker pod restarts or is rescheduled. Only the master metadata (written to the PVC) survives restarts. Do not use emptyDir as a persistent storage solution in production.

The three JindoRuntime fields that enable persistent storage work together:

Field                 Purpose
volumes               Declares the PVC to attach to the JindoRuntime. Set claimName to the PVC you created in Step 1.
master.volumeMounts   Mounts the declared volume into the master pod at the specified mountPath.
master.properties     Sets the metadata directory. namespace.meta-dir must match the mountPath in master.volumeMounts.
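Because a mismatch between namespace.meta-dir and the mountPath silently leaves metadata on the pod's local filesystem, a small consistency check before applying the manifest can help. A sketch; the heredoc fragment stands in for reading your dataset.yaml:

```shell
# Sketch: verify that namespace.meta-dir matches the master volumeMounts
# mountPath. The heredoc stands in for the relevant part of dataset.yaml.
manifest=$(cat <<'EOF'
  master:
    volumeMounts:
      - name: meta-vol
        mountPath: /root/jindofsx-meta
    properties:
      namespace.meta-dir: "/root/jindofsx-meta"
EOF
)

mount_path=$(printf '%s\n' "$manifest" | awk '/mountPath:/ {print $2; exit}')
meta_dir=$(printf '%s\n' "$manifest" | awk -F'"' '/namespace.meta-dir/ {print $2; exit}')

if [ "$mount_path" = "$meta_dir" ]; then
  echo "OK: meta-dir matches mountPath ($mount_path)"
else
  echo "MISMATCH: mountPath=$mount_path meta-dir=$meta_dir" >&2
fi
```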

Apply the manifest:

kubectl create -f dataset.yaml

Expected output:

dataset.data.fluid.io/demo created
jindoruntime.data.fluid.io/demo created

Verify the Dataset is bound:

kubectl get dataset

Expected output:

NAME   UFS TOTAL SIZE   CACHED   CACHE CAPACITY   CACHED PERCENTAGE   PHASE   AGE
demo   531.89MiB        0.00B    24.00GiB         0.0%                Bound   5m35s

When the PHASE column shows Bound, the Dataset and JindoRuntime are ready. Fluid automatically creates a PVC named demo that application pods can mount to access cached data.

Step 3: Verify persistent storage

Simulate a master pod reschedule to confirm that metadata survives.

  1. Create a file named pod.yaml to deploy a test application that mounts the Dataset:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
        - name: nginx
          image: registry.openanolis.cn/openanolis/nginx:1.14.1-8.6
          volumeMounts:
            - mountPath: /data
              name: data-vol
      volumes:
        - name: data-vol
          persistentVolumeClaim:
            claimName: demo
  2. Deploy the pod:

    kubectl create -f pod.yaml

    Expected output:

    pod/nginx created
  3. Confirm data access works before the reschedule:

    kubectl exec -it nginx -- ls /data

    The output lists files from the OSS bucket path specified in the Dataset's spec.mounts.mountPoint field.

  4. Find the node running the JindoFS master:

    master_node=$(kubectl get pod -o wide | awk '/demo-jindofs-master-0/ {print $7}')
  5. Add a taint to that node to prevent pods from being scheduled back to it:

    kubectl taint node $master_node test-jindofs-master=reschedule:NoSchedule

    Expected output:

    node/cn-beijing.192.168.xx.xx tainted
  6. Delete the master pod. Kubernetes recreates it on a different node in the same zone and mounts the same PVC:

    Note: The new master pod must be scheduled to a node in the same availability zone as the original node, because disk volumes cannot be mounted across zones. Make sure your cluster has at least two nodes in the zone where the original master pod ran.

    kubectl delete pod demo-jindofs-master-0

    Expected output:

    pod "demo-jindofs-master-0" deleted
  7. Recreate the application pod to verify data is still accessible after the master reschedule:

    kubectl delete -f pod.yaml && kubectl create -f pod.yaml

    Expected output:

    pod "nginx" deleted
    pod/nginx created
  8. Confirm data access from the recreated pod:

    kubectl exec -it nginx -- ls /data

    The output lists the same files as before the reschedule, confirming that persistent storage is working.
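The node lookup earlier in this procedure relies on NODE being the seventh column of kubectl get pod -o wide output. A sketch against sample output; the pod IP and node names are placeholders:

```shell
# `kubectl get pod -o wide` columns:
# NAME  READY  STATUS  RESTARTS  AGE  IP  NODE  NOMINATED NODE  READINESS GATES
# The awk filter matches the master pod's row and prints column 7 (NODE).
sample_output='NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE                       NOMINATED NODE   READINESS GATES
demo-jindofs-master-0   1/1     Running   0          10m   172.20.0.10   cn-beijing.192.168.0.101   <none>           <none>
demo-jindofs-worker-0   1/1     Running   0          10m   172.20.0.11   cn-beijing.192.168.0.102   <none>           <none>'

master_node=$(printf '%s\n' "$sample_output" | awk '/demo-jindofs-master-0/ {print $7}')
echo "$master_node"
```

A more column-order-proof alternative in a live cluster is kubectl get pod demo-jindofs-master-0 -o jsonpath='{.spec.nodeName}'.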

Step 4: Clean up

  1. Delete the application pod:

    kubectl delete -f pod.yaml

    Expected output:

    pod "nginx" deleted
  2. Remove the taint from the node:

    kubectl taint node $master_node test-jindofs-master-

    Expected output:

    node/cn-beijing.192.168.xx.xx untainted
  3. (Optional) Delete the Dataset, JindoRuntime, and PVC:

    Important

    After you create a disk volume, you are billed for the underlying disk. If you no longer need data acceleration, delete the resources to stop incurring charges. See Disk volumes for pricing details. Before deleting, make sure no application is using the Dataset and no I/O operations are in progress.

    kubectl delete -f dataset.yaml
    kubectl delete -f pvc.yaml