Container Service for Kubernetes:Mount a dynamically provisioned NAS volume

Last Updated: Dec 15, 2023

You can use the Container Storage Interface (CSI) driver to mount a dynamically provisioned Apsara File Storage NAS (NAS) volume to a Container Service for Kubernetes (ACK) cluster in subpath and filesystem modes. This topic describes how to mount a dynamically provisioned NAS volume to an ACK cluster. It also describes how to test whether the NAS volume can persist and share data as expected.

Prerequisites

  • An ACK cluster is created. For more information, see Create an ACK managed cluster.

  • A NAS file system is created. For more information, see Create a file system.

    If you want to encrypt data in a NAS volume, configure the encryption settings when you create the NAS file system.

  • A mount target is created for the NAS file system. For more information, see Manage mount targets.

    The mount target and the node to which you want to mount the NAS file system must belong to the same virtual private cloud (VPC).

Scenarios

  • Your application requires high disk I/O.

  • You need a storage service that offers higher read and write throughput than Object Storage Service (OSS).

  • You want to share files across hosts. For example, you want to use a NAS file system as a file server.

Considerations

  • To mount an Extreme NAS file system, set the path parameter of the NAS volume to a subdirectory of /share. For example, a value of 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/share/subpath indicates that the mounted subdirectory of the NAS file system is /share/subpath.

  • If a NAS file system is mounted to multiple pods, the data in the file system is shared by the pods. In this case, the application must be able to synchronize data across the pods if the data in the NAS file system is modified by multiple pods.

    Note

    You cannot grant permissions to access the / directory (root directory) of the NAS file system. The user account and user group to which the directory belongs cannot be modified.

  • If the securityContext.fsGroup parameter is set in the application template, the kubelet performs chmod or chown operations after the volume is mounted, which increases the time required to mount the volume.

    Note

    For more information about how to speed up the mounting process when the securityContext.fsGroup parameter is set, see Why does it take a long time to mount a NAS volume?.
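If you set the securityContext.fsGroup parameter, you can usually shorten the mount time by also setting fsGroupChangePolicy to OnRootMismatch, the same combination used in the StatefulSet example at the end of this topic. With this policy, the kubelet changes ownership and permissions only when the root directory of the volume does not already match, instead of recursively on every mount. The following minimal sketch shows only the relevant fields; the rest of the pod specification is omitted:

  spec:
    securityContext:
      fsGroup: 65534                          # Files created through the volume use this GID.
      fsGroupChangePolicy: "OnRootMismatch"   # Skip the recursive permission change if the root directory already matches.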

Mount a dynamically provisioned NAS volume in the ACK console

You can mount a dynamically provisioned NAS volume only in subpath mode if you use the console. To mount a dynamically provisioned NAS volume in filesystem mode, you must use the kubectl command-line tool.

Step 1: Create a StorageClass

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Volumes > StorageClasses in the left-side navigation pane.

  3. In the upper-right corner of the StorageClasses page, click Create.

  4. In the Create dialog box, configure the parameters.

    The following list describes some of the parameters.

    • Name: The name of the StorageClass. The name must start with a lowercase letter and can contain only lowercase letters, digits, periods (.), and hyphens (-).

    • PV Type: You can select Cloud Disk or NAS. In this example, NAS is selected.

    • Volume Plug-in: By default, CSI is selected.

    • Reclaim Policy: The reclaim policy. By default, this parameter is set to Delete. You can also set this parameter to Retain.

      • Delete: If you use this policy, you must set the archiveOnDelete parameter.

        • If you set archiveOnDelete to true, the persistent volume (PV) and NAS file system associated with a persistent volume claim (PVC) are renamed and retained after you delete the PVC.

        • If you set archiveOnDelete to false, the PV and NAS file system associated with a PVC are deleted after you delete the PVC.

      • Retain: When a PVC is deleted, the associated PV and NAS file system are retained and can only be manually deleted.

      If you have high requirements for data security, we recommend that you use the Retain policy to prevent data loss caused by user errors.

    • Mount Options: The mount options, such as the Network File System (NFS) version.

    • Mount Target Domain Name: The mount target of the NAS file system. If no mount target is available, you must create a NAS file system first. For more information, see Use CNFS to manage NAS file systems.

    • Path: The mount path of the NAS file system.

  5. After you complete the parameter configurations, click Create.

    You can find the created StorageClass in the StorageClasses list.
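    If you prefer to double-check from the command line, the following command lists the StorageClasses in the cluster, including the one you just created. For the NAS StorageClass, the PROVISIONER column shows nasplugin.csi.alibabacloud.com:

      kubectl get sc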

Step 2: Create a PVC

  1. In the left-side navigation pane of the details page, choose Volumes > Persistent Volume Claims.

  2. In the upper-right corner of the Persistent Volume Claims page, click Create.

  3. In the Create PVC dialog box, set the following parameters.

    • PVC Type: You can select Cloud Disk, NAS, or OSS. In this example, NAS is selected.

    • Name: The name of the PVC. The name must be unique in the cluster.

    • Allocation Mode: In this example, Use StorageClass is selected.

    • Existing Storage Class: Click Select. In the Select Storage Class dialog box, find the StorageClass that you want to use and click Select in the Actions column.

    • Capacity: The capacity claimed by the PVC.

    • Access Mode: You can select ReadWriteMany or ReadWriteOnce.

  4. Click Create.

    After the PVC is created, you can find the PVC in the PVCs list. The PVC is bound to the corresponding PV.
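    You can also verify the binding from the command line. The following command lists the PVCs in the current namespace; the created PVC should be displayed with the Bound status:

      kubectl get pvc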

Step 3: Create an application

  1. In the left-side navigation pane of the details page, choose Workloads > Deployments.

  2. In the upper-right corner of the Deployments page, click Create from Image.

  3. Configure the application parameters.

    This example shows how to configure the volume parameters. For more information about other parameters, see Create a stateless application by using a Deployment.

    You can add local volumes and cloud volumes.

    • Add Local Storage: You can select HostPath, ConfigMap, Secret, or EmptyDir from the PV Type drop-down list. Then, set the Mount Source and Container Path parameters to mount the volume to a container path. For more information, see Volumes.

    • Add PVC: You can add cloud volumes.

    In this example, a NAS volume is mounted to the /tmp path in the container.

  4. Set other parameters and click Create.

    After the application is created, you can use the volume to store application data.
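    To confirm from the command line that the path is backed by the NAS volume, you can check the file system that is mounted at /tmp. The pod name below is a placeholder; query it with kubectl get pod first:

      kubectl get pod
      kubectl exec <pod-name> -- df -h /tmp

    In the df output, the Filesystem column should show the mount target domain name of the NAS file system.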

Mount a dynamically provisioned NAS volume in subpath mode by using kubectl

The subpath mode is applicable to scenarios where you want to share a NAS volume among different applications or pods. You can also use this mode to mount different subdirectories of the same NAS file system to different pods.

To mount a dynamically provisioned NAS volume in subpath mode, you must manually create a NAS file system and a mount target.

  1. Create a NAS file system and a mount target.

    1. Log on to the NAS console.

    2. Create a NAS file system. For more information, see Create a file system.

    3. Create a mount target. For more information, see Manage mount targets.

  2. Create a StorageClass.

    1. Create a file named alicloud-nas-subpath.yaml and copy the following content to the file:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: alicloud-nas-subpath
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
      parameters:
        volumeAs: subpath
        server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
      provisioner: nasplugin.csi.alibabacloud.com
      reclaimPolicy: Retain

      • mountOptions: The mount options of the NAS file system. For example, you can specify the NFS version that you want to use.

      • volumeAs: Valid values: subpath and filesystem. To mount a subdirectory of a NAS file system to the cluster, set the value to subpath. To mount an entire file system to the cluster, set the value to filesystem.

      • server: When you mount a subdirectory of the NAS file system as a PV, this parameter specifies the mount target of the NAS file system.

      • provisioner: The type of the storage driver that is used to provision the volume. In this example, the parameter is set to nasplugin.csi.alibabacloud.com. This indicates that the CSI plug-in provided by Alibaba Cloud is used.

      • reclaimPolicy: The reclaim policy of the PV. Default value: Delete. You can also set the value to Retain.

        • Delete: If you use this policy, you must set the archiveOnDelete parameter.

          • If you set archiveOnDelete to true, the PV and NAS file system associated with a PVC are renamed and retained after you delete the PVC.

          • If you set archiveOnDelete to false, the PV and NAS file system associated with a PVC are deleted after you delete the PVC.

        • Retain: When a PVC is deleted, the associated PV and NAS file system are retained and can only be manually deleted.

        If you have high requirements for data security, we recommend that you use the Retain policy to prevent data loss caused by user errors.

      • archiveOnDelete: The reclaim policy of the backend storage when reclaimPolicy is set to Delete. NAS is a shared storage service. You must set both reclaimPolicy and archiveOnDelete to ensure data security. Configure this parameter in the parameters section. Default value: true. This value indicates that the subdirectory and its files are not deleted when the PVC is deleted. Instead, the subdirectory is renamed in the format of archived-{pvName}.{timestamp}. A value of false indicates that the backend storage is deleted when the PVC is deleted.

        Note

        We recommend that you do not set the value to false when the service receives a large amount of network traffic. For more information, see What do I do if the task queue of alicloud-nas-controller is full and PVs cannot be created when I use a dynamically provisioned NAS volume?.

    2. Run the following command to create a StorageClass:

      kubectl create -f alicloud-nas-subpath.yaml
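      The StorageClass above uses the Retain policy. As a variant (an illustrative sketch, not part of the original procedure), the following StorageClass uses the Delete policy together with archiveOnDelete set to "true", so that when a PVC is deleted, its subdirectory is renamed in the format archived-{pvName}.{timestamp} instead of being deleted:

        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: alicloud-nas-subpath-archive    # Example name for this variant.
        mountOptions:
        - nolock,tcp,noresvport
        - vers=3
        parameters:
          volumeAs: subpath
          server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
          archiveOnDelete: "true"    # Archive the subdirectory on PVC deletion instead of deleting it.
        provisioner: nasplugin.csi.alibabacloud.com
        reclaimPolicy: Delete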
  3. Create a PVC.

    1. Create a file named pvc.yaml and copy the following content to the file:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata: 
        name: nas-csi-pvc
      spec:
        accessModes:
        - ReadWriteMany 
        storageClassName: alicloud-nas-subpath
        resources: 
          requests:
            storage: 20Gi

      • name: The name of the PVC.

      • accessModes: The access mode of the PVC.

      • storageClassName: The name of the StorageClass that you want to associate with the PVC.

      • storage: The storage that is claimed by the PVC.

        Note

        This parameter does not limit the amount of storage that the application can use, and the storage claimed by the PVC does not automatically increase. A quota is set on the subdirectory of the mounted NAS file system only when the file system is a General-purpose NAS file system and the allowVolumeExpansion parameter of the StorageClass is set to true, as shown in the sketch that follows.
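      As stated in the note, a quota takes effect only when the StorageClass sets allowVolumeExpansion to true and the file system is a General-purpose NAS file system. The following sketch shows where the field is added; apart from this field, it is the StorageClass that was created in the previous step:

        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: alicloud-nas-subpath
        mountOptions:
        - nolock,tcp,noresvport
        - vers=3
        parameters:
          volumeAs: subpath
          server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
        provisioner: nasplugin.csi.alibabacloud.com
        reclaimPolicy: Retain
        allowVolumeExpansion: true    # Required for directory quotas on General-purpose NAS file systems.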

    2. Run the following command to create a PVC:

      kubectl create -f pvc.yaml
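      You can optionally verify that the PVC was bound. After the provisioner creates the subdirectory and the corresponding PV, the STATUS column shows Bound:

        kubectl get pvc nas-csi-pvc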
  4. Create applications.

    Create two applications named nginx-1 and nginx-2 to share the same subdirectory of the NAS file system.

    1. Create a file named nginx-1.yaml and copy the following content to the file:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deployment-nas-1
        labels:
          app: nginx-1
      spec:
        selector:
          matchLabels:
            app: nginx-1
        template:
          metadata:
            labels:
              app: nginx-1
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
                - name: nas-pvc
                  mountPath: "/data"
            volumes:
              - name: nas-pvc
                persistentVolumeClaim:
                  claimName: nas-csi-pvc
      • mountPath: the path where the NAS file system is mounted in the container.

      • claimName: the name of the PVC that the application uses to mount the NAS file system. In this example, the value is set to nas-csi-pvc.

    2. Create a file named nginx-2.yaml and copy the following content to the file:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deployment-nas-2
        labels:
          app: nginx-2
      spec:
        selector:
          matchLabels:
            app: nginx-2
        template:
          metadata:
            labels:
              app: nginx-2
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
                - name: nas-pvc
                  mountPath: "/data"
            volumes:
              - name: nas-pvc
                persistentVolumeClaim:
                  claimName: nas-csi-pvc
      • mountPath: the path where the NAS file system is mounted in the container. In this example, the value is set to /data.

      • claimName: the name of the PVC that the application uses. nginx-2 references the same PVC as nginx-1. In this example, the value is set to nas-csi-pvc.

    3. Run the following command to deploy the nginx-1 and nginx-2 applications:

      kubectl create -f nginx-1.yaml -f nginx-2.yaml
  5. Run the following command to query the pods that are created for the applications:

    kubectl get pod

    Expected output:

    NAME                                READY   STATUS    RESTARTS   AGE
    deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
    deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    Note

    The subdirectory 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/nas-79438493-f3e0-11e9-bbe5-00163e09**** of the NAS volume is mounted to the /data directory of pods deployment-nas-1-5b5cdb85f6-n**** and deployment-nas-2-c5bb4746c-4****. In this path:

    • /k8s: the subdirectory that is specified by the server parameter in the StorageClass.

    • nas-79438493-f3e0-11e9-bbe5-00163e09****: the name of the PV, which the provisioner uses as the name of the subdirectory that it creates.

    To mount different subdirectories of a NAS file system to different pods, you must create a separate PVC for each pod. To do this, you can create pvc-1 for nginx-1 and create pvc-2 for nginx-2.
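    The following sketch shows that approach. The PVC names pvc-1 and pvc-2 are illustrative; each PVC causes the provisioner to create a separate subdirectory, and you would reference pvc-1 in the nginx-1 Deployment and pvc-2 in the nginx-2 Deployment.

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: pvc-1    # Used by nginx-1. The provisioner creates a dedicated subdirectory for this PVC.
      spec:
        accessModes:
        - ReadWriteMany
        storageClassName: alicloud-nas-subpath
        resources:
          requests:
            storage: 20Gi
      ---
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: pvc-2    # Used by nginx-2. Mapped to a different subdirectory of the same NAS file system.
      spec:
        accessModes:
        - ReadWriteMany
        storageClassName: alicloud-nas-subpath
        resources:
          requests:
            storage: 20Gi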

Mount a dynamically provisioned NAS volume in filesystem mode by using kubectl

Important

By default, if you delete a PV that is mounted in filesystem mode, the system retains the related NAS file system and mount target. To delete the NAS file system and mount target together with the PV, set reclaimPolicy to Delete and set deleteVolume to true in the StorageClass configurations.

The filesystem mode is applicable to scenarios where you want to dynamically create and delete NAS file systems and mount targets.

In filesystem mode, the CSI driver creates one NAS file system and one mount target for each PV. The following procedure shows how to mount a dynamically provisioned NAS volume in filesystem mode.

  1. Optional: Configure a Resource Access Management (RAM) policy and attach it to the RAM role assigned to your cluster.

    If you use an ACK dedicated cluster, you must perform this step. If you use an ACK managed cluster, you can skip this step.

    The filesystem mode allows you to dynamically create and delete NAS file systems and mount targets. To perform these operations in an ACK dedicated cluster, you must grant the required permissions to csi-provisioner. The following code block shows a RAM policy that contains the required permissions:

    {
        "Action": [
            "nas:DescribeMountTargets",
            "nas:CreateMountTarget",
            "nas:DeleteFileSystem",
            "nas:DeleteMountTarget",
            "nas:CreateFileSystem"
        ],
        "Resource": [
            "*"
        ],
        "Effect": "Allow"
    }

    You can grant the permissions by using one of the following methods:

    • Attach the preceding RAM policy to the master RAM role of your ACK dedicated cluster. For more information, see ACK default roles and Custom authorization.

    • Create a RAM user and attach the preceding RAM policy to the RAM user. Then, generate an AccessKey pair and specify it in the env variables in the csi-provisioner configuration. For more information, see ACK default roles.

      env:
      - name: CSI_ENDPOINT
        value: unix://socketDir/csi.sock
      - name: ACCESS_KEY_ID
        value: ""
      - name: ACCESS_KEY_SECRET
        value: ""
  2. Create a StorageClass.

    1. Create a file named alicloud-nas-fs.yaml and copy the following content to the file:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: alicloud-nas-fs
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
      parameters:
        volumeAs: filesystem
        fileSystemType: standard
        storageType: Performance
        regionId: cn-beijing
        zoneId: cn-beijing-e
        vpcId: "vpc-2ze2fxn6popm8c2mzm****"
        vSwitchId: "vsw-2zwdg25a2b4y5juy****"
        accessGroupName: DEFAULT_VPC_GROUP_NAME
        deleteVolume: "false"
      provisioner: nasplugin.csi.alibabacloud.com
      reclaimPolicy: Retain

      • volumeAs: The mount mode of the NAS file system. Valid values:

        • filesystem: The provisioner automatically creates a NAS file system. Each PV corresponds to a NAS file system.

        • subpath: The provisioner automatically creates a subdirectory in a NAS file system. Each PV corresponds to a subdirectory of the NAS file system.

      • fileSystemType: The type of the NAS file system. Valid values:

        • standard: General-purpose NAS file system

        • extreme: Extreme NAS file system

        Default value: standard.

      • storageType: The storage type of the NAS file system.

        • If fileSystemType is set to standard, the valid values are Performance and Capacity. Default value: Performance.

        • If fileSystemType is set to extreme, the valid values are standard and advance. Default value: standard.

      • regionId: The ID of the region to which the NAS file system belongs.

      • zoneId: The ID of the zone to which the NAS file system belongs.

      • vpcId: The ID of the VPC to which the mount target of the NAS file system belongs.

      • vSwitchId: The ID of the vSwitch to which the mount target of the NAS file system belongs.

      • accessGroupName: The permission group to which the mount target of the NAS file system belongs. Default value: DEFAULT_VPC_GROUP_NAME.

      • deleteVolume: The reclaim policy of the NAS file system when the related PV is deleted. NAS is a shared storage service. Therefore, you must specify both deleteVolume and reclaimPolicy to ensure data security.

      • provisioner: The type of the storage driver that is used to provision the volume. In this example, the parameter is set to nasplugin.csi.alibabacloud.com. This indicates that the CSI plug-in provided by Alibaba Cloud is used.

      • reclaimPolicy: The reclaim policy of the PV. When you delete a PVC, the related NAS file system is automatically deleted only if you set deleteVolume to true and reclaimPolicy to Delete.

    2. Run the following command to create a StorageClass:

      kubectl create -f alicloud-nas-fs.yaml
  3. Create a PVC and pods to mount a NAS volume.

    1. Create a file named pvc.yaml and copy the following content to the file:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: nas-csi-pvc-fs
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: alicloud-nas-fs
        resources:
          requests:
            storage: 20Gi
    2. Create a file named nginx.yaml and copy the following content to the file:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deployment-nas-fs
        labels:
          app: nginx
      spec:
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
                - name: nas-pvc
                  mountPath: "/data"
            volumes:
              - name: nas-pvc
                persistentVolumeClaim:
                  claimName: nas-csi-pvc-fs
    3. Run the following command to create the PVC and pods:

      kubectl create -f pvc.yaml -f nginx.yaml

In filesystem mode, the CSI driver automatically creates a NAS file system and a mount target when you create the PVC. When the PVC is deleted, the file system and the mount target are retained or deleted based on the settings of the deleteVolume and reclaimPolicy parameters.
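To confirm that the file system was dynamically provisioned, you can query the PVC and the generated PV. The PV name is generated by the provisioner and differs from cluster to cluster:

  kubectl get pvc nas-csi-pvc-fs
  kubectl get pv

The PVC should be displayed with the Bound status, and the associated PV should reference the alicloud-nas-fs StorageClass.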

Verify that the NAS file system can be used to persist data

NAS provides persistent storage. When a pod is deleted and recreated, the new pod can still access the data that was written by the deleted pod.

Perform the following steps to verify that the NAS file system can be used to persist data:

  1. Query the pods that are created for the application and the files in the mounted NAS file system.

    1. Run the following command to query the pods that are created for the application:

      kubectl get pod 

      Expected output:

      NAME                                READY   STATUS    RESTARTS   AGE
      deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
      deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    2. Run the following command to query files in the /data path of a pod. In this example, the pod deployment-nas-1-5b5cdb85f6-n**** is used.

      kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data

      No output is returned. This indicates that no file exists in the /data path.

  2. Run the following command to create a file named nas in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:

    kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- touch /data/nas
  3. Run the following command to query files in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:

    kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data

    Expected output:

    nas
  4. Run the following command to delete the pod:

    kubectl delete pod deployment-nas-1-5b5cdb85f6-n****
  5. Open another CLI and run the following command to view how the pod is deleted and recreated:

    kubectl get pod -w -l app=nginx
  6. Verify that the file still exists after the pod is deleted.

    1. Run the following command to query the name of the recreated pod:

      kubectl get pod

      Expected output:

      NAME                                READY   STATUS    RESTARTS   AGE
      deployment-nas-1-5b5cdm2g5-m****    1/1     Running   0          32s
      deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    2. Run the following command to query files in the /data path of the pod deployment-nas-1-5b5cdm2g5-m****:

      kubectl exec deployment-nas-1-5b5cdm2g5-m**** -- ls /data

      Expected output:

      nas

      The nas file still exists in the /data path. This indicates that data is persisted in the NAS file system.

Verify that data in the NAS file system can be shared across pods

You can mount a NAS volume to multiple pods. When the data is modified in one pod, the modifications are automatically synchronized to other pods.

Perform the following steps to verify that data in the NAS file system can be shared across pods:

  1. Query the pods that are created for the application and the files in the mounted NAS file system.

    1. Run the following command to query the pods that are created for the application:

      kubectl get pod 

      Expected output:

      NAME                                READY   STATUS    RESTARTS   AGE
      deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
      deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    2. Run the following command to query files in the /data path of each pod:

      kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data
      kubectl exec deployment-nas-2-c5bb4746c-4**** -- ls /data
  2. Run the following command to create a file named nas in the /data path of a pod:

     kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- touch /data/nas
  3. Run the following command to query files in the /data path of each pod:

    1. Run the following command to query files in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:

      kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data

      Expected output:

      nas
    2. Run the following command to query files in the /data path of the pod deployment-nas-2-c5bb4746c-4****:

      kubectl exec deployment-nas-2-c5bb4746c-4**** -- ls /data

      Expected output:

      nas

      After you create a file in the /data path of one pod, you can also find the file in the /data path of the other pod. This indicates that data in the NAS file system is shared by the two pods.

Enable user isolation or user group isolation

  1. Use the following YAML template to create an application. The containers of the application start processes and create directories as the nobody user. The user identifier (UID) and group identifier (GID) of the nobody user are 65534.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nas-sts
    spec:
      selector:
        matchLabels:
          app: busybox
      serviceName: "busybox"
      replicas: 1
      template:
        metadata:
          labels:
            app: busybox
        spec:
          securityContext:
            fsGroup: 65534    # The containers create directories as the nobody user. The UID and GID of the nobody user are 65534. 
            fsGroupChangePolicy: "OnRootMismatch"    # Permissions and ownership are changed only if the permissions and the ownership of the root directory do not meet the requirements of the volume. 
          containers:
          - name: busybox
            image: busybox
            command:
            - sleep
            - "3600"
            securityContext:
              runAsUser: 65534    # All processes in the containers run as the nobody user (UID 65534). 
              runAsGroup: 65534   # All processes in the containers run as the nobody user (GID 65534). 
              allowPrivilegeEscalation: false
            volumeMounts:
            - name: nas-pvc
              mountPath: /data
      volumeClaimTemplates:
      - metadata:
          name: nas-pvc
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "alicloud-nas-subpath"
          resources:
            requests:
              storage: 100Gi
  2. Run the following top command in a container to check whether the command is run as the nobody user:

    kubectl exec nas-sts-0 -- top

    Expected output:

    Mem: 11538180K used, 52037796K free, 5052K shrd, 253696K buff, 8865272K cached
    CPU:  0.1% usr  0.1% sys  0.0% nic 99.7% idle  0.0% io  0.0% irq  0.0% sirq
    Load average: 0.76 0.60 0.58 1/1458 54
      PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
       49     0 nobody   R     1328  0.0   9  0.0 top
        1     0 nobody   S     1316  0.0  10  0.0 sleep 3600

    The output shows that the top command is run as the nobody user.

  3. Run the following command to check whether the directories and files in the mount directory of the NAS file system are created as the nobody user:

    kubectl exec nas-sts-0 -- sh -c "touch /data/test; mkdir /data/test-dir; ls -arlth /data/"

    Expected output:

    total 5K
    drwxr-xr-x    1 root     root        4.0K Aug 30 10:14 ..
    drwxr-sr-x    2 nobody   nobody      4.0K Aug 30 10:14 test-dir
    -rw-r--r--    1 nobody   nobody         0 Aug 30 10:14 test
    drwxrwsrwx    3 root     nobody      4.0K Aug 30 10:14 .

    The output shows that the nobody user is used to create the test file and the test-dir directory in the /data directory.