
Container Service for Kubernetes: Recommended storage settings for cross-zone deployment

Last Updated: Feb 02, 2024

You can optimize storage settings for cross-zone deployment to greatly reduce application release interruptions and ensure that business-critical systems and applications can run as expected even if failures occur. This topic describes the recommended storage settings for cross-zone deployment.

Background information

Kubernetes provides powerful container orchestration capabilities that make it easy to run large-scale stateful applications. Kubernetes greatly simplifies the distribution and deployment of applications, and also hides the underlying hardware logic from users. However, this abstraction can cause the following issues.

  • Your application in a cross-zone cluster is accidentally deployed in Zone B instead of Zone A, which is the desired zone.

  • When you create a persistent volume (PV) and persistent volume claim (PVC) to mount a disk, the InvalidDataDiskCatagory.NotSupported message is displayed. For more information, see Why does the system prompt InvalidDataDiskCatagory.NotSupported when I create a dynamically provisioned PV?

  • When you mount a disk to an application, the The instanceType of the specified instance does not support this disk category message is displayed.

  • When you debug an application, the 0/x nodes are available: x node(s) had volume node affinity conflict message is displayed.

The preceding issues can interrupt application releases. To reduce these issues, you can use the recommended storage settings for cross-zone deployment provided in this topic.

Recommended storage settings

  • Use disks instead of Apsara File Storage NAS (NAS) file systems to persist data. Disks are more stable than NAS file systems and provide higher bandwidth for data transfer.

  • Deploy your cluster across three zones to ensure sufficient node and storage resources.

  • Add nodes when no nodes are available in the zones of the cluster.

  • Use multiple types of disks to avoid mount failures.

  • Make sure that your application can be evenly distributed to the nodes in different zones.


Recommended node pool settings

  • Make sure that each node pool is associated with only one zone, and create a node pool for each newly added zone. For more information, see Create a node pool.

    Important

    Make sure that each node pool is associated with a different zone. We recommend that you specify zone IDs in node pool names.

  • Enable auto scaling for the node pools. For more information, see Auto scaling of nodes.

  • Use the same type of Elastic Compute Service (ECS) instance across zones, or use ECS instances that support the same type of block storage.

  • Add taints to all nodes in a node pool so that pods of other applications are not scheduled to these nodes and do not adversely affect the current application.

Configuration description:

  • Associate each node pool with only one zone and enable auto scaling for the node pool.

    The system can automatically select a node in another zone for pod scheduling if no nodes are available in the current zone.

  • Use the same type of ECS instance across zones.

    ECS instance types and block storage categories are correlated. If some nodes in a zone do not support the block storage category claimed by the pod, the disk cannot be mounted and the pod fails to launch.
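To keep other workloads off the dedicated nodes, you can taint the nodes in a node pool as recommended above. The following is a minimal sketch; the node name, taint key app, and value mysql are example values and must match the tolerations declared by your application:

```yaml
# Example taint on a node. Pods without a matching toleration
# are not scheduled to this node.
apiVersion: v1
kind: Node
metadata:
  name: example-node        # hypothetical node name
spec:
  taints:
  - key: app                # must match the application's toleration key
    value: mysql            # example value
    effect: NoSchedule
```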

Recommended cluster settings

  • Make sure that the cluster version is 1.20 or later.

  • Make sure that the version of the Container Storage Interface (CSI) plug-in is 1.22 or later. For more information, see csi-provisioner.

  • To specify multiple types of disks for high availability, use the following StorageClass template:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-disk-topology-alltype
    parameters:
      type: cloud_essd,cloud_ssd,cloud_efficiency
    provisioner: diskplugin.csi.alibabacloud.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    allowedTopologies:
    - matchLabelExpressions:
      - key: topology.diskplugin.csi.alibabacloud.com/zone
        values:
        - cn-beijing-a
        - cn-beijing-b

Parameter description:

  • type: cloud_essd,cloud_ssd,cloud_efficiency:

    This parameter makes the CSI plug-in preferentially create an enhanced SSD (ESSD). If ESSDs are out of stock in the zone, the plug-in creates an SSD instead, and then an ultra disk (cloud_efficiency). This helps avoid application startup failures caused by insufficient disk resources.

  • volumeBindingMode: WaitForFirstConsumer:

    This parameter specifies a disk creation mode provided by Kubernetes. In this mode, the CSI plug-in creates a disk based on the StorageClass only after the pod is scheduled to a specific node. This way, disks can be created based on the information of the node to which the pod is scheduled.

  • allowedTopologies:

    This parameter limits the zones of the topology of the provisioned volumes. If the StorageClass uses the WaitForFirstConsumer mode, the scheduler schedules pods to the specified topology because disks can be created only in the specified topology by using the StorageClass.
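The following minimal PVC sketch uses the preceding StorageClass; the name disk-pvc is an example value. Because the StorageClass uses the WaitForFirstConsumer mode, the PVC remains in the Pending state until a pod that references it is scheduled to a node:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-pvc                       # example name
spec:
  accessModes: [ "ReadWriteOnce" ]     # disks can be mounted only to a single node
  storageClassName: alicloud-disk-topology-alltype
  resources:
    requests:
      storage: 40Gi
```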

Recommended application settings

The following code block shows a standard StatefulSet template. You can customize the template based on your business requirements.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app: mysql
        maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "mysql"
        volumeMounts:
        - name: disk-csi
          mountPath: /var/lib/mysql
      tolerations:
      - key: "app"
        operator: "Exists"
        effect: "NoSchedule"
  volumeClaimTemplates:
  - metadata:
      name: disk-csi
    spec:
      accessModes: [ "ReadWriteOnce" ] # disks can be mounted only to a single node
      storageClassName: alicloud-disk-topology-alltype
      resources:
        requests:
          storage: 40Gi

Parameter description:

  • topologySpreadConstraints:

    This parameter ensures that the pods created by the application are scheduled to different zones to implement high availability. For more information, see Topology Spread Constraints.

  • volumeClaimTemplates:

    You can use this parameter to automatically create a disk for each replicated pod. This helps quickly scale out storage resources.
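If you require strict zone balancing instead of best-effort spreading, you can set whenUnsatisfiable to DoNotSchedule in the template above. The following sketch shows the stricter variant; with this setting, pods stay in the Pending state when the constraint cannot be met:

```yaml
topologySpreadConstraints:
- labelSelector:
    matchLabels:
      app: mysql
  maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule   # stricter than ScheduleAnyway
```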

Important
When a PV is dynamically provisioned, the YAML file of the PV contains information about the zones of the nodes to which the PV is mounted. The PV and the associated PVC can be scheduled only within the zones of the nodes. This ensures that the PV can be mounted to pods.
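For illustration, the following excerpt shows what the node affinity of a dynamically provisioned PV may look like. The PV name, disk ID, and zone are hypothetical example values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: d-2ze0example                 # hypothetical disk ID
spec:
  capacity:
    storage: 40Gi
  accessModes: [ "ReadWriteOnce" ]
  csi:
    driver: diskplugin.csi.alibabacloud.com
    volumeHandle: d-2ze0example       # hypothetical disk ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.diskplugin.csi.alibabacloud.com/zone
          operator: In
          values:
          - cn-beijing-a              # example zone
```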

References