
Container Service for Kubernetes:Use a StatefulSet to create a stateful application

Last Updated:Jul 08, 2024

Stateful workloads can save data or states while they are running. You can quickly create stateful applications by using StatefulSets in the Container Service for Kubernetes (ACK) console. This topic describes how to create a stateful NGINX application and how to verify the data persistence feature of stateful applications.

Prerequisites

Before you create a StatefulSet from an image, make sure that an ACK cluster is created and that you can log on to the ACK console.

Background information

StatefulSets provide the following features:

Feature

Description

Pod consistency

Pod consistency ensures that pods are started and terminated in the specified order and ensures network consistency. Pod consistency is determined by pod configurations, regardless of the node to which a pod is scheduled.

Stable and persistent storage

VolumeClaimTemplate allows you to mount a persistent volume (PV) to each pod. The PVs that are mounted to the pods are not deleted when you delete the pods or scale in the StatefulSet.

Stable network identifiers

Each pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the pod. The hostname follows the pattern {StatefulSet name}-{ordinal}. For example, a StatefulSet named web with two replicas creates pods named web-0 and web-1.

Stable orders

For a StatefulSet with N replicated pods, each pod is assigned a unique integer ordinal from 0 to N-1.
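The features in the preceding table map directly onto fields of a StatefulSet manifest. The following is a minimal sketch; the names web, nginx-headless, and the storage size are illustrative values, not settings from the console:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                     # pods are named web-0, web-1, ... (stable identifiers)
spec:
  serviceName: nginx-headless   # headless Service that provides stable network identities
  replicas: 2                   # pods start in order 0..N-1 and terminate in reverse order
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
  volumeClaimTemplates:         # one PVC per pod; PVs survive pod deletion and scale-in
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
```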

Use a StatefulSet to create a stateful application

Step 1: Configure basic settings

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Workloads > StatefulSets.

  3. In the upper-right corner of the StatefulSets page, click Create from Image.

  4. On the Basic Information wizard page, configure the basic settings. Set the Type parameter to StatefulSet.

    Parameter

    Description

    Name

    The name of the application.

    Replicas

    The number of pods that you want to provision for the application. Default value: 2.

    Type

    The type of the resource object that you want to create. Valid values: Deployment, StatefulSet, Job, CronJob, and DaemonSet.

    Label

    The label to be added to the application, which identifies the application.

    Annotations

    The annotations to be added to the application.

    Synchronize Timezone

    Specifies whether to synchronize the time zone between nodes and containers.

  5. Click Next to proceed to the Container wizard page.

Step 2: Configure containers

On the Container wizard page, configure images, resources, ports, environment variables, health check, lifecycle, volumes, and logs for containers.

Note

To add containers, click Add Container to the right of Container1.

  1. In the General section, configure the basic settings of containers.

    Parameter

    Description

    Image Name

    • Select images

      You can click Select images to select an image. The following types of images are supported:

      • Container Registry Enterprise Edition: Select an image stored in a Container Registry Enterprise Edition instance. You must select the region and the Container Registry instance to which the image belongs. For more information about Container Registry, see What is Container Registry?

      • Container Registry Personal Edition: Select an image stored in a Container Registry Personal Edition instance. You must select the region and the Container Registry instance to which the image belongs.

      • Artifact Center: The artifact center contains base operating system images, base language images, and AI- and big data-related images for application containerization. In this example, an NGINX image is selected. For more information, see Overview of the artifact center.

        Note

        The artifact center of Container Registry provides base images that are updated and patched by Alibaba Cloud or OpenAnolis. If you have other requirements or questions, join the DingTalk group (ID 33605007047) to request technical support.

      You can also enter the address of an image that is stored in a private registry. The image address must be specified in the following format: domainname/namespace/imagename:tag.

    • Image Pull Policy

      The policy for pulling images. Valid values:

      • IfNotPresent: If the image already exists on the node, the local image is used. Otherwise, ACK pulls the image from the image registry.

      • Always: ACK pulls the image from Container Registry each time the application is deployed or expanded.

      • Never: ACK uses only the images that already exist on the node.

      Note

      If you do not set an image pull policy, the default policy is used: Always for images with the latest tag, and IfNotPresent for other images.

    • Set Image Pull Secret

      You can click Set Image Pull Secret to set a Secret for pulling images from a private registry.

    Resource Limit

    The maximum amount of CPU, memory, and ephemeral storage resources that the application can use. The resource limits prevent the application from occupying an excessive amount of resources. For more information about how to configure resource limits, see Resource profiling.

    Required Resources

    The amount of CPU, memory, and ephemeral storage resources that are reserved for the application. These resources are dedicated to the pods of the application and cannot be preempted by other applications. For more information about how to reserve resources, see Resource profiling.

    Container Start Parameter

    • stdin: specifies that start parameters are sent to the container as standard input (stdin).

    • tty: specifies that start parameters defined in a virtual terminal are sent to the container.

    The two options are usually used together. In this case, the virtual terminal (tty) is associated with the stdin of the container. For example, an interactive program receives the stdin from the user and displays the content in the terminal.

    Privileged Container

    • If you select Privileged Container, privileged=true is set for the container and the privilege mode is enabled.

    • If you do not select Privileged Container, privileged=false is set for the container and the privilege mode is disabled.

    Init Containers

    Select this option to create an init container.

    Init containers can be used to block or postpone the startup of application containers. The application containers in a pod start concurrently only after all init containers have run to completion. For example, you can use init containers to verify the availability of a service on which the application depends. You can run tools or scripts that are not provided by the application image in init containers to initialize the runtime environment for application containers. For example, you can run tools or scripts to configure kernel parameters or generate configuration files. For more information, see Init containers.
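The console settings in this section correspond to fields in the pod template of the StatefulSet. The following is a hedged sketch; the registry address, Secret name, and resource values are examples, not required values:

```yaml
spec:
  imagePullSecrets:
  - name: my-registry-secret        # Set Image Pull Secret (example Secret name)
  initContainers:
  - name: wait-for-db               # blocks app containers until a dependency is reachable
    image: busybox
    command: ["sh", "-c", "until nslookup my-db; do sleep 2; done"]
  containers:
  - name: nginx
    image: registry.example.com/ns/nginx:1.25
    imagePullPolicy: IfNotPresent   # Image Pull Policy
    stdin: true                     # Container Start Parameter: stdin
    tty: true                       # Container Start Parameter: tty
    securityContext:
      privileged: false             # Privileged Container not selected
    resources:
      requests:                     # Required Resources
        cpu: 250m
        memory: 512Mi
        ephemeral-storage: 1Gi
      limits:                       # Resource Limit
        cpu: 500m
        memory: 1Gi
        ephemeral-storage: 2Gi
    ports:                          # Ports section
    - name: http
      containerPort: 80
      protocol: TCP
```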

  2. Optional: In the Ports section, click Add to add container ports.

    Parameter

    Description

    Name

    The name of the container port.

    Container Port

    The number or name of the container port that you want to expose. Valid values of the port number: 1 to 65535.

    Protocol

    Valid values: TCP and UDP.

  3. Optional: In the Environments section, click Add to add environment variables.

    You can configure environment variables in key-value pairs. Environment variables are used to apply pod configurations to containers. For more information, see Pod variables.

    Parameter

    Description

    Type

    The type of the environment variable that you want to add. Valid values:

    • Custom

    • ConfigMaps

    • Secrets

    • Value/ValueFrom

    • ResourceFieldRef

    If you select ConfigMaps or Secrets, you can pass all data in the selected ConfigMap or Secret to the container as environment variables. In this example, Secrets is selected: select Secrets from the Type drop-down list and select a Secret from the Value/ValueFrom drop-down list. By default, all data in the selected Secret is passed to environment variables.

    In this case, the YAML file that is used to deploy the application contains the settings that reference all data in the selected Secret.

    If you select ResourceFieldRef, the resourceFieldRef field is specified to reference resource values from the pod specification and pass them to the container as environment variables.

    Variable Key

    The name of the environment variable.

    Value/ValueFrom

    The value of the environment variable.
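The environment variable types above can be sketched in the pod template YAML as follows; the names my-secret, LOG_LEVEL, and CPU_LIMIT are illustrative:

```yaml
containers:
- name: nginx
  image: nginx:1.25
  envFrom:
  - secretRef:
      name: my-secret            # Secrets type: passes all key-value pairs in the Secret
  env:
  - name: LOG_LEVEL              # Custom type: a literal key-value pair
    value: info
  - name: CPU_LIMIT              # ResourceFieldRef type: reads a value from the pod spec
    valueFrom:
      resourceFieldRef:
        containerName: nginx
        resource: limits.cpu
```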

  4. Optional: In the Health Check section, enable Liveness, Readiness, and Startup on demand.

    For more information, see Configure Liveness, Readiness and Startup Probes.

    Parameter

    Request type

    Description

    • Liveness: Liveness probes are used to determine when to restart a container.

    • Readiness: Readiness probes are used to determine whether a container is ready to receive traffic.

    • Startup: Startup probes are used to determine whether the application in a container has started.

      Note

      Startup probes are supported only in Kubernetes 1.18 and later.

    HTTP

    Sends an HTTP GET request to the container. You can configure the following parameters:

    • Protocol: the protocol over which the request is sent. Valid values: HTTP and HTTPS.

    • Path: the requested HTTP path on the server.

    • Port: the number or name of the container port that you want to expose. Valid values of the port number: 1 to 65535.

    • HTTP Header: the custom headers in the HTTP request. Duplicate headers are allowed. You can specify HTTP headers in key-value pairs.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the waiting time before the first probe is performed after the container is started. Default value: 3. Unit: seconds.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the interval at which probes are performed. Default value: 10. Minimum value: 1. Unit: seconds.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time period after which a probe times out. Default value: 1. Minimum value: 1. Unit: seconds.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

    TCP

    Performs a TCP check on the container. kubelet attempts to open a socket on the specified port. If the connection can be established, the container is considered healthy. Otherwise, the container is considered unhealthy. You can configure the following parameters:

    • Port: the number or name of the container port that you want to open. Valid values of the port number: 1 to 65535.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the waiting time before the first probe is performed after the container is started. Default value: 15. Unit: seconds.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the interval at which probes are performed. Default value: 10. Minimum value: 1. Unit: seconds.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time after which a probe times out. Default value: 1. Minimum value: 1. Unit: seconds.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

    Command

    Runs a probe command in the container to check the health status of the container. You can configure the following parameters:

    • Command: the probe command that is run to check the health status of the container.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the waiting time before the first probe is performed after the container is started. Default value: 5. Unit: seconds.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the interval at which probes are performed. Default value: 10. Minimum value: 1. Unit: seconds.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time after which a probe times out. Default value: 1. Minimum value: 1. Unit: seconds.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.
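The probe parameters described above correspond to standard Kubernetes probe fields. The following is a hedged sketch that combines the three probe types; the path /healthz and the header values are examples:

```yaml
containers:
- name: nginx
  image: nginx:1.25
  startupProbe:                  # requires Kubernetes 1.18 or later
    httpGet:
      path: /healthz
      port: 80
    failureThreshold: 30
    periodSeconds: 10
  livenessProbe:                 # HTTP request type
    httpGet:
      path: /healthz
      port: 80
      httpHeaders:
      - name: X-Custom-Header
        value: check
    initialDelaySeconds: 3       # Initial Delay (s)
    periodSeconds: 10            # Period (s)
    timeoutSeconds: 1            # Timeout (s)
    successThreshold: 1          # Healthy Threshold: must be 1 for liveness probes
    failureThreshold: 3          # Unhealthy Threshold
  readinessProbe:                # TCP request type
    tcpSocket:
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 10
```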

  5. Optional: In the Lifecycle section, set the lifecycle of containers.

    You can configure the following parameters for the lifecycle of the container: Start, Post Start, and Pre Stop. For more information, see Attach Handlers to Container Lifecycle Events.

    Parameter

    Description

    Start

    The command and parameters that are run when the container starts.

    Post Start

    The command that is run immediately after the container starts.

    Pre Stop

    The command that is run before the container is terminated.
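A hedged sketch of how these lifecycle settings look in the pod template YAML; the commands shown are examples for an NGINX container:

```yaml
containers:
- name: nginx
  image: nginx:1.25
  command: ["nginx"]                    # Start: the command run when the container starts
  args: ["-g", "daemon off;"]           # Start: the command parameters
  lifecycle:
    postStart:                          # Post Start: runs right after the container starts
      exec:
        command: ["sh", "-c", "echo started > /tmp/started"]
    preStop:                            # Pre Stop: runs before the container is terminated
      exec:
        command: ["nginx", "-s", "quit"]
```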

  6. Optional: In the Volume section, add local volumes or persistent volume claims (PVCs).

    Parameter

    Description

    Add Local Storage

    You can select HostPath, ConfigMap, Secret, or EmptyDir from the PV Type drop-down list. Then, configure the Mount Source and Container Path parameters to mount the volume to the container. For more information, see Volumes.

    Add PVC

    You can mount persistent volumes (PVs) by using persistent volume claims (PVCs). You must create a PVC before you can select the PVC to mount a PV. For more information, see Create a PVC.

    In this example, a disk volume is mounted to the /tmp path of containers.
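For a StatefulSet, the disk mount in this example can be expressed with a volumeClaimTemplates entry so that each pod receives its own PVC. This is a sketch under assumptions: the PVC name disk-pvc is illustrative, and alicloud-disk-ssd is one example of an ACK disk storage class:

```yaml
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: disk-pvc
          mountPath: /tmp                     # the container path used in this example
  volumeClaimTemplates:                       # one disk PVC is created per pod
  - metadata:
      name: disk-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: alicloud-disk-ssd     # example storage class (assumption)
      resources:
        requests:
          storage: 20Gi
```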

  7. Optional: In the Log section, configure log collection and add custom tags.

    Important

    Make sure that the Simple Log Service agent is installed in the cluster.

    Parameter

    Description

    Collection Configuration

    Logstore: creates a Logstore in Simple Log Service to store the collected logs.

    Log Path in Container: Specify stdout or a container path to collect log data.

    • Collect stdout files: If you specify stdout, the standard output of the container is collected.

    • Text Logs: The logs in the specified path of the container are collected. In this example, /var/log/nginx is specified as the path. Wildcard characters can be used in the path.

    Custom Tag

    You can also add custom tags. The tags are added to the log of the container when the log is collected. Custom tags help filter and analyze the collected logs.

  8. Click Next to go to the Advanced wizard page.

Step 3: Configure advanced settings

On the Advanced wizard page, configure the following settings: access control, scaling, scheduling, annotations, and labels.

  1. In the Access Control section, you can configure access control settings for exposing backend pods.

    Note

    You can configure the following access control settings based on your business requirements:

    • Internal applications: For applications that provide services within the cluster, you can create a ClusterIP or NodePort Service to enable internal communication.

    • External applications: For applications that are exposed to the Internet, you can configure access control by using one of the following methods:

      • Create a LoadBalancer Service. When you create a Service, set the Type parameter to Server Load Balancer. You can select or create a Server Load Balancer (SLB) instance for the Service and use the Service to expose your application to the Internet.

      • Create an Ingress and use it to expose your application to the Internet. For more information, see Ingress.

    You can also specify how backend pods are exposed to the Internet. In this example, a ClusterIP Service and an Ingress are created to expose the NGINX application to the Internet.

    • To create a Service, click Create on the right side of Services. In the Create dialog box, set the parameters.

      Parameter

      Description

      Name

      The name of the Service.

      Service Type

      The type of the Service. This parameter specifies how the Service is accessed. Valid values:

      Cluster IP

      A ClusterIP Service. This type of Service is exposed by using an internal IP address of the cluster and is accessible only within the cluster. This is the default type. If you select Headless Service, you can use other service discovery mechanisms instead of the ClusterIP-based service discovery and load balancing that Kubernetes provides by default.

      Server Load Balancer

      A LoadBalancer Service. This type of Service exposes applications in clusters by using a Classic Load Balancer (CLB) or a Network Load Balancer (NLB) instance. For more information, see What is SLB? Compared with the Node Port method, this method significantly improves the availability and performance of the exposed applications.

      Note

      The Create SLB Instance and Use Existing SLB Instance features are in canary release. You can submit a ticket to apply for using the features.

      Create CLB Instance

      If you select Create CLB Instance, you can set the access mode of the CLB instance to public access or internal access and the billing method of the CLB instance to pay-by-specification or pay-as-you-go. For more information, see Create and manage a CLB instance.

      Advanced Settings

      Name: the name of the CLB instance. The parameter is required only if you create a CLB instance.

      IP Version: the version of the IP address. Valid values: ipv4 and ipv6.

      Scheduling Algorithm: the scheduling algorithm. Valid values: Round Robin (RR) and Weighted Round Robin (WRR). Default value: Round Robin (RR). Round Robin (RR): Requests are distributed to backend servers in sequence. Weighted Round Robin (WRR): Backend servers with higher weights receive more requests than those with lower weights.

      Access Control: specifies whether to enable the access control feature for the listener. For more information, see Access control.

      Health Check: specifies whether to enable the health check feature. You can set the Health Check Protocol parameter to TCP or HTTP. After the health check feature is enabled, you can determine the service availability of backend servers by using the health check feature. For more information about how health checks work, see How CLB health checks work.

      Others: You can also use annotations to configure CLB instances. For more information, see Add annotations to the YAML file of a Service to configure CLB instances.

      Use Existing CLB Instance

      You can select an existing CLB instance from the drop-down list below Use Existing CLB Instance.

      Important

      You must take note of some limits and usage notes when you use an existing CLB instance. For more information, see the Usage notes section of the "Considerations for configuring a LoadBalancer Service" topic.

      Advanced Settings

      Scheduling Algorithm: the scheduling algorithm. Valid values: Round Robin (RR) and Weighted Round Robin (WRR). Default value: Round Robin (RR). Round Robin (RR): Requests are distributed to backend servers in sequence. Weighted Round Robin (WRR): Backend servers with higher weights receive more requests than those with lower weights.

      Access Control: specifies whether to enable the access control feature for the listener. For more information, see Access control.

      Health Check: specifies whether to enable the health check feature. You can set the Health Check Protocol parameter to TCP or HTTP. After the health check feature is enabled, you can determine the service availability of backend servers by using the health check feature. For more information about how health checks work, see How CLB health checks work.

      Others: You can also use annotations to configure CLB instances. For more information, see Add annotations to the YAML file of a Service to configure CLB instances.

      Create NLB Instance

      If you select Create NLB Instance, you can set the access mode of the NLB instance to public access or internal access. For more information, see Create and manage an NLB instance.

      Advanced Settings

      Name: the name of the NLB instance. This parameter is required only if you create an NLB instance.

      IP Version: the version of the IP address. Valid values: ipv4 and Dual-stack.

      Scheduling Algorithm: the scheduling algorithm. Valid values:

      • Round-Robin: Requests are forwarded to backend servers in sequence.

      • Weighted Round-Robin (default): Backend servers that have higher weights receive more requests than backend servers that have lower weights.

      • Source IP Hashing: specifies consistent hashing that is based on source IP addresses. Requests from the same source IP address are distributed to the same backend server.

      • Four-Element Hashing: specifies consistent hashing that is based on the following factors: source IP address, destination IP address, source port, and destination port. Requests that contain the same information based on the preceding factors are forwarded to the same backend server.

      • QUIC ID Hashing: specifies consistent hashing based on QUIC IDs. Requests with the same QUIC ID are forwarded to the same backend server.

      • Weighted Least Connections: Requests are forwarded based on the weights and number of connections to backend servers. If two backend servers have the same weight, the backend server that has fewer connections receives more requests.

      Health Check: specifies whether to enable the health check feature.

      • TCP: To perform TCP health checks, NLB sends SYN packets to a backend server to check whether the port of the backend server can receive requests. This is the default value.

        • Health Check Response Timeout: the amount of time to wait before you receive a response from a health check. If a backend server does not respond within the specified timeout period, the backend server is considered unhealthy.

        • Health Check Interval: the interval at which health checks are performed.

        • Healthy Threshold: the minimum number of times that a backend server must consecutively pass health checks before the backend server is considered healthy.

        • Unhealthy Threshold: the minimum number of times that a backend server must consecutively fail to pass health checks before the backend server is considered unhealthy.

      • HTTP: To perform HTTP health checks, NLB sends HEAD or GET requests to a backend server to check whether the backend server is healthy.

        • Domain Name: the domain name that is used for the health check.

          • Backend Server Internal IP: uses the private IP addresses of backend servers for health checks. This is the default value.

          • Custom Domain Name: Enter a domain name.

        • Path: the URL of the health check page.

        • Health Check Status Codes: the status code that is returned for a health check. Valid values: http_2xx, http_3xx, http_4xx, and http_5xx. Default value: http_2xx.

      Others: You can also use annotations to configure NLB instances. For more information, see Configure NLB instances by using annotations.

      Use Existing NLB Instance

      If you select Use Existing NLB Instance, you can select an existing NLB instance from the drop-down list below Use Existing NLB Instance.

      Important

      You must take note of some limits and usage notes when you use an existing NLB instance. For more information, see the Usage notes section of the "Considerations for configuring a LoadBalancer Service" topic.

      Advanced Settings

      Scheduling Algorithm: the scheduling algorithm. Valid values:

      • Round-Robin: Requests are forwarded to backend servers in sequence.

      • Weighted Round-Robin (default): Backend servers that have higher weights receive more requests than backend servers that have lower weights.

      • Source IP Hashing: specifies consistent hashing that is based on source IP addresses. Requests from the same source IP address are distributed to the same backend server.

      • Four-Element Hashing: specifies consistent hashing that is based on the following factors: source IP address, destination IP address, source port, and destination port. Requests that contain the same information based on the preceding factors are forwarded to the same backend server.

      • QUIC ID Hashing: specifies consistent hashing based on QUIC IDs. Requests with the same QUIC ID are forwarded to the same backend server.

      • Weighted Least Connections: Requests are forwarded based on the weights and number of connections to backend servers. If two backend servers have the same weight, the backend server that has fewer connections receives more requests.

      Health Check: specifies whether to enable the health check feature.

      • TCP: To perform TCP health checks, NLB sends SYN packets to a backend server to check whether the port of the backend server can receive requests. This is the default value.

        • Health Check Response Timeout: the amount of time to wait before you receive a response from a health check. If a backend server does not respond within the specified timeout period, the backend server is considered unhealthy.

        • Health Check Interval: the interval at which health checks are performed.

        • Healthy Threshold: the minimum number of times that a backend server must consecutively pass health checks before the backend server is considered healthy.

        • Unhealthy Threshold: the minimum number of times that a backend server must consecutively fail to pass health checks before the backend server is considered unhealthy.

      • HTTP: To perform HTTP health checks, NLB sends HEAD or GET requests to a backend server to check whether the backend server is healthy.

        • Domain Name: the domain name that is used for the health check.

          • Backend Server Internal IP: uses the private IP addresses of backend servers for health checks. This is the default value.

          • Custom Domain Name: Enter a domain name.

        • Path: the URL of the health check page.

        • Health Check Status Codes: the status code that is returned for a health check. Valid values: http_2xx, http_3xx, http_4xx, and http_5xx. Default value: http_2xx.

      Others: You can also use annotations to configure NLB instances. For more information, see Configure NLB instances by using annotations.

      Node Port

      A NodePort Service. This type of Service allows external users to access Services in a cluster by using the IP address and specified port of a node. You can access an address in the <NodeIP>:<NodePort> format to connect to the NodePort Service. However, you must manually complete the configurations for load balancing.

      External Traffic Policy

      The External Traffic Policy parameter is available only if you set the Service Type parameter to Node Port or Server Load Balancer. For more information about external traffic policies, see the Differences between external traffic policies section of the "Getting started" topic. Valid values:

      • Local: routes traffic only to pods on the node that receives the traffic.

      • Cluster: routes traffic to pods on any node in the cluster.

      Backend

      The backend application that you want to associate with the Service. If you do not select a backend application, no Endpoints objects are created. For more information, see Services without selectors.

      Port Mapping

      The Service port and container port. The Service port corresponds to the port field in the YAML file and the container port corresponds to the targetPort field in the YAML file. The container port must be the same as the port that is exposed in the backend pod.

      Annotations

      The annotations to be added to the Service to configure the SLB instance. For more information, see Add annotations to the YAML file of a Service to configure CLB instances and Configure NLB instances by using annotations.

      Important

      Do not reuse the SLB instance of the API server of the cluster. Otherwise, access to the cluster may become abnormal.

      Label

      The label to be added to the Service, which identifies the Service.
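The Service settings above can be sketched as a manifest. The names nginx-svc and the app: nginx selector are illustrative; the selector must match the labels of the application pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  type: ClusterIP              # Service Type: Cluster IP (accessible only within the cluster)
  selector:
    app: nginx                 # Backend: matches the labels of the application pods
  ports:
  - port: 80                   # Service port (the "port" field)
    targetPort: 80             # container port (the "targetPort" field)
    protocol: TCP
```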

    • To create an Ingress, click Create on the right side of Ingresses. In the Create dialog box, set the parameters.

      Parameter

      Description

      Name

      The name of the Ingress.

      Rules

      Ingress rules are used to enable access to specified Services in a cluster. For more information, see Create an NGINX Ingress.

      • Domain Name: Enter the domain name of the Ingress.

      • Path: Enter the Service URL. The default path is the root path /. In this example, the default path is used. Each path is associated with a backend Service. SLB forwards traffic to a backend Service only when inbound requests match the domain name and path.

      • Service: Select a Service and a Service port.

      • TLS Settings: Select this check box to enable TLS. For more information, see Advanced NGINX Ingress configurations.

      Canary Release

      Specifies whether to enable the canary release feature. We recommend that you select Open Source Solution.

      Ingress Class

      The class of the Ingress.

      Annotations

      You can add custom annotations or select existing annotations. For more information about Ingress annotations, see Annotations.

      Click Add Annotation to add annotations. You can add an unlimited number of annotations.

      Labels

      The labels that describe the characteristics of the Ingress.

      Click Add Label to add labels. You can add an unlimited number of labels.
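The Ingress settings above correspond to a manifest like the following sketch; the host demo.example.com, the class name nginx, and the Service name nginx-svc are assumed example values:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx      # Ingress Class
  rules:
  - host: demo.example.com     # Domain Name (example value)
    http:
      paths:
      - path: /                # Path: the default root path
        pathType: Prefix
        backend:
          service:
            name: nginx-svc    # the Service created for the application (example name)
            port:
              number: 80
```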

    In the Access Control section, you can view existing Services and Ingresses. You can also click Update or Delete to update or delete Services and Ingresses.

  2. Optional: In the Scaling section, enable HPA and CronHPA on demand to meet the requirements of different applications.

    • HPA can automatically scale the number of pods in an ACK cluster based on the CPU and memory usage metrics.

      Note

      To enable HPA, you must configure resources required by each container. Otherwise, HPA does not take effect.

      Parameter

      Description

      Metric

      Select CPU Usage or Memory Usage. The selected resource type must be the same as that specified in the Required Resources field.

      Condition

      The resource usage threshold. The Horizontal Pod Autoscaling (HPA) feature triggers scale-out events when the threshold is exceeded.

      Max. Replicas

      The maximum number of replicated pods to which the application can be scaled.

      Min. Replicas

      The minimum number of replicated pods that must run.

    • CronHPA can scale an ACK cluster at a scheduled time. Before you enable CronHPA, you must install ack-kubernetes-cronhpa-controller. For more information about CronHPA, see Create CronHPA jobs.
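    In YAML, the HPA settings above correspond to a HorizontalPodAutoscaler resource similar to the following sketch. The metric type and thresholds are examples; the target name assumes the StatefulSet created in this topic is named nginx:

    ```yaml
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: nginx-hpa                  # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: StatefulSet
        name: nginx                    # the workload to scale
      minReplicas: 2                   # Min. Replicas
      maxReplicas: 5                   # Max. Replicas
      metrics:
        - type: Resource
          resource:
            name: cpu                  # Metric: CPU Usage
            target:
              type: Utilization
              averageUtilization: 70   # Condition: scale out above 70% CPU usage
    ```

    Remember that each container in the pod template must declare resource requests for the utilization-based metric to take effect.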

  3. Optional: In the Scheduling section, you can set the following parameters: Update Method, Node Affinity, Pod Affinity, and Pod Anti Affinity. For more information, see Affinity and anti-affinity.

    Note

    Node affinity and pod affinity affect pod scheduling based on node labels and pod labels. You can add node labels and pod labels that are provided by Kubernetes to configure node affinity and pod affinity. You can also add custom labels to nodes and pods, and then configure node affinity and pod affinity based on these custom labels.

    Parameter

    Description

    Update Method

    Select Rolling Update or OnDelete. For more information, see StatefulSets.

    Node Affinity

    Set Node Affinity by selecting worker node labels as selectors.

    Node affinity supports required and preferred rules and various operators, such as In, NotIn, Exists, DoesNotExist, Gt, and Lt.

    • Required: Specify the rules that must be matched for pod scheduling. In the YAML file, these rules are defined by the requiredDuringSchedulingIgnoredDuringExecution affinity. These rules have the same effect as the NodeSelector parameter. In this example, pods can be scheduled only to nodes with the specified labels. You can create multiple required rules, and a pod can be scheduled if it matches at least one of them.

    • Preferred: Specify rules that are preferred but not required for pod scheduling. In the YAML file, these rules are defined by the preferredDuringSchedulingIgnoredDuringExecution affinity. The scheduler attempts to schedule a pod to a node that matches the preferred rules, but can still schedule the pod if no node matches them. You can set weights for preferred rules; if multiple nodes match the rules, the node with the highest total weight is preferred. You can create multiple preferred rules.

    Pod Affinity

    Pod affinity rules specify how pods are deployed relative to other pods in the same topology domain. For example, you can use pod affinity to deploy services that communicate with each other to the same topological domain, such as a host. This reduces the network latency between these services.

    Pod affinity enables you to specify the node to which pods can be scheduled based on the labels of the pods. Pod affinity supports required and preferred rules, and the following operators: In, NotIn, Exists, and DoesNotExist.

    • Required: Specify rules that must be matched for pod scheduling. In the YAML file, these rules are defined by the requiredDuringSchedulingIgnoredDuringExecution affinity. A node must match the required rules before pods can be scheduled to the node.

      • Namespace: Specify the namespace to apply the required rule. Pod affinity rules are defined based on the labels that are added to pods and therefore must be scoped to a namespace.

      • Topological Domain: Set the topologyKey. This specifies the key for the node label that the system uses to denote the topological domain. For example, if you set the parameter to kubernetes.io/hostname, topologies are determined by nodes. If you set the parameter to beta.kubernetes.io/os, topologies are determined by the operating systems of nodes.

      • Selector: Click the plus button on the right side of Selector to add pod labels.

      • View Applications: Click View Applications to view the applications in each namespace. You can add the labels of the selected application to the Pod Affinity configuration page.

      • Required rules: Specify labels of existing applications, the operator, and the label value. In this example, the required rule specifies that the application to be created is scheduled to a host that runs applications with the app:nginx label.

    • Preferred: Specify rules that are preferred but not required for pod scheduling. In the YAML file, these rules are defined by the preferredDuringSchedulingIgnoredDuringExecution affinity. If multiple nodes match the required rules, the scheduler attempts to schedule the pod to a node that also matches the preferred rules. You can set weights for preferred rules. The other parameters are the same as those of required rules.

      Note

      Weight: Set the weight of a preferred rule to a value from 1 to 100. The scheduler calculates the weight of each node that meets the preferred rule based on an algorithm, and then schedules the pod to the node with the highest weight.

    Pod Anti-affinity

    Pod anti-affinity rules specify that pods are not scheduled to topological domains where pods with matching labels are deployed. Pod anti-affinity rules apply to the following scenarios:

    • Schedule the pods of an application to different topological domains, such as multiple hosts. This allows you to enhance the stability of your service.

    • Grant a pod exclusive access to a node. This enables resource isolation and ensures that no other pods can share the resources of the specified node.

    • Schedule pods of an application to different hosts if the pods may interfere with each other.

    Note

    The parameters of pod anti-affinity rules are the same as those of pod affinity rules. Create rules based on the actual scenarios.

    Toleration

    Configure tolerations to allow pods to be scheduled to nodes with matching taints.

    Schedule to Virtual Nodes

    Specify whether to schedule pods to virtual nodes. Only ACK Pro clusters support this parameter. This parameter is unavailable if the cluster does not contain a virtual node. For more information about how to schedule pods to virtual nodes, see Configure resource allocation based on ECS instances and elastic container instances.
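    In YAML, the scheduling parameters above map to fields of the StatefulSet spec. The following fragment is a sketch showing a rolling update strategy, a required node affinity rule, a preferred pod anti-affinity rule that spreads pods across hosts, and a toleration; the node label key, taint key, and values are hypothetical:

    ```yaml
    spec:
      updateStrategy:
        type: RollingUpdate                       # Update Method
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: workload-type        # hypothetical node label key
                        operator: In
                        values: ["web"]
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  podAffinityTerm:
                    topologyKey: kubernetes.io/hostname   # spread pods across hosts
                    labelSelector:
                      matchLabels:
                        app: nginx
          tolerations:
            - key: dedicated                      # hypothetical taint key
              operator: Equal
              value: nginx
              effect: NoSchedule
    ```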

  4. Optional: In the Labels, Annotations section, click Add to add labels and annotations to pods.

    Parameter

    Description

    Pod Labels

    The label to be added to the pod, which identifies the pod.

    Pod Annotations

    The annotations to be added to the pod.

  5. Click Create.
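The console settings in the steps above roughly correspond to a StatefulSet manifest like the following sketch. The names, image tag, and storage class are assumptions for illustration; the volumeClaimTemplates field is what provides each pod with its own persistent volume:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: nginx            # headless Service that provides stable network identifiers
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25     # assumed image tag
          ports:
            - containerPort: 80
          volumeMounts:
            - name: disk
              mountPath: /tmp   # matches the persistence test later in this topic
  volumeClaimTemplates:         # one PVC is created per pod (nginx-0, nginx-1, ...)
    - metadata:
        name: disk
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: alicloud-disk-ssd   # assumed ACK disk storage class
        resources:
          requests:
            storage: 20Gi
```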

Step 4: View the deployed application

  1. After the application is created, you are redirected to the Complete wizard page. You can find the resource objects included in the application and click View Details to view application details.

  2. In the upper-left corner of the page, click the Back icon to go to the StatefulSets page. On the StatefulSets page, you can view the created application.

What to do next

View stateful application details

In the left-side navigation pane of the ACK console, click Clusters. Click the name of the cluster that you want to manage or click Details in the Actions column. Choose Workloads > StatefulSets. On the StatefulSets page, click the name of the application that you want to manage or click Details in the Actions column.

Note

On the StatefulSets page, click Label, configure the key and value parameters of the application label, and click OK to filter the applications in the list.

On the details page of the application, you can view the YAML file of the application. You can also edit, scale, redeploy, and refresh the application.

Operation

Description

Edit

On the details page of the application, click Edit in the upper-right corner of the page to modify the application configurations.

Scale

On the details page of the application, click Scale in the upper-right corner of the page to scale the application to the required number of pods.

In this example, the NGINX application that you created is used to test the scalability.

  1. Find the NGINX application that you created and click Scale.

  2. In the Scale dialog box, set Desired Number of Pods to 3 and click OK. After you scale out the application, all pods in the application are listed based on ordinal indexes in ascending order. If you scale in the application, pods are deleted based on ordinal indexes in descending order. This ensures that all pods follow a specific order.

  3. In the left-side navigation pane, choose Volumes > Persistent Volume Claims. Verify that new volumes are created for the newly added pods after the application is scaled out. However, after the pods of the application are scaled in, the PVs and persistent volume claims (PVCs) of the pods are not deleted.
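  The console Scale operation is equivalent to changing the replica count with kubectl. The following sketch assumes the StatefulSet created in this topic is named nginx and that the cluster is reachable from your kubeconfig context:

  ```shell
  kubectl scale statefulset nginx --replicas=3   # scale out; pods start in ascending ordinal order
  kubectl get pods -l app=nginx                  # pods are named nginx-0, nginx-1, nginx-2
  kubectl scale statefulset nginx --replicas=2   # scale in; pods are deleted in descending ordinal order
  kubectl get pvc                                # the PVC of the deleted pod is retained
  ```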

View in YAML

On the details page of the application, click View in YAML to update or download the YAML file. You can also click Save As to save the file with a different name.

Redeploy

On the details page of the application, click Redeploy in the upper-right corner of the page to redeploy the application.

Refresh

On the details page of the application, click Refresh in the upper-right corner of the page to refresh the details page.

Modify an existing stateful workload

On the StatefulSets page, click the name of the application that you want to manage or click More in the Actions column.

Operation

Description

View in YAML

View the YAML file of the application.

Redeploy

Redeploy the application.

Node Affinity

Configure the node affinity settings of the application. For more information, see Scheduling.

Toleration

Configure the toleration rules of the application. For more information, see Scheduling.

Logs

View the log of the application.

Delete

Delete the application.

Batch Redeploy

On the StatefulSets page, click Batch Redeploy to redeploy multiple applications.

Verify the data persistence feature of the stateful application

Log on to a master node and run the following commands to test persistent storage.

  1. Run the following command to create a temporary file in the disk:

    kubectl exec nginx-1 -- ls /tmp                 # Query files (including lost+found) in the /tmp directory. 
    kubectl exec nginx-1 -- touch /tmp/statefulset  # Create a file named statefulset. 
    kubectl exec nginx-1 -- ls /tmp

    Expected output:

    lost+found
    statefulset
  2. Run the following command to delete the pod to verify data persistence:

    kubectl delete pod nginx-1

    Expected output:

    pod "nginx-1" deleted
  3. After the system recreates and starts a new pod, query the files in the /tmp directory. The following result shows that the statefulset file still exists. This demonstrates the data persistence feature of the stateful application.

    kubectl exec nginx-1 -- ls /tmp   # Query files (including lost+found) in the /tmp directory to verify data persistence.

    Expected output:

    statefulset