
Container Service for Kubernetes: Create an ACK dedicated cluster

Last Updated: Jun 25, 2023

A Container Service for Kubernetes (ACK) dedicated cluster must contain at least three master nodes. This ensures high availability for the cluster and enables fine-grained management of the cluster infrastructure. You must manually create, maintain, and update ACK dedicated clusters. This topic describes how to create an ACK dedicated cluster in the ACK console.

Prerequisites

  • Resource Access Management (RAM) is activated in the RAM console.
  • Auto Scaling is activated in the Auto Scaling console.

Note

When you create a Container Service for Kubernetes (ACK) cluster, take note of the following limits:

  • ACK clusters support only virtual private clouds (VPCs).
  • By default, each account has specific quotas on cloud resources that can be created. You cannot create clusters if the quota is reached. For more information about the quotas, see Quotas.
    • By default, you can add up to 200 route entries to a VPC. This means that you can deploy up to 200 nodes in an ACK cluster that uses Flannel. This limit does not apply to ACK clusters that use Terway. To add more route entries to a VPC, apply for an increase on the quota of route entries in the VPC that you want to use.
    • By default, you can create at most 100 security groups with each account.
    • By default, you can create at most 60 pay-as-you-go SLB instances with each account.
    • By default, you can create at most 20 elastic IP addresses (EIPs) with each account.
  • Limits on Elastic Compute Service (ECS) instances:

    The pay-as-you-go and subscription billing methods are supported.

    After an ECS instance is created, you can change its billing method from pay-as-you-go to subscription in the ECS console. For more information, see Change the billing method of an ECS instance from pay-as-you-go to subscription.

Background information

Important

ACK no longer allows you to create ACK dedicated clusters but still provides technical support for existing ACK dedicated clusters. We recommend that you use ACK managed clusters. For more information, see Create an ACK managed cluster.

For more information about the billing rules of ACK dedicated clusters, see Billing.

Procedure

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. In the upper-right corner of the Clusters page, click Create Kubernetes Cluster.
  4. Click the Dedicated Kubernetes tab and set the cluster parameters.

    1. Configure basic settings of the cluster.

      Parameter

      Description

      Cluster Name

      Enter a name for the cluster.
      Note The name must be 1 to 63 characters in length, and can contain digits, letters, hyphens (-), and underscores (_). The name cannot start with an underscore (_).

      Region

      Select a region to deploy the cluster.

      Billing Method

      Two billing methods are supported: Pay-As-You-Go and Subscription. If you select the subscription billing method, you must set the following parameters:
      Note If you set Billing Method to Subscription, only Elastic Compute Service (ECS) instances and Server Load Balancer (SLB) instances are billed on a subscription basis. Other cloud resources are billed on a pay-as-you-go basis. For more information about the billing rules of Alibaba Cloud resources, see Billing of cloud services.
      • Duration: You can select 1, 2, 3, or 6 months. If you require a longer duration, you can select 1 year, 2 years, or 3 years.
      • Auto Renewal: Specify whether to enable auto-renewal.

      All Resources

      Move the pointer over All Resources at the top of the page and select the resource group that you want to use. When you create the cluster, only the VPCs and vSwitches that belong to the selected resource group are displayed in the console.

      Kubernetes Version

      The Kubernetes versions supported by ACK are displayed.

      Container Runtime

      Specify the container runtime based on the Kubernetes version.
      • For Kubernetes versions earlier than 1.24, you can select containerd, Docker, or Sandboxed-Container.
      • For Kubernetes 1.24 and later versions, you can select containerd or Sandboxed-Container.
      For more information, see Comparison of Docker, containerd, and Sandboxed-Container.

      IPv6 Dual-stack

      If you enable IPv4/IPv6 dual stack, a dual-stack cluster is created. This feature is in public preview. To use this feature, go to Quota Center and submit an application.

      Note
      • Only Kubernetes 1.22 and later versions support this feature.

      • Master nodes and worker nodes still use IPv4 addresses to communicate with each other.

      • You must select Terway as the network plug-in.

      • You must use a VPC and Elastic Compute Service (ECS) instances that support IPv4/IPv6 dual stack.

      VPC

      Select a VPC to deploy the cluster. Standard VPCs and shared VPCs are supported.
      • Shared VPC: The owner of a VPC (resource owner) can share the vSwitches in the VPC with other accounts in the same organization.
      • Standard VPC: The owner of a VPC (resource owner) cannot share the vSwitches in the VPC with other accounts.
      Note ACK clusters support only VPCs. You can select a VPC from the drop-down list. If no VPC is available, click Create VPC to create one. For more information, see Create and manage a VPC.

      vSwitch

      Select vSwitches.

      You can select up to three vSwitches that are deployed in different zones. If no vSwitch is available, click Create vSwitch to create one. For more information, see Create and manage a vSwitch.

      Network Plug-in

      Select a network plug-in. Flannel and Terway are supported. For more information, see Terway and Flannel.
      • Flannel: a simple and stable Container Network Interface (CNI) plug-in developed by the open source Kubernetes community. Flannel provides only basic features and does not support standard Kubernetes network policies.
      • Terway: a network plug-in developed by Alibaba Cloud Container Service. Terway allows you to assign Alibaba Cloud Elastic Network Interfaces (ENIs) to containers. It also allows you to customize network policies of Kubernetes to control intercommunication among containers, and implement bandwidth throttling on individual containers.
        Note
        • The number of pods that can be deployed on a node depends on the number of ENIs that are attached to the node and the maximum number of secondary IP addresses that are provided by these ENIs.
        • If you select a shared VPC for a cluster, you must select Terway as the network plug-in.
        • If you select Terway, an ENI is shared among multiple pods. A secondary IP address of the ENI is assigned to each pod.
        When you set Network Plug-in to Terway, you can configure the following parameters:
        • Specify whether to enable the Assign One ENI to Each Pod feature. To use the Assign One ENI to Each Pod feature, you need to log on to the Quota Center console and submit an application.
          • If you select the check box, a separate ENI is assigned to each pod.
            Note After you select Assign One ENI to Each Pod, the maximum number of pods supported by a node is reduced. Exercise caution before you enable this feature.
          • If you clear the check box, an ENI is shared among multiple pods. A secondary IP address that is provided by the ENI is assigned to each pod.
        • Specify whether to use IPVLAN.
          • This option is available only when you clear Assign One ENI to Each Pod.
          • If you select IPVLAN, IPVLAN and extended Berkeley Packet Filter (eBPF) are used for network virtualization when an ENI is shared among multiple pods. This improves network performance. Only the Alibaba Cloud Linux operating system is supported.
          • If you clear IPVLAN, policy-based routes are used for network virtualization when an ENI is shared among multiple pods. The CentOS 7 and Alibaba Cloud Linux operating systems are supported. This is the default setting.

          For more information about the IPVLAN feature in Terway mode, see Terway IPVLAN.

        • Select or clear Support for NetworkPolicy.
          • The NetworkPolicy feature is available only when you clear Assign One ENI to Each Pod. By default, Assign One ENI to Each Pod is unselected.
          • If you select Support for NetworkPolicy, you can use Kubernetes network policies to control the communication among pods.
          • If you clear Support for NetworkPolicy, you cannot use Kubernetes network policies to control the communication among pods. This prevents Kubernetes network policies from overloading the Kubernetes API server.
        • Select or clear Support for ENI Trunking. To use the Support for ENI Trunking feature, you need to log on to the Quota Center console and submit an application. The Terway Trunk elastic network interface (ENI) feature allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod. This allows you to manage and isolate user traffic, configure network policies, and manage IP addresses in a fine-grained manner. For more information, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod.
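
      If Support for NetworkPolicy is selected, the cluster accepts standard Kubernetes NetworkPolicy objects. The following is a minimal sketch that uses the open source Kubernetes Python client (the kubernetes package); the namespace, policy name, and label selectors are illustrative and not part of ACK.

      from kubernetes import client, config

      # Load credentials from the cluster's kubeconfig file.
      config.load_kube_config()

      # Allow ingress to pods labeled app=web in the "demo" namespace only from
      # pods labeled role=frontend. All names and labels are examples.
      policy = client.V1NetworkPolicy(
          metadata=client.V1ObjectMeta(name="allow-frontend-only", namespace="demo"),
          spec=client.V1NetworkPolicySpec(
              pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
              policy_types=["Ingress"],
              ingress=[
                  client.V1NetworkPolicyIngressRule(
                      _from=[
                          client.V1NetworkPolicyPeer(
                              pod_selector=client.V1LabelSelector(
                                  match_labels={"role": "frontend"}
                              )
                          )
                      ]
                  )
              ],
          ),
      )

      client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)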

      Pod vSwitch

      If you select Terway as the network plug-in, you must select vSwitches to allocate IP addresses to pods. For each vSwitch that allocates IP addresses to worker nodes, you must select a vSwitch in the same zone to allocate IP addresses to pods.

      Number of Pods per Node

      If you set Network Plug-in to Flannel, you must set Number of Pods per Node.

      Pod CIDR Block

      If you select Flannel, you must set Pod CIDR Block.

      The pod CIDR block must not overlap with the CIDR block of the VPC, the CIDR blocks of the ACK clusters in the VPC, or the Service CIDR block. The pod CIDR block cannot be modified after it is specified. For more information about how to plan CIDR blocks for an ACK cluster, see Plan CIDR blocks for an ACK cluster.
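
      To estimate how far a pod CIDR block goes in a Flannel cluster, keep in mind that each node receives a per-node pod subnet; the sketch below assumes that the subnet is sized to the Number of Pods per Node value. The CIDR size and pods-per-node value are examples.

      import math

      pod_cidr_prefix = 16     # example pod CIDR: 172.20.0.0/16
      pods_per_node = 64       # example Number of Pods per Node

      # Smallest per-node subnet that holds pods_per_node addresses.
      per_node_prefix = 32 - math.ceil(math.log2(pods_per_node))

      max_node_subnets = 2 ** (per_node_prefix - pod_cidr_prefix)
      print(f"each node gets a /{per_node_prefix}; the pod CIDR covers {max_node_subnets} nodes")
      # each node gets a /26; the pod CIDR covers 1024 node subnets

      Remember that the default quota of 200 route entries per VPC (see the limits above) also caps the number of nodes in a Flannel cluster.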

      Service CIDR

      Set Service CIDR. The Service CIDR block must not overlap with the CIDR block of the VPC, the CIDR blocks of the ACK clusters in the VPC, or the pod CIDR block. The Service CIDR block cannot be modified after it is specified. For more information about how to plan CIDR blocks for an ACK cluster, see Plan CIDR blocks for an ACK cluster.

      IPv6 Service CIDR

      If you enable IPv4/IPv6 dual stack, you must specify an IPv6 CIDR block for Services. When you set this parameter, take note of the following items:
      • You must specify a unique local address (ULA) space within the address range fc00::/7. The prefix must be 112 bits to 120 bits in length.
      • We recommend that you specify an IPv6 CIDR block that has the same number of IP addresses as the Service CIDR block.
      For more information about how to plan CIDR blocks for an ACK cluster, see Plan CIDR blocks for an ACK cluster.
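
      Because none of these CIDR blocks can be changed after the cluster is created, it is worth checking a CIDR plan for overlaps beforehand. The following is a minimal sketch that uses the Python standard library; all CIDR values are illustrative.

      import ipaddress
      from itertools import combinations

      blocks = {
          "VPC":          ipaddress.ip_network("192.168.0.0/16"),
          "Pod CIDR":     ipaddress.ip_network("172.20.0.0/16"),
          "Service CIDR": ipaddress.ip_network("172.21.0.0/20"),
      }

      # The VPC, pod, and Service CIDR blocks must not overlap with one another.
      for (name_a, net_a), (name_b, net_b) in combinations(blocks.items(), 2):
          if net_a.overlaps(net_b):
              raise ValueError(f"{name_a} overlaps {name_b}: {net_a} vs {net_b}")

      # IPv6 Service CIDR: a ULA within fc00::/7 whose prefix is 112 to 120 bits.
      svc_v6 = ipaddress.ip_network("fd00:c0a8::/112")
      assert svc_v6.subnet_of(ipaddress.ip_network("fc00::/7"))
      assert 112 <= svc_v6.prefixlen <= 120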

      Configure SNAT

      By default, an ACK cluster cannot access the Internet. If the VPC that you select for the cluster cannot access the Internet, you can select Configure SNAT for VPC. This way, ACK will create a NAT gateway and configure SNAT rules to enable Internet access for the VPC.

      Access to API Server

      By default, an internal-facing SLB instance is created for the Kubernetes API server of the cluster. You can modify the specification of the SLB instance. For more information, see Instance specifications.
      Important If you delete the SLB instance, you cannot access the API server of the cluster.
      Select or clear Expose API Server with EIP. The ACK API server provides multiple HTTP-based RESTful APIs, which can be used to create, delete, modify, query, and monitor resources, such as pods and Services.
      • If you select this check box, an elastic IP address (EIP) is created and associated with an SLB instance. Port 6443 used by the API server is opened on master nodes. You can connect to and manage the cluster by using kubeconfig files over the Internet.
      • If you clear this check box, no EIP is created. You can connect to and manage the cluster by using kubeconfig files only from within the VPC.

      SSH Logon

      To enable SSH logon, you must first select Expose API Server with EIP.

      • If you enable SSH logon over the Internet, you can access the cluster by using SSH.

      • If you disable SSH logon over the Internet, you cannot access the cluster by using SSH or kubectl. If you want to access an ECS instance in the cluster by using SSH, you must manually associate an elastic IP address (EIP) with the ECS instance and configure a security group rule to open SSH port 22. For more information, see Use SSH to connect to the master nodes of an ACK dedicated cluster.

      Security Group

      You can select Create Basic Security Group, Create Advanced Security Group, or Select Existing Security Group. For more information about security groups, see Overview.
      Note
      • To enable the Select Existing Security Group option, apply to be added to the whitelist in Quota Center.
      • If you select an existing security group, the system does not automatically configure security group rules. This may cause errors when you access the nodes in the cluster. You must manually configure security group rules. For more information, see Configure security group rules to enforce access control on ACK clusters.
    2. Configure advanced settings of the cluster.

      Parameter

      Description

      Time Zone

      Select a time zone for the cluster. By default, the time zone of your browser is selected.

      Kube-proxy Mode

      iptables and IPVS are supported.
      • iptables is a mature and stable kube-proxy mode. It uses iptables rules to conduct service discovery and load balancing. The performance of this mode is restricted by the size of the Kubernetes cluster. This mode is suitable for Kubernetes clusters that manage a small number of Services.
      • IPVS is a high-performance kube-proxy mode. It uses Linux Virtual Server (LVS) to conduct service discovery and load balancing. This mode is suitable for clusters that manage a large number of Services. We recommend that you use this mode in scenarios where high-performance load balancing is required.

      Labels

      Add labels to nodes in the cluster. Enter a key and a value, and then click Add.
      Note
      • Key is required. Value is optional.
      • Keys are not case-sensitive. A key must be 1 to 64 characters in length and cannot start with aliyun, http://, or https://.
      • Values are not case-sensitive. A value cannot exceed 128 characters in length and cannot contain http:// or https://. A value can be empty.
      • The keys of labels that are added to the same resource must be unique. If you add a label with a used key, the label overwrites the label that uses the same key.
      • If you add more than 20 labels to a resource, all labels become invalid. You must remove excess labels for the remaining labels to take effect.
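
      The following helper is a sketch that mirrors the label rules above. It is an illustrative client-side check, not an ACK API.

      FORBIDDEN_KEY_PREFIXES = ("aliyun", "http://", "https://")

      def validate_node_labels(labels: dict) -> None:
          if len(labels) > 20:
              raise ValueError("more than 20 labels: all labels would become invalid")
          for key, value in labels.items():
              if not 1 <= len(key) <= 64:
                  raise ValueError(f"key {key!r} must be 1 to 64 characters")
              if key.lower().startswith(FORBIDDEN_KEY_PREFIXES):
                  raise ValueError(f"key {key!r} starts with a forbidden prefix")
              if len(value) > 128 or "http://" in value or "https://" in value:
                  raise ValueError(f"value {value!r} is invalid")

      validate_node_labels({"workload-type": "cpu", "team": ""})   # passes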

      Custom Image

      • You can select a custom ECS image. After you select a custom image, all nodes in the cluster are deployed by using this image. For more information about how to create a custom image, see Create a Kubernetes cluster by using a custom image.
      • You can select a shared ECS image. After you select a shared image, all nodes in the cluster are deployed by using this image. For more information about shared images, see Procedure.
      Note
      • Only custom images based on CentOS 7.x and Alibaba Cloud Linux are supported.
      • To use this feature, apply to be added to the whitelist in Quota Center.

      Cluster Domain

      Set the domain name of the cluster.
      Note The default domain name is cluster.local. You can enter a custom domain name. A domain name consists of two parts. Each part must be 1 to 63 characters in length and can contain only letters and digits. You cannot leave these parts empty.

      Custom Certificate SANs

      You can enter custom subject alternative names (SANs) for the API server certificate of the cluster to accept requests from specified IP addresses or domain names.

      Service Account Token Volume Projection

      ACK provides service account token volume projection to reduce security risks when pods use service accounts to access the Kubernetes API server. This feature enables kubelet to request and store the token on behalf of a pod. This feature also allows you to configure token properties, such as the audience and validity period. For more information, see Enable service account token volume projection.
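
      For reference, a projected service account token is modeled as follows in the open source Kubernetes Python client. This is a sketch: the audience, validity period, and mount path are illustrative and must match your workload.

      from kubernetes import client

      token_volume = client.V1Volume(
          name="sa-token",
          projected=client.V1ProjectedVolumeSource(
              sources=[
                  client.V1VolumeProjection(
                      service_account_token=client.V1ServiceAccountTokenProjection(
                          audience="https://kubernetes.default.svc",  # example audience
                          expiration_seconds=3600,                    # example validity period
                          path="token",
                      )
                  )
              ]
          ),
      )

      # Mount the projected token into the container that calls the API server.
      mount = client.V1VolumeMount(
          name="sa-token", mount_path="/var/run/secrets/tokens", read_only=True
      )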

      Cluster CA

      If you select this check box, upload a certificate authority (CA) certificate for the cluster to secure data transmission between the server and client.

      Deletion Protection

      Specify whether to enable deletion protection for the cluster. Deletion protection can prevent clusters from being accidentally released by using the console or API.

      Resource Group

      The resource group that owns the cluster to be created. Each resource can belong only to one resource group. You can regard a resource group as a project, an application, or an organization based on your business scenarios. For more information, see Resource groups.

  5. Click Next:Master Configurations to configure master nodes.

    Parameter

    Description

    Master Node Quantity

    Specify the number of master nodes that you want to deploy in the zones that you select.

    Instance Type

    Select an instance type for master nodes. For more information, see Instance families.

    System Disk

    By default, system disks are mounted to master nodes. Standard SSDs, enhanced SSDs (ESSDs), and ultra disks are supported.

    Note
    • You can select Enable Backup to back up disk data.

    • If you select ESSD Disk, you can set a custom performance level (PL) for the system disk.

      You can select higher PLs for ESSDs with larger storage capacities. For example, you can select PL 2 for an ESSD with a storage capacity of more than 460 GiB. You can select PL 3 for an ESSD with a storage capacity of more than 1,260 GiB. For more information, see Capacity and PLs.

  6. Click Next:Node Pool Configurations to configure a node pool.

    1. Set worker instances.

      • If you select Create Instance, you must set the parameters as described in the following table.

        Parameter

        Description

        Node Pool Name

        Specify a node pool name.

        Instance Type

        You can select multiple instance types. You can filter instance types by vCPU, memory, architecture, or category.
        Note

        The instance types that you select are displayed in the Selected Types section.

        When the node pool is scaled out, ECS instances of the instance types that you select for the Instance Type parameter are created. The scaling policy of the node pool determines which instance types are used to create new nodes during scale-out activities. Select multiple instance types to improve the success rate of node pool scale-out operations.

        Selected Types

        The selected instance types are displayed.

        Quantity

        Specify the number of worker nodes (ECS instances) to be created.

        System Disk

        Enhanced SSDs, standard SSDs, and ultra disks are supported. The types of system disks that you can select depend on the instance types that you select. For more information about the disk types supported by different instance types, see Overview of instance families. Disk types that are not displayed in the drop-down list are not supported by the instance types that you select.
        Note
        • If you select enhanced SSD as the system disk type, you can set a custom performance level for the system disk. You can select higher performance levels for enhanced SSDs with larger storage capacities. For example, you can select performance level 2 for an enhanced SSD with a storage capacity of more than 460 GiB. You can select performance level 3 for an enhanced SSD with a storage capacity of more than 1,260 GiB. For more information, see Capacity and PLs.
        • The Encrypt Disk option is available only for ESSDs. System disks support only the aes-256 encryption algorithm. For more information about system disk encryption, see Encrypt a system disk.

        Mount Data Disk

        ESSDs, standard SSDs, and ultra disks are supported. The disk types that you can select depend on the instance types that you select. For more information about the disk types supported by different instance types, see Overview of instance families. Disk types that are not displayed in the drop-down list are not supported by the instance types that you select.
        Note
        • If you select ESSD as the data disk type, you can set a custom performance level for the data disk. You can select higher performance levels for ESSDs with larger storage capacities. For example, you can select performance level 2 for an ESSD with a storage capacity of more than 460 GiB. You can select performance level 3 for an ESSD with a storage capacity of more than 1,260 GiB. For more information, see Capacity and PLs.
        • The Encrypt Disk option is available only for ESSDs. Data disks support only the aes-256 and sm4-128 encryption algorithms. The China (Nanjing - Local Region), China (Fuzhou - Local Region), Thailand (Bangkok), and South Korea (Seoul) regions support only the Default Service CMK for data disk encryption. The Bring Your Own Key (BYOK) feature is not supported by these regions. For more information about data disk encryption, see Encrypt a data disk.
        • The maximum number of data disks that can be mounted depends on the instance types that you select. You can view the selected data disks and the remaining number of data disks that you can mount on the right side of Mount Data Disk.

        Operating System

        ACK supports the following node operating systems:
        • Alibaba Cloud Linux 2.x. This is the default operating system.
          If you select Alibaba Cloud Linux 2.x, you can configure security reinforcement for the operating system:
          • Disable: disables security reinforcement for Alibaba Cloud Linux 2.x.
          • CIS Reinforcement: enables security reinforcement for Alibaba Cloud Linux 2.x. For more information about Center for Internet Security (CIS) reinforcement, see CIS reinforcement.
        • Alibaba Cloud Linux 3.x

          Most Alibaba Cloud Linux 3 images are supported by most instance families. However, some Alibaba Cloud Linux 3 images are supported only by specific instance families. For more information, see Release notes for Alibaba Cloud Linux 3.

        • CentOS 7.x
          Note CentOS 8.x and later are not supported.

        Logon Type

        • Key pair logon

          • Key Pair: Select an SSH key pair from the drop-down list.

          • create a key pair: Create an SSH key pair if none is available. For more information about how to create an SSH key pair, see Create an SSH key pair. After the key pair is created, set it as the credential that is used to log on to the cluster.

        • Password logon

          Password: Enter the password that is used to log on to the nodes.

          Confirm Password: Enter the password again.

          Note

          The password must be 8 to 30 characters in length, and must contain at least three of the following character types: uppercase letters, lowercase letters, digits, and special characters. The password cannot contain underscores (_).

      • If you select Add Existing Instance, you must select ECS instances that are deployed in the specified region. Then, set the Operating System, Logon Type, and Key Pair parameters based on the preceding description.
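
      If you use password logon, the password rules above can be checked with a small helper. This is only an illustrative sketch that treats ASCII punctuation other than the underscore as special characters.

      import string

      def is_valid_node_password(password: str) -> bool:
          if not 8 <= len(password) <= 30 or "_" in password:
              return False
          specials = string.punctuation.replace("_", "")
          classes = [
              any(c.isupper() for c in password),
              any(c.islower() for c in password),
              any(c.isdigit() for c in password),
              any(c in specials for c in password),
          ]
          return sum(classes) >= 3   # at least three of the four character types

      print(is_valid_node_password("Ack-Demo-2023"))   # True
      print(is_valid_node_password("weak_password"))   # False: contains an underscore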

    2. Configure advanced settings of worker nodes.

      Parameter

      Description

      Node Protection

      Specify whether to enable node protection.
      Note By default, this check box is selected. Node protection prevents nodes from being accidentally deleted in the console or by calling the API.

      User Data

      For more information, see Overview of ECS instance user data.

      Custom Node Name

      Specify whether to use a custom node name. If you choose to use a custom node name, the name of the node, name of the ECS instance, and hostname of the ECS instance are changed.

      A custom node name consists of a prefix, an IP address, and a suffix.
      • A custom node name must be 2 to 64 characters in length.
      • The prefix and suffix can contain letters, digits, hyphens (-), and periods (.). The prefix and suffix must start with a letter and cannot end with a hyphen (-) or period (.). The prefix and suffix cannot contain consecutive hyphens (-) or periods (.).
      • Due to the ECS instance limit, the prefix is required. The suffix is optional.
      • For a Windows node that uses a custom node name, the hostname of the ECS instance is fixed to the IP address of the node. In the hostname, hyphens (-) are used to replace the periods (.) in the IP address. The hostname does not include the prefix or suffix.
      For example, the node IP address is 192.1xx.x.xx, the prefix is aliyun.com, and the suffix is test.
      • If the node runs Linux, the name of the node, name of the ECS instance, and hostname of the ECS instance are aliyun.com192.1xx.x.xxtest.
      • If the node runs Windows, the hostname of the ECS instance is 192-1xx-x-xx, and the names of the node and ECS instance are aliyun.com192.1xx.x.xxtest.
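
      The example above can be reproduced with a short sketch; the prefix, IP address, and suffix are the illustrative values used in this topic.

      prefix, ip, suffix = "aliyun.com", "192.1xx.x.xx", "test"

      # Linux: node name, ECS instance name, and hostname all use prefix + IP + suffix.
      linux_name = f"{prefix}{ip}{suffix}"          # aliyun.com192.1xx.x.xxtest

      # Windows: the hostname is the IP address with periods replaced by hyphens,
      # without the prefix or suffix.
      windows_hostname = ip.replace(".", "-")       # 192-1xx-x-xx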

      Node Port Range

      Set the node port range. The default port range is 30000 to 32767.

      Taints

      Add taints to all worker nodes in the cluster.
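
      Taints added here apply to all worker nodes, so only pods that carry a matching toleration are scheduled onto them. The following is a minimal sketch that uses the open source Kubernetes Python client; the taint key, value, and effect are illustrative.

      from kubernetes import client

      # Toleration that matches an example worker-node taint workload-type=gpu:NoSchedule.
      toleration = client.V1Toleration(
          key="workload-type", operator="Equal", value="gpu", effect="NoSchedule"
      )

      pod_spec = client.V1PodSpec(
          containers=[client.V1Container(name="app", image="nginx")],
          tolerations=[toleration],
      )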

      CPU Policy

      Set the CPU management policy.

      • none: This policy indicates that the default CPU affinity is used. This is the default policy.

      • static: This policy allows pods with specific resource characteristics on the node to be granted with enhanced CPU affinity and exclusivity.
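
      In upstream Kubernetes, the static policy grants exclusive CPUs to containers in Guaranteed pods that request whole CPUs (requests equal to limits, integer values). The following sketch shows such a container spec with illustrative resource values.

      from kubernetes import client

      # Guaranteed QoS with an integer CPU request: eligible for CPU pinning under the static policy.
      container = client.V1Container(
          name="latency-sensitive",
          image="nginx",
          resources=client.V1ResourceRequirements(
              requests={"cpu": "2", "memory": "4Gi"},
              limits={"cpu": "2", "memory": "4Gi"},
          ),
      )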

      RDS Whitelist

      Add the IP addresses of the cluster nodes to a whitelist of the ApsaraDB RDS instance so that the nodes can access the instance.

      Note

      To enable an ApsaraDB RDS instance to access the cluster, you must make sure that the instance is deployed in the VPC where the cluster is deployed.

  7. Click Next:Component Configurations to configure components.

    Parameter

    Description

    Ingress

    Specify whether to install an Ingress controller. By default, Nginx Ingress is selected.

    Service Discovery

    Specify whether to install NodeLocal DNSCache. By default, NodeLocal DNSCache is installed.

    NodeLocal DNSCache runs a Domain Name System (DNS) caching agent to improve the performance and stability of DNS resolution. For more information about NodeLocal DNSCache, see Configure NodeLocal DNSCache.

    Volume Plug-in

    By default, CSI is installed as the volume plug-in, and Dynamically Provision Volumes by Using Default NAS File Systems and CNFS, Enable NAS Recycle Bin, and Support Fast Data Restore is selected. ACK clusters can automatically bind Alibaba Cloud disks, Apsara File Storage NAS (NAS) file systems, and Object Storage Service (OSS) buckets to pods. For more information, see Storage management - CSI.

    Monitoring Agents

    Specify whether to install the CloudMonitor agent. By default, Install CloudMonitor Agent on ECS Instance is selected.

    Log Service

    Specify whether to enable Log Service. You can select an existing Log Service project or create one. By default, Enable Log Service is selected. When you create an application, you can enable Log Service with a few steps. For more information, see Collect log data from containers by using Log Service.

    By default, Install node-problem-detector and Create Event Center is selected. You can specify whether to enable the Kubernetes event center in the Log Service console. For more information, see Create and use an event center.

    Workflow Engine

    Specify whether to enable Alibaba Cloud Genomics Service (AGS).
    Note To use this feature, submit a ticket to apply to be added to a whitelist.
    • If you select this check box, the system automatically installs the AGS workflow plug-in when the cluster is created.
    • If you clear this check box, you must manually install the AGS workflow plug-in. For more information, see Introduction to AGS CLI.

    Cluster Inspection

    Specify whether to enable the cluster inspection feature for intelligent O&M. You can enable this feature to periodically check the resource quotas, resource usage, and component versions of a cluster and identify potential risks in the cluster. For more information, see Work with the cluster inspection feature.

  8. Click Next:Confirm Order.

  9. Select Terms of Service and click Create Cluster.

    Note

    It takes about 10 minutes to create an ACK dedicated cluster that contains multiple nodes.

Result

  • After the cluster is created, you can view the cluster on the Clusters page in the ACK console.

  • Click View Logs in the Actions column. On the page that appears, you can view the cluster log. To view detailed log information, click Stack events.

  • On the Clusters page, find the newly created cluster and click Details in the Actions column. On the details page of the cluster, click the Basic Information tab to view basic information about the cluster and click the Connection Information tab to view information about how to connect to the cluster.

    The following information is displayed:

    • API Server Public Endpoint: the IP address and port that the API server uses to provide services over the Internet. It allows you to manage the cluster by using kubectl or other tools on the client.

    • API Server Internal Endpoint: the IP address and port that the Kubernetes API server of the cluster uses to provide services within the cluster. The endpoint belongs to the Server Load Balancer (SLB) instance that is bound to the cluster. Three master nodes work as the backend servers of the SLB instance.

  • You can obtain the kubeconfig file of the cluster and use kubectl to connect to the cluster, and then run the kubectl get node command to query information about the nodes in the cluster.
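
    The same node query can be issued programmatically. The following is a minimal sketch with the open source Kubernetes Python client, assuming the kubeconfig file from the Connection Information tab has been saved to the default location (~/.kube/config).

    from kubernetes import client, config

    # Load the kubeconfig file from the default location.
    config.load_kube_config()

    # Equivalent of "kubectl get node".
    for node in client.CoreV1Api().list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)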
