
Container Service for Kubernetes: Use Terraform to create an ACK managed cluster

Last Updated: Nov 11, 2025

This topic describes how to use Terraform to create an ACK managed cluster.


Prerequisites

  • Container Service for Kubernetes (ACK) is activated. For more information about how to use Terraform to activate ACK, see Use Terraform to activate ACK and assign service roles to ACK.

  • An AccessKey pair is created for the Resource Access Management (RAM) user that you log on as.

    Note

    By default, an Alibaba Cloud account has full permissions on all resources that belong to the account. We recommend that you use a RAM user, which has only limited permissions on resources. This minimizes security risks if your credentials are compromised.

  • The following policy is attached to the RAM user that runs the Terraform commands. The policy contains the minimum permissions required to run the commands in this topic. For more information, see Grant permissions to a RAM user.

    This access policy allows the RAM user to create, view, and delete VPCs, vSwitches, and ACK clusters.

    {
      "Version": "1",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "vpc:CreateVpc",
            "vpc:CreateVSwitch",
            "cs:CreateCluster",
            "vpc:DescribeVpcAttribute",
            "vpc:DescribeVSwitchAttributes",
            "vpc:DescribeRouteTableList",
            "vpc:DescribeNatGateways",
            "cs:DescribeTaskInfo",
            "cs:DescribeClusterDetail",
            "cs:GetClusterCerts",
            "cs:CheckControlPlaneLogEnable",
            "cs:CreateClusterNodePool",
            "cs:DescribeClusterNodePoolDetail",
            "cs:ModifyClusterNodePool",
            "vpc:DeleteVpc",
            "vpc:DeleteVSwitch",
            "cs:DeleteCluster",
            "cs:DeleteClusterNodepool"
          ],
          "Resource": "*"
        }
      ]
    }
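
    If you prefer to manage this policy as code, the following is a minimal sketch that creates the custom policy and attaches it to an existing RAM user by using the alicloud provider. The policy name, the local policy.json file, and the user name are placeholders, and creating or attaching policies requires RAM administration permissions beyond the policy shown above.

    # Sketch only: create the custom policy above and attach it to a RAM user.
    resource "alicloud_ram_policy" "ack_minimum" {
      policy_name     = "AckTerraformMinimumAccess"        # Placeholder policy name.
      policy_document = file("${path.module}/policy.json") # The JSON policy shown above, saved locally.
    }

    resource "alicloud_ram_user_policy_attachment" "ack_minimum" {
      policy_name = alicloud_ram_policy.ack_minimum.policy_name
      policy_type = "Custom"
      user_name   = "my-terraform-user" # Placeholder. Replace with your RAM user name.
    }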
  • Prepare a Terraform runtime environment. You can run Terraform using one of the following methods.

    • Use Terraform in Terraform Explorer: Alibaba Cloud provides an online runtime environment for Terraform that you can use without installation. This method is suitable for scenarios in which you want to quickly and conveniently test and debug Terraform at no cost.

    • Cloud Shell: Cloud Shell is preinstalled with Terraform and configured with your identity credentials. You can run Terraform commands directly in Cloud Shell. This method is a fast, convenient, and low-cost way to use Terraform.

    • Use Terraform in Resource Orchestration Service (ROS): ROS provides managed capabilities for Terraform. You can create Terraform templates, define Alibaba Cloud, AWS, or Azure resources, and configure resource parameters and dependencies.

    • Install and configure Terraform on your computer: This method is suitable for scenarios where you have poor network connectivity or need a custom development environment.

    Important

    Make sure that you use Terraform v0.12.28 or later. To check your current version, run the terraform --version command.
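
    You can also make the version requirement explicit in the configuration itself so that Terraform fails fast on unsupported versions. The following is a minimal sketch.

    # Sketch only: pin the minimum Terraform version in the configuration.
    terraform {
      required_version = ">= 0.12.28"
    }

    # If no credentials are set in the provider block, the alicloud provider reads
    # them from the ALICLOUD_ACCESS_KEY and ALICLOUD_SECRET_KEY environment variables.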

Resources used

This example uses the following resources and data sources:

  • alicloud_vpc: creates a virtual private cloud (VPC).

  • alicloud_vswitch: creates vSwitches for nodes and pods.

  • alicloud_instance_types: queries ECS instance types that meet specific requirements.

  • alicloud_cs_managed_kubernetes: creates an ACK managed cluster.

  • alicloud_cs_kubernetes_node_pool: creates node pools for the cluster.

Note

Some resources in this example incur fees. Release the resources when they are no longer needed to avoid unnecessary charges.

Generate Terraform request parameters from the console

If the examples in this topic do not contain the configuration that you need, or if your parameter combination is invalid, you can generate the required request parameters from the console. To do this, perform the following steps:

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, click Cluster Templates.

  3. In the dialog box that appears, select the type of cluster that you want to create, click Create, and then configure the cluster on the Cluster Configurations page.

  4. After you complete the configuration, click Equivalent Code in the upper-right corner of the Confirm Configuration page.

  5. In the sidebar, click the Terraform tab. The parameters required to create the cluster are displayed. You can then copy and use these parameters.

Use Terraform to create an ACK managed cluster (Terway)

This example creates an ACK managed cluster that contains a regular node pool, a managed node pool, and an auto-scaling node pool. By default, a series of components are installed on the cluster, such as Terway (network component), csi-plugin (storage component), csi-provisioner (storage component), loongcollector (log collection component), Nginx Ingress Controller, ack-arms-prometheus (monitoring component), and ack-node-problem-detector (node diagnostics component).

  1. Create a working directory. In the working directory, create a configuration file named main.tf and copy the following code to the file. Optional output blocks are sketched after the configuration.

    provider "alicloud" {
      region = var.region_id
    }
    
    variable "region_id" {
      type    = string
      default = "cn-shenzhen"
    }
    
    variable "cluster_spec" {
      type        = string
      description = "The cluster specifications of kubernetes cluster,which can be empty. Valid values:ack.standard : Standard managed clusters; ack.pro.small : Professional managed clusters."
      default     = "ack.pro.small"
    }
    
    # The zones of the vSwitches.
    variable "availability_zone" {
      description = "The availability zones of vswitches."
      default     = ["cn-shenzhen-c", "cn-shenzhen-e", "cn-shenzhen-f"]
    }
    
    # A list of vSwitch IDs.
    variable "node_vswitch_ids" {
      description = "List of existing node vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    # A list of CIDR blocks for creating new vSwitches.
    variable "node_vswitch_cidrs" {
      description = "List of cidr blocks used to create several new vswitches when 'node_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
    }
    
    # The configuration of the Terway network component. If this is empty, a new Terway vSwitch is created based on terway_vswitch_cidrs by default.
    variable "terway_vswitch_ids" {
      description = "List of existing pod vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    # The CIDR blocks for creating vSwitches for Terway when terway_vswitch_ids is not specified.
    variable "terway_vswitch_cidrs" {
      description = "List of cidr blocks used to create several new vswitches when 'terway_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
    }
    
    # The ECS instance types for launching worker nodes.
    variable "worker_instance_types" {
      description = "The ecs instance types used to launch worker nodes."
      default     = ["ecs.g6.2xlarge", "ecs.g6.xlarge"]
    }
    
    # The components to install in the ACK cluster. They include Terway (network component), csi-plugin (storage component), csi-provisioner (storage component), loongcollector (log component), Nginx Ingress Controller, ack-arms-prometheus (monitoring component), and ack-node-problem-detector (node diagnostics component).
    variable "cluster_addons" {
      type = list(object({
        name   = string
        config = string
      }))
      default = [
        {
          "name"   = "terway-eniip",
          "config" = "",
        },
        {
          "name"   = "loongcollector",
          "config" = "{\"IngressDashboardEnabled\":\"true\"}",
        },
        {
          "name"   = "nginx-ingress-controller",
          "config" = "{\"IngressSlbNetworkType\":\"internet\"}",
        },
        {
          "name"   = "arms-prometheus",
          "config" = "",
        },
        {
          "name"   = "ack-node-problem-detector",
          "config" = "{\"sls_project_name\":\"\"}",
        },
        {
          "name"   = "csi-plugin",
          "config" = "",
        },
        {
          "name"   = "csi-provisioner",
          "config" = "",
        }
      ]
    }
    
    # The name prefix for the ACK managed cluster.
    variable "k8s_name_prefix" {
      description = "The name prefix used to create managed kubernetes cluster."
      default     = "tf-ack-shenzhen"
    }
    
    # Default resource names.
    locals {
      k8s_name_terway         = substr(join("-", [var.k8s_name_prefix, "terway"]), 0, 63)
      k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix, "flannel"]), 0, 63)
      k8s_name_ask            = substr(join("-", [var.k8s_name_prefix, "ask"]), 0, 63)
      new_vpc_name            = "tf-vpc-172-16"
      new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
      new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
      new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
      nodepool_name           = "default-nodepool"
      managed_nodepool_name   = "managed-node-pool"
      autoscale_nodepool_name = "autoscale-node-pool"
      log_project_name        = "log-for-${local.k8s_name_terway}"
    }
    
    # Queries ECS instance types with 8 vCPUs and 32 GiB of memory that can serve as worker nodes in the first zone.
    data "alicloud_instance_types" "default" {
      cpu_core_count       = 8
      memory_size          = 32
      availability_zone    = var.availability_zone[0]
      kubernetes_node_role = "Worker"
    }
    
    # VPC.
    resource "alicloud_vpc" "default" {
      vpc_name   = local.new_vpc_name
      cidr_block = "172.16.0.0/12"
    }
    
    # Node vSwitches.
    resource "alicloud_vswitch" "vswitches" {
      count      = length(var.node_vswitch_ids) > 0 ? 0 : length(var.node_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.node_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    
    # Pod vSwitches.
    resource "alicloud_vswitch" "terway_vswitches" {
      count      = length(var.terway_vswitch_ids) > 0 ? 0 : length(var.terway_vswitch_cidrs)
      vpc_id     = alicloud_vpc.default.id
      cidr_block = element(var.terway_vswitch_cidrs, count.index)
      zone_id    = element(var.availability_zone, count.index)
    }
    
    # ACK managed cluster.
    resource "alicloud_cs_managed_kubernetes" "default" {
      name                         = local.k8s_name_terway                                         # The name of the Kubernetes cluster.
      cluster_spec                 = var.cluster_spec                                              # Creates a Pro edition cluster.
      vswitch_ids                  = split(",", join(",", alicloud_vswitch.vswitches.*.id))        # The vSwitches for the node pool. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone.
      pod_vswitch_ids              = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id)) # Pod vSwitches.
      new_nat_gateway              = true                                                          # Specifies whether to create a new NAT Gateway when creating the Kubernetes cluster. Default value: true.
      service_cidr                 = "10.11.0.0/16"                                                # The Service CIDR block. It cannot overlap with the VPC CIDR block or with the CIDR blocks of other Kubernetes clusters in the VPC, and cannot be changed after the cluster is created.
      slb_internet_enabled         = true                                                          # Specifies whether to create an Internet-facing Server Load Balancer instance for the API server. Default value: false.
      enable_rrsa                  = true
      control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"] # Control plane logs.
      dynamic "addons" {                                                      # Component management.
        for_each = var.cluster_addons
        content {
          name   = addons.value.name
          config = addons.value.config
        }
      }
    }
    
    # Regular node pool.
    resource "alicloud_cs_kubernetes_node_pool" "default" {
      cluster_id            = alicloud_cs_managed_kubernetes.default.id              # The ID of the cluster to which the node pool belongs.
      node_pool_name        = local.nodepool_name                                    # The name of the node pool.
      vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitches for the node pool. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone.
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PostPaid"
      desired_size          = 2            # The expected number of nodes in the node pool.
      install_cloud_monitor = true         # Specifies whether to install CloudMonitor on the Kubernetes nodes.
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux"
      data_disks {              # Data disk configuration for the node.
        category = "cloud_essd" # The category of the data disk.
        size     = 120          # The size of the data disk.
      }
    }
    
    # Creates a managed node pool.
    resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {
      cluster_id     = alicloud_cs_managed_kubernetes.default.id              # The ID of the cluster to which the node pool belongs.
      node_pool_name = local.managed_nodepool_name                            # The name of the node pool.
      vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id)) # The vSwitches for the node pool. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone.
      desired_size   = 0                                                      # The expected number of nodes in the node pool.
      management {
        auto_repair     = true
        auto_upgrade    = true
        max_unavailable = 1
      }
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PostPaid"
      install_cloud_monitor = true
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux"
      data_disks {
        category = "cloud_essd"
        size     = 120
      }
    }
    
    # Creates an auto-scaling node pool. The node pool can scale out to a maximum of 10 nodes and must maintain at least 1 node.
    resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
      cluster_id     = alicloud_cs_managed_kubernetes.default.id
      node_pool_name = local.autoscale_nodepool_name
      vswitch_ids    = split(",", join(",", alicloud_vswitch.vswitches.*.id))
      scaling_config {
        min_size = 1
        max_size = 10
      }
      instance_types        = var.worker_instance_types
      install_cloud_monitor = true         # Specifies whether to install CloudMonitor on the Kubernetes nodes.
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
      image_type            = "AliyunLinux3"
      data_disks {              # Data disk configuration for the node.
        category = "cloud_essd" # The category of the data disk.
        size     = 120          # The size of the data disk.
      }
    }
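
    Optionally, you can append output blocks to the main.tf file to print frequently used attributes after the resources are created. The following is a minimal sketch; the output names are illustrative. After terraform apply completes, run terraform output to view the values.

    # Sketch only: expose frequently used attributes after `terraform apply`.
    output "cluster_id" {
      description = "The ID of the ACK managed cluster."
      value       = alicloud_cs_managed_kubernetes.default.id
    }

    output "node_vswitch_ids" {
      description = "The IDs of the node vSwitches."
      value       = alicloud_vswitch.vswitches.*.id
    }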
  2. Run the following command to initialize the Terraform runtime environment.

    terraform init

    The following output indicates that the initialization is successful.

    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  3. Run the following command to create an execution plan and preview the changes.

    terraform plan
  4. Run the following command to create the cluster.

    terraform apply

    When prompted, enter yes and press the Enter key. Wait for the command to finish running. The following output indicates that the ACK cluster is created.

    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    
    ...
    alicloud_cs_managed_kubernetes.default: Creation complete after 5m48s [id=ccb53e72ec6c447c990762800********]
    ...
    
    Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
  5. Verify the results

    Run the terraform show command

    You can run the following command to view the details of the resources created by Terraform.

    terraform show

    Log on to the ACK console

    Log on to the Container Service for Kubernetes console to view the created cluster.

Clean up resources

When you no longer need the resources created or managed by Terraform, run the terraform destroy command to release them. For more information about terraform destroy, see Common commands.

terraform destroy