All Products
Search
Document Center

Container Service for Kubernetes:Use Terraform to create an ACK managed cluster

Last Updated:Aug 04, 2023

Terraform is an open source tool provided by HashiCorp. Terraform allows you to securely and efficiently preview, configure, and manage cloud infrastructures and resources. You can use Terraform to automatically create and update Alibaba Cloud infrastructures and resources, and manage versions based on your requirements. This topic describes how to use Terraform to create a Container Service for Kubernetes (ACK) managed cluster.

Prerequisites

  • Terraform is installed.

    Note

    You must install Terraform 0.12.28 or later. You can run the terraform --version command to query the Terraform version.

    • By default, Cloud Shell is preinstalled with Terraform and configured with your account information. You do not need to modify the configurations.

    • If you do not use Cloud Shell, you can directly install Terraform. For more information, see Install and configure Terraform in the local PC.

  • Your account information is configured. You can specify identity information in environment variables.

    export ALICLOUD_ACCESS_KEY="************"
    export ALICLOUD_SECRET_KEY="************"
    export ALICLOUD_REGION="cn-beijing"
    Note

    To improve the flexibility and security of permission management, we recommend that you create a Resource Access Management (RAM) user named Terraform. Then, create an AccessKey pair for the RAM user and grant permissions to the RAM user. For more information, see Create a RAM user and Grant permissions to the RAM user.

  • Cskpro container service is activated.

  • View the variable.tf file that is used in this example

    variable "availability_zone" {
      description = "The availability zones of vswitches."
      default     = ["cn-shenzhen-d", "cn-shenzhen-e", "cn-shenzhen-f"]
    }
    
    variable "node_vswitch_ids" {
      description = "List of existing node vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    variable "node_vswitch_cirds" {
      description = "List of cidr blocks used to create several new vswitches when 'node_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
    }
    
    variable "terway_vswitch_ids" {
      description = "List of existing pod vswitch ids for terway."
      type        = list(string)
      default     = []
    }
    
    variable "terway_vswitch_cirds" {
      description = "List of cidr blocks used to create several new vswitches when 'terway_vswitch_ids' is not specified."
      type        = list(string)
      default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
    }
    
    # Node Pool worker_instance_types
    variable "worker_instance_types" {
      description = "The ecs instance types used to launch worker nodes."
      default     = ["ecs.g6.2xlarge", "ecs.g6.xlarge"]
    }
    
    # Password for Worker nodes
    variable "password" {
      description = "The password of ECS instance."
    }
    
    # Cluster Addons
    variable "cluster_addons" {
      type = list(object({
        name      = string
        config    = string
      }))
    
      default = [
        {
          "name"     = "terway-eniip",
          "config"   = "",
        },
        {
          "name"     = "logtail-ds",
          "config"   = "{\"IngressDashboardEnabled\":\"true\"}",
        },
        {
          "name"     = "nginx-ingress-controller",
          "config"   = "{\"IngressSlbNetworkType\":\"internet\"}",
        },
        {
          "name"     = "arms-prometheus",
          "config"   = "",
          "disabled": false,
        },
        {
          "name"     = "ack-node-problem-detector",
          "config"   = "{\"sls_project_name\":\"\"}",
          "disabled": false,
        },
        {
          "name"     = "csi-plugin",
          "config"   = "",
        },
        {
          "name"     = "csi-provisioner",
          "config"   = "",
        }
      ]
    }
    
    
    
    # Cluster Addons for Flannel
    variable "cluster_addons_flannel" {
      type = list(object({
        name      = string
        config    = string
      }))
    
      default = [
        {
          "name"     = "flannel",
          "config"   = "",
        },
        {
          "name"     = "logtail-ds",
          "config"   = "{\"IngressDashboardEnabled\":\"true\"}",
        },
        {
          "name"     = "nginx-ingress-controller",
          "config"   = "{\"IngressSlbNetworkType\":\"internet\"}",
        },
        {
          "name"     = "arms-prometheus",
          "config"   = "",
          "disabled": false,
        },
        {
          "name"     = "ack-node-problem-detector",
          "config"   = "{\"sls_project_name\":\"\"}",
          "disabled": false,
        },
        {
          "name"     = "csi-plugin",
          "config"   = "",
        },
        {
          "name"     = "csi-provisioner",
          "config"   = "",
        }
      ]
    }
             
        

Use Terraform to create an ACK managed cluster when Flannel is selected

  1. Create a working directory and a file named main.tf in the directory.

    The main.tf file is used to configure the following settings for Terraform:

    • Create a virtual private cloud (VPC) and create a vSwitch in the VPC.

    • Create an ACK managed cluster.

    • Create a node pool that contains two nodes.

      Click to view details

      #provider, use alicloud
      provider "alicloud" {
      }
      variable "k8s_name_prefix" {
        description = "The name prefix used to create managed kubernetes cluster."
        default     = "tf-ack-shenzhen"
      }
      resource "random_uuid" "this" {}
      # The default resource names. 
      locals {
        k8s_name_terway         = substr(join("-", [var.k8s_name_prefix,"terway"]), 0, 63)
        k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix,"flannel"]), 0, 63)
        k8s_name_ask            = substr(join("-", [var.k8s_name_prefix,"ask"]), 0, 63)
        new_vpc_name            = "tf-vpc-172-16"
        new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
        new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
        new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
        nodepool_name           = "default-nodepool"
        log_project_name        = "log-for-${local.k8s_name_terway}"
      }
      # The Elastic Compute Service (ECS) instance specifications of the worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests. 
      data "alicloud_instance_types" "default" {
        cpu_core_count       = 8
        memory_size          = 32
        availability_zone    = var.availability_zone[0]
        kubernetes_node_role = "Worker"
      }
      // The zone that has sufficient ECS instances of the required specifications. 
      data "alicloud_zones" "default" {
        available_instance_type = data.alicloud_instance_types.default.instance_types[0].id
      }
      # The VPC. 
      resource "alicloud_vpc" "default" {
        vpc_name   = local.new_vpc_name
        cidr_block = "172.16.0.0/12"
      }
      # The node vSwitch. 
      resource "alicloud_vswitch" "vswitches" {
        count             = length(var.node_vswitch_ids) > 0 ? 0 : length(var.node_vswitch_cirds)
        vpc_id            = alicloud_vpc.default.id
        cidr_block        = element(var.node_vswitch_cirds, count.index)
        availability_zone = element(var.availability_zone, count.index)
      }
      
      # The ACK managed cluster. 
      resource "alicloud_cs_managed_kubernetes" "flannel" {
        # The name of the cluster. 
        name                      = local.k8s_name_flannel
        # Create an ACK Pro cluster. 
        cluster_spec              = "ack.pro.small"
        version                   = "1.22.10-aliyun.1"
        # The vSwitches of the new Kubernetes cluster. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone. 
        worker_vswitch_ids        = split(",", join(",", alicloud_vswitch.vswitches.*.id))
      
        # Specify whether to create a NAT gateway when the system creates the Kubernetes cluster. Default value: true. 
        new_nat_gateway           = true
        # The pod CIDR block. If you set cluster_network_type to flannel, this parameter is required. The pod CIDR block cannot be the same as the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the pod CIDR block after the cluster is created. Maximum number of hosts in the cluster: 256. 
        pod_cidr                  = "10.10.0.0/16"
        # The Service CIDR block. The Service CIDR block cannot be the same as the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created. 
        service_cidr              = "10.12.0.0/16"
        # Specify whether to create an Internet-facing Server Load Balancer (SLB) instance for the API server of the cluster. Default value: false. 
        slb_internet_enabled      = true
      
        # Enable Ram Role for ServiceAccount
        enable_rrsa = true
      
        # The logs of the control plane. 
        control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"]
      
        # The components. 
        dynamic "addons" {
          for_each = var.cluster_addons_flannel
          content {
            name     = lookup(addons.value, "name", var.cluster_addons_flannel)
            config   = lookup(addons.value, "config", var.cluster_addons_flannel)
            # disabled = lookup(addons.value, "disabled", var.cluster_addons_flannel)
          }
        }
      
        # The container runtime. 
        runtime = {
          name    = "docker"
          version = "19.03.15"
        }
      }
      
      # The node pool. 
      resource "alicloud_cs_kubernetes_node_pool" "flannel" {
        # The name of the cluster. 
        cluster_id            = alicloud_cs_managed_kubernetes.flannel.id
        # The name of the node pool. 
        name                  = local.nodepool_name
        # The vSwitches of the new Kubernetes cluster. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone. 
        vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id))
      
        # Worker ECS Type and ChargeType
        # instance_types      = [data.alicloud_instance_types.default.instance_types[0].id]
        instance_types        = var.worker_instance_types
        instance_charge_type  = "PrePaid"
        period                = 1
        period_unit           = "Month"
        auto_renew            = true
        auto_renew_period     = 1
      
        # customize worker instance name
        # node_name_mode      = "customized,ack-flannel-shenzhen,ip,default"
      
        #Container Runtime
        runtime_name          = "docker"
        runtime_version       = "19.03.15"
      
        # The number of worker nodes in the Kubernetes cluster. Default value: 3. Maximum value: 50. 
        desired_size          = 2
        # The password that is used to log on to the cluster by using SSH. 
        password              = var.password
      
        # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
        install_cloud_monitor = true
      
        # The type of the system disks of the nodes. Valid values: cloud_ssd and cloud_efficiency. Default value: cloud_efficiency. 
        system_disk_category  = "cloud_efficiency"
        system_disk_size      = 100
      
        # OS Type
        image_type            = "AliyunLinux"
      
        # Configurations of the data disks of the nodes. 
        data_disks {
          # The disk type. 
          category = "cloud_essd"
          # The disk size. 
          size     = 120
        }
      }
  2. Run the following command to initialize the environment for Terraform:

    terraform init

    Expected output:

    Initializing the backend...
    
    Initializing provider plugins...
    - Checking for available provider plugins...
    - Downloading plugin for provider "alicloud" (hashicorp/alicloud) 1.90.1...
    ...
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  3. Run the following command to create an execution plan:

    terraform plan

    Expected output:

    Refreshing Terraform state in-memory prior to plan...
    The refreshed state will be used to calculate this plan, but will not be
    persisted to local or remote state storage.
    ...
    Plan: 5 to add, 0 to change, 0 to destroy.
    ...
  4. Run the following command to create the cluster:

    terraform apply

    Expected output:

    ...
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    ...
    alicloud_cs_managed_kubernetes.flannel: Creation complete after 8m26s [id=************]
    
    Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Use Terraform to create an ACK managed cluster when Terway is selected

  1. Create a working directory and a file named main.tf in the directory.

    The main.tf file is used to configure the following settings for Terraform:

    • Create a VPC and create two vSwitches in the VPC.

    • Create an ACK managed cluster.

    • Create a node pool that contains two nodes.

    • Create a node pool that has auto scaling enabled.

    • Create a managed node pool.

    Click to view details

    #provider, use alicloud
    provider "alicloud" {
    }
    variable "k8s_name_prefix" {
      description = "The name prefix used to create managed kubernetes cluster."
      default     = "tf-ack-shenzhen"
    }
    resource "random_uuid" "this" {}
    # The default resource names. 
    locals {
      k8s_name_terway         = substr(join("-", [var.k8s_name_prefix,"terway"]), 0, 63)
      k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix,"flannel"]), 0, 63)
      k8s_name_ask            = substr(join("-", [var.k8s_name_prefix,"ask"]), 0, 63)
      new_vpc_name            = "tf-vpc-172-16"
      new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
      new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
      new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
      nodepool_name           = "default-nodepool"
      managed_nodepool_name   = "managed-node-pool"
      autoscale_nodepool_name = "autoscale-node-pool"
      log_project_name        = "log-for-${local.k8s_name_terway}"
    }
    # The ECS instance specifications of the worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests. 
    data "alicloud_instance_types" "default" {
      cpu_core_count       = 8
      memory_size          = 32
      availability_zone    = var.availability_zone[0]
      kubernetes_node_role = "Worker"
    }
    // The zone that has sufficient ECS instances of the required specifications. 
    data "alicloud_zones" "default" {
      available_instance_type = data.alicloud_instance_types.default.instance_types[0].id
    }
    # The VPC. 
    resource "alicloud_vpc" "default" {
      vpc_name   = local.new_vpc_name
      cidr_block = "172.16.0.0/12"
    }
    # The node vSwitch. 
    resource "alicloud_vswitch" "vswitches" {
      count             = length(var.node_vswitch_ids) > 0 ? 0 : length(var.node_vswitch_cirds)
      vpc_id            = alicloud_vpc.default.id
      cidr_block        = element(var.node_vswitch_cirds, count.index)
      availability_zone = element(var.availability_zone, count.index)
    }
    # The pod vSwitch. 
    resource "alicloud_vswitch" "terway_vswitches" {
      count             = length(var.terway_vswitch_ids) > 0 ? 0 : length(var.terway_vswitch_cirds)
      vpc_id            = alicloud_vpc.default.id
      cidr_block        = element(var.terway_vswitch_cirds, count.index)
      availability_zone = element(var.availability_zone, count.index)
    }
    # The ACK managed cluster. 
    resource "alicloud_cs_managed_kubernetes" "default" {
      # The name of the cluster. 
      name                      = local.k8s_name_terway
      # Create an ACK Pro cluster. 
      cluster_spec              = "ack.pro.small"
      version                   = "1.22.10-aliyun.1"
      # The vSwitches of the new Kubernetes cluster. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone. 
      worker_vswitch_ids        = split(",", join(",", alicloud_vswitch.vswitches.*.id))
    
      # The pod vSwitches. 
      pod_vswitch_ids           = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id))
    
      # Specify whether to create a NAT gateway when the system creates the Kubernetes cluster. Default value: true. 
      new_nat_gateway           = true
      # The pod CIDR block. If you set cluster_network_type to flannel, this parameter is required. The pod CIDR block cannot be the same as the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the pod CIDR block after the cluster is created. Maximum number of hosts in the cluster: 256. 
      # pod_cidr                  = "10.10.0.0/16"
      # The Service CIDR block. The Service CIDR block cannot be the same as the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created. 
      service_cidr              = "10.11.0.0/16"
      # Specify whether to create an Internet-facing SLB instance for the API server of the cluster. Default value: false. 
      slb_internet_enabled      = true
    
      # Enable Ram Role for ServiceAccount
      enable_rrsa = true
    
      # The logs of the control plane. 
      control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"]
    
      # The components. 
      dynamic "addons" {
        for_each = var.cluster_addons
        content {
          name     = lookup(addons.value, "name", var.cluster_addons)
          config   = lookup(addons.value, "config", var.cluster_addons)
          # disabled = lookup(addons.value, "disabled", var.cluster_addons)
        }
      }
    
      runtime = {
        name    = "docker"
        version = "19.03.15"
      }
    }
    
    # The regular node pool. 
    resource "alicloud_cs_kubernetes_node_pool" "default" {
      # The name of the cluster. 
      cluster_id            = alicloud_cs_managed_kubernetes.default.id
      # The name of the node pool. 
      name = local.nodepool_name
      # The vSwitches of the new Kubernetes cluster. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone. 
      vswitch_ids           = split(",", join(",", alicloud_vswitch.vswitches.*.id))
    
      # Worker ECS Type and ChargeType
      # instance_types      = [data.alicloud_instance_types.default.instance_types[0].id]
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PrePaid"
      period                = 1
      period_unit           = "Month"
      auto_renew            = true
      auto_renew_period     = 1
    
      # customize worker instance name
      # node_name_mode      = "customized,ack-terway-shenzhen,ip,default"
    
      #Container Runtime
      runtime_name          = "docker"
      runtime_version       = "19.03.15"
    
      # The number of worker nodes in the Kubernetes cluster. Default value: 3. Maximum value: 50. 
      desired_size          = 2
      # The password that is used to log on to the cluster by using SSH. 
      password              = var.password
    
      # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      install_cloud_monitor = true
    
      # The type of the system disks of the nodes. Valid values: cloud_ssd and cloud_efficiency. Default value: cloud_efficiency. 
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
    
      # OS Type
      image_type            = "AliyunLinux"
    
      # Configurations of the data disks of the nodes. 
      data_disks {
        # The disk type. 
        category = "cloud_essd"
        # The disk size. 
        size     = 120
      }
    }
    
    # The managed node pool. 
    resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {
      # The name of the cluster. 
      cluster_id              = alicloud_cs_managed_kubernetes.default.id
      # The name of the node pool. 
      name = local.managed_nodepool_name
      # The vSwitches of the new Kubernetes cluster. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone. 
      vswitch_ids             = split(",", join(",", alicloud_vswitch.vswitches.*.id))
    
      # The number of worker nodes in the Kubernetes cluster. Default value: 3. Maximum value: 50. 
      desired_size            = 2
    
      # Managed Node Pool
      management {
        auto_repair     = true
        auto_upgrade    = true
        surge           = 1
        max_unavailable = 1
      }
    
      # Worker ECS Type and ChargeType
      # instance_types      = [data.alicloud_instance_types.default.instance_types[0].id]
      instance_types        = var.worker_instance_types
      instance_charge_type  = "PrePaid"
      period                = 1
      period_unit           = "Month"
      auto_renew            = true
      auto_renew_period     = 1
    
      # customize worker instance name
      # node_name_mode      = "customized,ack-terway-shenzhen,ip,default"
    
      #Container Runtime
      runtime_name          = "containerd"
      runtime_version       = "1.5.10"
    
    
      # The password that is used to log on to the cluster by using SSH. 
      password              = var.password
    
      # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      install_cloud_monitor = true
    
      # The type of the system disks of the nodes. Valid values: cloud_ssd and cloud_efficiency. Default value: cloud_efficiency. 
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
    
      # OS Type
      image_type            = "AliyunLinux"
    
      # Configurations of the data disks of the nodes. 
      data_disks {
        # The disk type. 
        category = "cloud_essd"
        # The disk size. 
        size     = 120
      }
    }
    
    # The node pool that has auto scaling enabled. 
    resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
      # The name of the cluster. 
      cluster_id                      = alicloud_cs_managed_kubernetes.default.id
      # The name of the node pool. 
      name = local.autoscale_nodepool_name
      # The vSwitches of the new Kubernetes cluster. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone. 
      vswitch_ids        = split(",", join(",", alicloud_vswitch.vswitches.*.id))
    
    
    
      # AutoScale Node Pool
      scaling_config {
        min_size = 1
        max_size = 10
      }
    
      # Worker ECS Type and ChargeType
      # instance_types      = [data.alicloud_instance_types.default.instance_types[0].id]
      instance_types        = var.worker_instance_types
    
    
      # customize worker instance name
      # node_name_mode      = "customized,ack-terway-shenzhen,ip,default"
    
      #Container Runtime
      runtime_name          = "containerd"
      runtime_version       = "1.5.10"
    
    
      # The password that is used to log on to the cluster by using SSH. 
      password              = var.password
    
      # Specify whether to install the CloudMonitor agent on the nodes in the cluster. 
      install_cloud_monitor = true
    
      # The type of the system disks of the nodes. Valid values: cloud_ssd and cloud_efficiency. Default value: cloud_efficiency. 
      system_disk_category  = "cloud_efficiency"
      system_disk_size      = 100
    
      # OS Type
      image_type            = "AliyunLinux"
    
      # Configurations of the data disks of the nodes. 
      data_disks {
        # The disk type. 
        category = "cloud_essd"
        # The disk size. 
        size     = 120
      }
    }
  2. Run the following command to initialize the environment for Terraform:

    terraform init

    Expected output:

    Initializing the backend...
    
    Initializing provider plugins...
    - Checking for available provider plugins...
    - Downloading plugin for provider "alicloud" (hashicorp/alicloud) 1.90.1...
    ...
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  3. Run the following command to create an execution plan:

    terraform plan

    Expected output:

    Refreshing Terraform state in-memory prior to plan...
    The refreshed state will be used to calculate this plan, but will not be
    persisted to local or remote state storage.
    ...
    Plan: 8 to add, 0 to change, 0 to destroy.
    ...
  4. Run the following command to create the resources:

    terraform apply

    Expected output:

    ...
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    ...
    alicloud_cs_managed_kubernetes.default: Creation complete after 8m26s [id=************]
    
    Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

Use Terraform to delete an ACK managed cluster

You can run the following command to delete an ACK managed cluster that is created by using Terraform:

terraform destroy

Expected output:

...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
...
Destroy complete! Resources: 5 destroyed.