
Container Service for Kubernetes:Use a custom CNI plugin in an ACK cluster

Last Updated:Aug 02, 2025

The Terway and Flannel Container Network Interface (CNI) plugins that ACK provides by default meet most container network requirements. However, for scenarios that require specific features from other CNI plugins, ACK lets you install custom CNI plugins in a cluster using the Bring Your Own Container Network Interface (BYOCNI) mode. This topic describes how to create an ACK managed cluster Pro without a CNI plugin and how to manually install a custom CNI plugin.

Disclaimer

The Container Network Interface (CNI) plugin affects not only the east-west and north-south traffic within a cluster but also the capabilities of some core components. For example, webhooks work only if the API Server can directly access the pods that back them.

Alibaba Cloud does not provide a Service-Level Agreement (SLA) for failures caused by custom CNI plugins in an ACK managed cluster Pro. You are responsible for managing the network capabilities, troubleshooting, and resolving failures related to the custom CNI plugin. ACK does not provide technical support for custom CNI plugins.

If you require CNI-related support, use the CNI plugins that ACK provides or use commercial CNI plugins to receive professional technical support from third parties.

Precautions

If your custom CNI plugin uses an overlay network, the API Server of the ACK managed cluster Pro cannot reach pod IP addresses and therefore cannot call any webhooks. This affects all components that depend on webhooks, such as metrics-server.

Step 1: Create a BYOCNI cluster

  1. You can create an ACK managed cluster Pro without a CNI plugin (a BYOCNI cluster) only by calling the CreateCluster API operation or by using Terraform. In both cases, you must disable the kube-flannel-ds component when you create the cluster.

    Use OpenAPI:

    "addons": [
        {
            "name": "kube-flannel-ds",
            "disabled": true
        }
    ]

    Use Terraform:

    addons {
      name     = "kube-flannel-ds"
      disabled = true
    }
  2. (Optional) If you use the VPC route mode, you must configure the cloud-controller-manager component. For more information, see cloud-controller-manager configurations.

    Use OpenAPI:

    "addons": [
        {
            "name": "cloud-controller-manager",
            "config": "{\"EnableCloudRoutes\":\"true\",\"BackendType\":\"NodePort\"}"
        }
    ]

    Use Terraform:

    addons {
      name = "cloud-controller-manager"
      config = jsonencode({
        EnableCloudRoutes = "true"
        BackendType       = "NodePort"
      })
    }
  3. After the cluster is created, all nodes have a status of NotReady because no CNI plugin is installed. This is the expected behavior. The node status automatically changes to Ready after you install a CNI plugin.

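The addon settings shown in this step can be assembled programmatically before calling CreateCluster. A minimal Python sketch of the payload construction, assuming the example values from this topic (the API client call itself is omitted):

```python
import json

# "addons" parameter for the CreateCluster API call: disable the default
# Flannel CNI plugin (required for BYOCNI) and, for VPC route mode only,
# configure the cloud-controller-manager add-on.
addons = [
    {"name": "kube-flannel-ds", "disabled": True},
    {
        "name": "cloud-controller-manager",
        # The "config" field must be a JSON string, so it is serialized separately.
        "config": json.dumps(
            {"EnableCloudRoutes": "true", "BackendType": "NodePort"},
            separators=(",", ":"),
        ),
    },
]

payload = json.dumps(addons)
print(payload)
```

Omit the cloud-controller-manager entry if you do not use the VPC route mode.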

Step 2: Install a custom CNI plugin

Important

The following steps describe how to install Cilium in VPC route mode and are for reference only. The operations may vary depending on the CNI plugin that you use.

Before you perform the operations in this example, make sure that you have connected to the cluster using kubectl and have installed the Helm command-line interface (CLI). For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster and Installing Helm.

The Cilium image in this example is stored in a repository outside China, so the image pull may fail. You can use one of the following solutions:

  • Solution 1: Subscribe to images from repositories outside China using Container Registry (ACR). For more information, see Subscribe to images from repositories outside China.

  • Solution 2: Create a Global Accelerator (GA) instance and use the GA global network acceleration service to directly pull container images from repositories outside China. For more information, see Use GA to accelerate the cross-region pulling of container images in an ACK cluster.
  1. Run the following command to add the Helm repository of Cilium.

    helm repo add cilium https://helm.cilium.io/
  2. Run the following command to install Cilium. Modify the parameters in the command based on your cluster network planning.

    helm install --set securityContext.privileged=true \
        --set routingMode=native \
        --set ipam.mode=kubernetes \
        --set ipMasqAgent.enabled=true \
        --set ipMasqAgent.config.nonMasqueradeCIDRs='{172.16.0.0/12,10.0.0.0/8}' \
        --set ipv4NativeRoutingCIDR=172.16.0.0/12 \
        cilium cilium/cilium --version 1.17.4 \
        --namespace kube-system

    Parameter description:

    • ipv4NativeRoutingCIDR: 172.16.0.0/12 is the pod CIDR block used by the cluster.

    • ipMasqAgent.config.nonMasqueradeCIDRs: 172.16.0.0/12 is the pod CIDR block used by the cluster, and 10.0.0.0/8 is the VPC CIDR block used by the cluster.

    Expected output:

    NAME: cilium
    LAST DEPLOYED: Fri Jul 18 16:34:50 2025
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.17.4.
    
    For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
  3. After the Cilium CNI plugin is installed, the status of the nodes changes to Ready.

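The CIDR values passed to Helm above must be mutually consistent: ipv4NativeRoutingCIDR should cover the pod CIDR, and nonMasqueradeCIDRs should contain both the pod CIDR and the VPC CIDR. A quick offline sanity check with Python's standard `ipaddress` module, assuming the example values from this topic:

```python
import ipaddress

# Example values from the helm install command above.
pod_cidr = ipaddress.ip_network("172.16.0.0/12")             # cluster pod CIDR
vpc_cidr = ipaddress.ip_network("10.0.0.0/8")                # VPC CIDR
native_routing_cidr = ipaddress.ip_network("172.16.0.0/12")  # ipv4NativeRoutingCIDR
non_masquerade = [ipaddress.ip_network(c) for c in ("172.16.0.0/12", "10.0.0.0/8")]

# Pod traffic inside the native routing CIDR is routed without encapsulation.
assert pod_cidr.subnet_of(native_routing_cidr)
# Traffic to pod and VPC addresses must not be masqueraded.
assert any(pod_cidr.subnet_of(n) for n in non_masquerade)
assert any(vpc_cidr.subnet_of(n) for n in non_masquerade)
print("CIDR settings are consistent")
```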

More configurations

When you create a BYOCNI cluster, you can specify additional parameters to customize the cluster behavior and better meet the requirements of your CNI plugin.

Assign a PodCIDR block to a node

Some CNI plugins depend on the PodCIDR property of a node to assign IP addresses to pods. You can specify container_cidr and node_cidr_mask when you create a cluster to set the pod CIDR block of the cluster and the subnet mask for each node.

If you configure container_cidr and node_cidr_mask, a PodCIDR block is assigned to each node in the cluster. Otherwise, no PodCIDR block is assigned to the nodes. For more information about how to configure these parameters, see CreateCluster.
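The relationship between container_cidr and node_cidr_mask determines how many nodes can receive a PodCIDR block and how many addresses each node gets. A small sketch using illustrative values (a /16 pod CIDR and a /24 per-node mask, not taken from this topic):

```python
import ipaddress

# Hypothetical example values; substitute your own network planning.
container_cidr = ipaddress.ip_network("172.16.0.0/16")  # cluster pod CIDR
node_cidr_mask = 24                                     # per-node PodCIDR prefix length

# Each node is assigned one subnet of size /node_cidr_mask from container_cidr.
node_subnets = list(container_cidr.subnets(new_prefix=node_cidr_mask))
addresses_per_node = 2 ** (32 - node_cidr_mask)

print(f"nodes that can receive a PodCIDR: {len(node_subnets)}")  # 256
print(f"addresses per node subnet: {addresses_per_node}")        # 256
print(f"first node PodCIDR: {node_subnets[0]}")                  # 172.16.0.0/24
```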

cloud-controller-manager configurations

The ACK cloud-controller-manager provides optional features for BYOCNI clusters. You can enable or disable these features by configuring the parameters of the cloud-controller-manager add-on when you create a cluster.

Parameter: EnableCloudRoutes

Default value: false

Required: No

Description: After PodCIDR blocks are assigned to nodes, if you use VPC route tables to enable communication between pods, you can enable the EnableCloudRoutes feature of the cloud-controller-manager. The cloud-controller-manager then automatically adds a route entry for the PodCIDR block of each node to the VPC route table. IP addresses allocated from the PodCIDR block can then be accessed directly within the VPC.

Parameter: BackendType

Default value: NodePort

Required: No

Description: The cloud-controller-manager processes Services of the LoadBalancer type. This includes creating CLB/NLB instances and adding backends to their server groups. By default, the cloud-controller-manager adds the IP addresses of cluster nodes to the backend server group of the load balancer. The load balancer forwards traffic to the nodes, and the nodes then forward the traffic to pods based on the Service forwarding rules. If your BYOCNI plugin assigns VPC IP addresses to pods, you can add the pod IP addresses directly to the backend server group without forwarding through nodes.

Valid values:

  • NodePort: The IP addresses of nodes are added to the backend server group of the load balancer. Traffic is forwarded to pods through the nodes.

  • Pod: The IP addresses of pods are added directly to the backend server group of the load balancer. The pod IP addresses must be VPC IP addresses.

Configuration example:

Use OpenAPI:

"addons": [
    {
        "name": "cloud-controller-manager",
        "config": "{\"EnableCloudRoutes\":\"true\",\"BackendType\":\"NodePort\"}"
    }
]

Use Terraform:

addons {
  name = "cloud-controller-manager"
  config = jsonencode({
    EnableCloudRoutes = "true"
    BackendType       = "NodePort"
  })
}
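If your CNI plugin assigns VPC IP addresses to pods and you want load balancers to target pods directly, set BackendType to Pod. A sketch of generating the corresponding escaped config string in Python (the surrounding CreateCluster call is assumed):

```python
import json

# cloud-controller-manager config with pods as direct load balancer backends.
ccm_config = {"EnableCloudRoutes": "true", "BackendType": "Pod"}

# The addon "config" field expects a JSON-encoded string.
config_string = json.dumps(ccm_config, separators=(",", ":"))
addon = {"name": "cloud-controller-manager", "config": config_string}
print(json.dumps(addon))
```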