
Container Service for Kubernetes:ACK Managed Cluster Network Planning

Last Updated: Mar 26, 2026

Before creating a cluster, plan your VPC layout, CIDR block allocation, and CNI plug-in selection. Getting these right upfront prevents IP conflicts, avoids hard-to-fix topology mistakes, and reserves enough capacity for future growth. Three settings cannot be changed after cluster creation: the CNI plug-in, the Service CIDR block, and the container CIDR block (Flannel only).

Network scale planning

Region and zone

Within a region, all zones communicate over the internal network. Each zone is isolated from failures in other zones. A VPC is a regional resource and cannot span regions.

When selecting a region and zone, consider the following:

| Consideration | Description |
| --- | --- |
| Latency | Deploy resources close to your end users to minimize network latency. |
| Service availability | Alibaba Cloud services vary by region and zone. Confirm that the services you need are available in your target region and zone. |
| Cost | Cloud service pricing varies by region. Select a region that fits your budget. |
| High availability and disaster recovery | For workloads requiring high availability, deploy across zones within the same region. For stronger isolation, deploy across regions. |
| Compliance | Select a region that meets the data residency and regulatory requirements for your country or region. |

For regions where ACK is available, see Available regions.

A VPC cannot span regions. For multi-region systems, create one VPC per region and connect them using VPC peering connections, VPN Gateway, or Cloud Enterprise Network (CEN). A vSwitch is a zonal resource.

VPC count

Different VPCs are fully isolated from each other. Resources within the same VPC communicate over the private network by default.

| Scenario | Use case |
| --- | --- |
| Single VPC | Small deployment in one region with no network isolation requirement. Cost-sensitive environments where cross-VPC connection overhead is undesirable. |
| Multiple VPCs | Services deployed across multiple regions. Business systems that require strict network isolation (for example, production and staging). Complex architectures where separate teams manage their own resources independently. |
The default quota is 10 VPCs per region. To increase this quota, visit the Quota Management page or Quota Center.

vSwitch count

A vSwitch is a zonal resource. All cloud resources in a VPC are attached to vSwitches. All vSwitches within the same VPC can communicate with each other by default.

| Consideration | Guidance |
| --- | --- |
| Latency | Inter-zone latency within a region is low, but it adds up across complex call chains with cross-zone traffic. Balance high availability against latency requirements. |
| High availability | Create at least two vSwitches in different zones. If one zone fails, the other continues serving traffic. |
| Business division | Group vSwitches by function, for example, separate vSwitches for the web, logic, and data layers. Place Internet-facing services in a dedicated public vSwitch to simplify security rule management. |
The default quota is 150 vSwitches per VPC. To increase this quota, visit the Quota Management page or Quota Center.

Cluster size

| Node count | Use case | VPC planning | Zone planning |
| --- | --- | --- | --- |
| Fewer than 100 nodes | Non-core workloads | Single VPC | 1 (2 or more recommended) |
| 100 or more nodes | General workloads requiring multi-zone redundancy | Single VPC | 2 or more |
| 100 or more nodes | Core workloads requiring high reliability across multiple regions | Multiple VPCs | 2 or more |

Network connectivity planning

Single cluster in a single VPC

When you create a VPC, its CIDR block is fixed. When you create a cluster, assign separate CIDR blocks for pods and services. These blocks must not overlap with the VPC CIDR block.
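The non-overlap requirement is easy to verify programmatically before creating the cluster. Below is a minimal sketch using Python's standard ipaddress module; the CIDR values are illustrative, not prescribed:

```python
import ipaddress

# Illustrative CIDR blocks for one cluster in one VPC.
vpc = ipaddress.ip_network("192.168.0.0/16")     # VPC CIDR block
pod = ipaddress.ip_network("172.20.0.0/16")      # pod (container) CIDR block
service = ipaddress.ip_network("172.21.0.0/20")  # Service CIDR block

# None of the three blocks may overlap with any other.
for a, b in [(vpc, pod), (vpc, service), (pod, service)]:
    assert not a.overlaps(b), f"{a} overlaps {b}"
print("no overlaps")  # prints "no overlaps" for these values
```

Running this check as part of your provisioning scripts catches conflicts before they become unfixable cluster settings.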


Multiple clusters in a single VPC

Multiple clusters can share one VPC. When planning CIDR blocks for each cluster:

  • The VPC CIDR block is fixed at creation. Each cluster must use non-overlapping CIDR blocks for the VPC, services, and pods.

  • Pod CIDR blocks across clusters must not overlap. Service CIDR blocks are virtual networks and may overlap across clusters.

  • In Flannel mode, pod packets route through the VPC. ACK automatically adds routes to each pod CIDR block in the VPC route table.

In this topology, clusters have partial connectivity. Pods in one cluster can directly access pods and ECS instances in another cluster, but cannot access ClusterIP services in the other cluster—ClusterIP services are only reachable within their own cluster. To expose services across clusters, use LoadBalancer services or Ingress.

Multi-cluster inter-VPC connectivity

Plan inter-VPC connectivity when your deployment falls into one of the following scenarios.

Multi-region deployment

A VPC is a regional resource. For multi-region systems, create one VPC and cluster per region, then connect them using VPC peering connections, VPN Gateway, or Cloud Enterprise Network (CEN).


Isolation of multiple business systems

If business systems in one region require strict network isolation—for example, production and staging environments with different security requirements—deploy each in a separate VPC. Connect VPCs in the same region using VPC peering connections, VPN Gateway, or Cloud Enterprise Network (CEN).


Large-scale multi-team architecture

If multiple teams need independent VPCs to manage their own clusters and resources, deploy separate VPCs per team. This improves flexibility and simplifies access control.

Important

To avoid routing errors caused by IP conflicts in multi-cluster inter-VPC setups, verify for every new cluster that its CIDR blocks do not overlap with:

  • Any VPC CIDR block in the connected network

  • Any existing cluster's pod CIDR block

  • Any existing cluster's Service CIDR block
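A pairwise check over every block in the connected network can be sketched with Python's standard ipaddress module. The block names and CIDR values below are illustrative assumptions, not required values:

```python
import ipaddress
from itertools import combinations

# Every CIDR block participating in the connected network:
# VPC blocks plus each cluster's pod and Service blocks (illustrative).
blocks = {
    "vpc-a":     "192.168.0.0/16",
    "vpc-b":     "10.0.0.0/16",
    "pod-a":     "172.20.0.0/16",
    "pod-b":     "172.22.0.0/16",
    "service-a": "172.21.0.0/20",
    "service-b": "172.23.0.0/20",
}

# Collect every pair of blocks whose address ranges intersect.
conflicts = [
    (name1, name2)
    for (name1, c1), (name2, c2) in combinations(blocks.items(), 2)
    if ipaddress.ip_network(c1).overlaps(ipaddress.ip_network(c2))
]
assert not conflicts, f"overlapping blocks: {conflicts}"
```

Extending the `blocks` mapping with each new cluster's CIDR blocks before it is created keeps the whole connected network conflict-free.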

Cloud cluster to on-premises data center connectivity

When connecting a cluster to an on-premises data center (IDC), some VPC CIDR blocks may route to the IDC. Pod addresses must not overlap with those routed CIDR blocks. If the IDC needs to access pod addresses directly, configure routes in the Virtual Border Router (VBR) on the IDC side.


CNI plug-in planning

ACK managed clusters support two container network interface (CNI) plug-ins: Terway and Flannel. The CNI plug-in is set at cluster creation and cannot be changed afterward. Your choice determines which network features are available and how CIDR blocks are configured.

Choose a CNI plug-in

Use this table to select the plug-in that fits your requirements:

|  | Terway | Flannel |
| --- | --- | --- |
| Best for | Workloads requiring advanced network features: NetworkPolicy, fixed pod IPs, pod-bound elastic IP addresses (EIPs), or inter-cluster access | Simpler setups where these features are not needed |
| Pod IP source | IPs assigned from VPC vSwitches | IPs assigned from a virtual container CIDR block |
| IP pool size | Limited by vSwitch CIDR block size | Up to 65,536 pod IPs with a /16 container CIDR block |
| IPv6 | Supported | Not supported |

Feature comparison

| Feature | Terway | Flannel |
| --- | --- | --- |
| NetworkPolicy | Supported | Not supported |
| IPv4/IPv6 dual-stack | Supported | Not supported |
| Fixed pod IP | Supported | Not supported |
| Pod-bound EIP | Supported | Not supported |
| Inter-cluster access | Supported (when security groups allow the required ports) | Not supported |

Note: ACK uses a modified version of the Flannel plug-in optimized for Alibaba Cloud. It does not track upstream open source changes. For Flannel update history, see Flannel.

For a detailed feature comparison, see Terway vs. Flannel container network plug-ins.

CIDR block planning

Terway network mode

In Terway mode, pods get IP addresses from dedicated pod vSwitches in your VPC. Plan pod vSwitch CIDR blocks large enough to accommodate all pods across nodes and zones, including headroom for rolling upgrades.
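Capacity math here is straightforward: a /19 block provides 2^(32−19) = 8,192 addresses. A small sketch with Python's standard ipaddress module; note that the raw address count is an upper bound, since a vSwitch typically reserves a few addresses for system use:

```python
import ipaddress

def max_pod_ips(cidr: str) -> int:
    """Upper bound on assignable pod IPs for a pod vSwitch CIDR block.

    Returns the raw address count of the block; the actually usable
    number is slightly lower because the vSwitch reserves some addresses.
    """
    return ipaddress.ip_network(cidr).num_addresses

print(max_pod_ips("192.168.32.0/19"))  # 8192
print(max_pod_ips("172.20.0.0/16"))    # 65536
```

Run this for each planned pod vSwitch and compare the result against your expected peak pod count plus upgrade headroom.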


Configuration examples

Single-zone:

| VPC CIDR block | vSwitch CIDR block | Pod vSwitch CIDR block | Service CIDR block | Maximum assignable pod IPs |
| --- | --- | --- | --- | --- |
| 192.168.0.0/16 | Zone I: 192.168.0.0/19 | Zone I: 192.168.32.0/19 | 172.21.0.0/20 | 8,192 |

Multi-zone:

| VPC CIDR block | vSwitch CIDR block | Pod vSwitch CIDR block | Service CIDR block | Maximum assignable pod IPs |
| --- | --- | --- | --- | --- |
| 192.168.0.0/16 | Zone I: 192.168.0.0/19<br>Zone J: 192.168.64.0/19 | Zone I: 192.168.32.0/19<br>Zone J: 192.168.96.0/19 | 172.21.0.0/20 | 8,192 |

VPC

Use one of the following RFC-standard private CIDR blocks—or a subnet—as your VPC's primary IPv4 CIDR block: 192.168.0.0/16, 172.16.0.0/12, or 10.0.0.0/8. Valid mask lengths range from /8 to /28 (varies by block). Example: 192.168.0.0/16.

For multi-VPC or hybrid cloud deployments, use subnets of these RFC-standard private CIDR blocks with a mask of /16 or shorter. Ensure no CIDR block overlaps between VPCs or between a VPC and your on-premises data center.
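These rules can be expressed as a simple validation sketch using Python's standard ipaddress module. The helper name is hypothetical, and the sketch is simplified: it checks membership in the three private ranges and the mask-length bounds, but does not model the per-block minimum mask nuances:

```python
import ipaddress

# The three RFC 1918 private ranges accepted for VPC primary CIDR blocks.
RFC1918 = [ipaddress.ip_network(c) for c in
           ("192.168.0.0/16", "172.16.0.0/12", "10.0.0.0/8")]

def valid_vpc_cidr(cidr: str, hybrid: bool = False) -> bool:
    """Check a candidate VPC primary IPv4 CIDR block (simplified sketch).

    With hybrid=True, also require a mask of /16 or shorter, as
    recommended for multi-VPC and hybrid cloud deployments.
    """
    net = ipaddress.ip_network(cidr)
    if not any(net.subnet_of(private) for private in RFC1918):
        return False
    if hybrid and net.prefixlen > 16:
        return False
    return net.prefixlen <= 28  # upper bound on mask length

print(valid_vpc_cidr("192.168.0.0/16"))              # True
print(valid_vpc_cidr("172.16.0.0/24", hybrid=True))  # False: /24 is too long for hybrid
```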

When you enable IPv6, the VPC is automatically assigned an IPv6 CIDR block. To use IPv6 for containers, choose Terway.

To use a public IP range for your VPC CIDR block, request the ack.white_list/supportVPCWithPublicIPRanges quota in the Quota Center.

vSwitch

vSwitches host ECS instances and handle node-to-node traffic. The vSwitch CIDR block must be a subset of the VPC CIDR block.

  • ECS instances in the vSwitch get IPs from this CIDR block. Size the vSwitch to have enough addresses for all nodes.

  • Multiple vSwitches in one VPC must not have overlapping CIDR blocks.

  • Each pod vSwitch must be in the same zone as its corresponding node vSwitch.

Pod vSwitch

The pod vSwitch assigns IPs to pods and handles pod traffic. Its CIDR block must be a subset of the VPC CIDR block.

  • Size the pod vSwitch based on the maximum number of pods you expect to run, plus buffer for upgrades.

  • The pod vSwitch CIDR block must not overlap with the Service CIDR block.

Service CIDR block

Important

The Service CIDR block cannot be modified after cluster creation.

The Service CIDR block defines the IP range for ClusterIP services. Each service gets one IP.

  • Service IPs are only reachable within the cluster—not from outside.

  • The Service CIDR block must not overlap with the vSwitch CIDR block or the pod vSwitch CIDR block.

Service IPv6 CIDR block (when IPv6 dual-stack is enabled)

  • Use a Unique Local Address (ULA) in the fc00::/7 range. The prefix length must be between /112 and /120.

  • Choose a prefix length so that the IPv6 block provides the same number of usable addresses as the IPv4 Service CIDR block.
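These constraints can be checked with Python's standard ipaddress module. The helper name below is hypothetical, and the address-count rule is interpreted as matching raw block sizes: an IPv4 /20 (4,096 addresses) pairs with an IPv6 /116, since 128 − (32 − 20) = 116:

```python
import ipaddress

def check_service_ipv6(ipv6_cidr: str, ipv4_cidr: str) -> None:
    """Validate a Service IPv6 CIDR block (hypothetical helper).

    Checks: ULA range (fc00::/7), prefix length between /112 and /120,
    and the same address count as the IPv4 Service CIDR block.
    """
    v6 = ipaddress.ip_network(ipv6_cidr)
    v4 = ipaddress.ip_network(ipv4_cidr)
    assert v6.subnet_of(ipaddress.ip_network("fc00::/7")), "not a ULA block"
    assert 112 <= v6.prefixlen <= 120, "prefix must be between /112 and /120"
    assert v6.num_addresses == v4.num_addresses, "address counts differ"

# IPv4 /20 -> 4,096 addresses -> IPv6 /116 (illustrative values).
check_service_ipv6("fd00::/116", ipv4_cidr="172.21.0.0/20")
```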

Flannel network mode

In Flannel mode, pod IPs come from a virtual container CIDR block—not tied to any vSwitch. Pod packets route through the VPC, and ACK automatically adds routes to each pod CIDR block in the VPC route table.


Configuration example

| VPC CIDR block | vSwitch CIDR block | Container CIDR block | Service CIDR block | Maximum assignable pod IPs |
| --- | --- | --- | --- | --- |
| 192.168.0.0/16 | 192.168.0.0/24 | 172.20.0.0/16 | 172.21.0.0/20 | 65,536 |

VPC

Use one of the following RFC-standard private CIDR blocks—or a subnet—as your VPC's primary IPv4 CIDR block: 192.168.0.0/16, 172.16.0.0/12, or 10.0.0.0/8. Valid mask lengths range from /8 to /28 (varies by block). For multi-VPC or hybrid cloud deployments, use a mask of /16 or shorter and ensure no overlaps between VPCs or between a VPC and your on-premises data center.

To use a public IP range for your VPC CIDR block, request the ack.white_list/supportVPCWithPublicIPRanges quota in the Quota Center.

vSwitch

  • ECS instances in the vSwitch get IPs from this CIDR block. Size the vSwitch to accommodate all nodes.

  • Multiple vSwitches in one VPC must not have overlapping CIDR blocks.

Container CIDR block

Important

The container CIDR block cannot be modified after cluster creation.

This virtual CIDR block assigns pod IPs across the cluster.

  • It is not tied to any vSwitch.

  • It must not overlap with the vSwitch CIDR block or the Service CIDR block.

For example, if your VPC CIDR block is 172.16.0.0/12, do not use 172.16.0.0/16 or 172.17.0.0/16 for the container CIDR block—both fall within 172.16.0.0/12.
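The conflict in this example is easy to confirm with Python's standard ipaddress module:

```python
import ipaddress

vpc = ipaddress.ip_network("172.16.0.0/12")  # covers 172.16.0.0-172.31.255.255

# Both candidate container CIDR blocks fall entirely inside the VPC
# block, so neither is a valid choice here.
for candidate in ("172.16.0.0/16", "172.17.0.0/16"):
    assert ipaddress.ip_network(candidate).subnet_of(vpc)

# A block outside 172.16.0.0/12 does not conflict:
assert not ipaddress.ip_network("10.0.0.0/16").overlaps(vpc)
```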

Service CIDR block

Important

The Service CIDR block cannot be modified after cluster creation.

The Service CIDR block defines the IP range for ClusterIP services.

  • Service IPs are only reachable within the cluster.

  • The Service CIDR block must not overlap with the vSwitch CIDR block or the container CIDR block.

What's next