
Alibaba Cloud Service Mesh: Plan CIDR blocks for multiple clusters on the data plane

Last Updated: Mar 11, 2026

Service Mesh (ASM) allows you to manage multiple clusters on the data plane. When you add a cluster, ASM validates that its pod CIDR blocks, service CIDR blocks, and vSwitch CIDR blocks do not conflict with those of existing clusters. The cluster can be added only if no CIDR block conflict occurs. This ensures normal communication among clusters on the data plane.

This topic describes how to plan the virtual private cloud (VPC) CIDR blocks, vSwitch CIDR blocks, pod CIDR blocks, and service CIDR blocks for multiple clusters when the clusters use the Flannel or Terway network plug-in.

Flannel vs. Terway: CIDR conflict rules at a glance

The network plug-in determines which CIDR block types must not overlap across clusters. Terway allocates pod IPs from vSwitch CIDR blocks, so it has fewer conflict constraints than Flannel, which maintains separate pod CIDR blocks.

| Constraint | Flannel | Terway |
| --- | --- | --- |
| Service CIDR blocks must not overlap across clusters | Yes | Yes |
| Service CIDR blocks must not overlap with VPC CIDR blocks of the ASM instance | -- | Yes |
| Pod CIDR blocks must not overlap with the pod, service, or vSwitch CIDR blocks of other clusters | Yes | -- (no separate pod CIDR) |
| vSwitch CIDR blocks must not overlap with the vSwitch, pod, or service CIDR blocks of other clusters | Yes | -- |
| VPC CIDR blocks must not overlap with those of the ASM instance (cross-VPC only) | Yes | Yes |
| CIDR blocks starting with 7 are reserved (ACK managed clusters) | Yes | -- |

Clusters that use Flannel

CIDR conflict rules

The VPC, vSwitch, pod, and service CIDR blocks must all be free of cross-cluster overlaps:

  • Service CIDR blocks must not overlap with each other, or with the pod CIDR blocks and vSwitch CIDR blocks of another cluster.

  • Pod CIDR blocks must not overlap with each other, or with the service CIDR blocks and vSwitch CIDR blocks of another cluster.

  • vSwitch CIDR blocks must not overlap with each other, or with the service CIDR blocks and pod CIDR blocks of another cluster.

  • Do not use CIDR blocks that start with 7. This range is reserved for Container Service for Kubernetes (ACK) managed clusters.

  • If a cluster is in a different VPC than the ASM instance, the VPC CIDR blocks must not overlap.
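The overlap rules above can be checked programmatically with Python's standard `ipaddress` module. The sketch below is illustrative, not an ASM API: the cluster names and CIDR values are hypothetical placeholders for your own plan.

```python
import ipaddress
from itertools import combinations

# Hypothetical per-cluster CIDR plan; replace with your real values.
clusters = {
    "cluster-1": {"vswitch": "192.168.0.0/24", "pod": "10.0.0.0/16", "service": "172.16.0.0/16"},
    "cluster-2": {"vswitch": "192.168.1.0/24", "pod": "10.1.0.0/16", "service": "172.17.0.0/16"},
}

def flannel_conflicts(clusters):
    """Return (cluster_a, cluster_b, kind_a, kind_b) tuples for every violation.

    For Flannel, the vSwitch, pod, and service CIDR blocks of one cluster
    must not overlap with any of those blocks in another cluster.
    """
    conflicts = []
    for (name_a, a), (name_b, b) in combinations(clusters.items(), 2):
        for kind_a, cidr_a in a.items():
            for kind_b, cidr_b in b.items():
                if ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b)):
                    conflicts.append((name_a, name_b, kind_a, kind_b))
    return conflicts

def uses_reserved_prefix(clusters):
    """Flag pod CIDR blocks inside 7.0.0.0/8, which is reserved for ACK managed clusters."""
    reserved = ipaddress.ip_network("7.0.0.0/8")
    return [name for name, c in clusters.items()
            if ipaddress.ip_network(c["pod"]).overlaps(reserved)]

print(flannel_conflicts(clusters))    # [] -> the plan above is conflict-free
print(uses_reserved_prefix(clusters)) # []
```

Running this before you add a cluster catches the same conflicts that would otherwise cause ASM to reject the cluster during validation.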

Recommended CIDR blocks

| CIDR block type | Recommended range | Capacity |
| --- | --- | --- |
| VPC | 20.0.0.0/8 through 255.0.0.0/8 | Up to 236 VPCs |
| vSwitch | 20.0.0.0/16 through 20.255.0.0/16 | Up to 256 vSwitches per VPC |
| Pod | 10.0.0.0/16 through 10.255.0.0/16 | Up to 65,532 pods per cluster |
| Service | 172.16.0.0/16 through 172.31.0.0/16 | Up to 65,532 services per cluster |
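Carving one distinct /16 pod block and one distinct /16 service block per cluster from the recommended ranges can be done mechanically. The helper below is a sketch of that allocation scheme, not an ASM tool; the function name is made up for illustration.

```python
import ipaddress

def plan_flannel_cidrs(num_clusters):
    """Assign each cluster a distinct /16 pod block from 10.0.0.0/8 and a
    distinct /16 service block from 172.16.0.0/12, matching the recommended
    ranges. Illustrative sketch only."""
    pod_blocks = list(ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=16))
    service_blocks = list(ipaddress.ip_network("172.16.0.0/12").subnets(new_prefix=16))
    if num_clusters > min(len(pod_blocks), len(service_blocks)):
        raise ValueError("not enough /16 blocks in the recommended ranges")
    return [{"pod": str(pod_blocks[i]), "service": str(service_blocks[i])}
            for i in range(num_clusters)]

for i, plan in enumerate(plan_flannel_cidrs(3), start=1):
    print(f"Cluster {i}: pod={plan['pod']} service={plan['service']}")
# Cluster 1: pod=10.0.0.0/16 service=172.16.0.0/16
# Cluster 2: pod=10.1.0.0/16 service=172.17.0.0/16
# Cluster 3: pod=10.2.0.0/16 service=172.18.0.0/16
```

Note that 172.16.0.0/12 contains exactly the sixteen /16 blocks 172.16.0.0/16 through 172.31.0.0/16, which is why the recommended service range supports at most 16 clusters without reuse.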

Examples

Example 1: ASM instance and clusters in the same VPC

| Object | VPC | vSwitch | Pod | Service |
| --- | --- | --- | --- | --- |
| ASM instance | 192.168.0.0/16 | 192.168.0.0/24 | / | / |
| Cluster 1 | 192.168.0.0/16 | 192.168.0.0/24 | 10.0.0.0/16 | 172.16.0.0/16 |
| Cluster 2 | 192.168.0.0/16 | 192.168.0.0/24 | 10.1.0.0/16 | 172.17.0.0/16 |
| Cluster 3 | 192.168.0.0/16 | 192.168.0.0/24 | 10.2.0.0/16 | 172.18.0.0/16 |

Example 2: Clusters in the same VPC, ASM instance in a different VPC

Note

Connect the VPCs of the clusters and the ASM instance through Cloud Enterprise Network (CEN) before you add the clusters. For details, see the "Step 2: Use CEN to implement cross-region VPC communication" section of Use ASM to implement cross-region disaster recovery and load balancing.

| Object | VPC | vSwitch | Pod | Service |
| --- | --- | --- | --- | --- |
| ASM instance | 192.168.0.0/16 | 192.168.0.0/24 | / | / |
| Cluster 1 | 20.0.0.0/8 | 20.0.0.0/16 | 10.0.0.0/16 | 172.16.0.0/16 |
| Cluster 2 | 20.0.0.0/8 | 20.0.0.0/16 | 10.1.0.0/16 | 172.17.0.0/16 |
| Cluster 3 | 20.0.0.0/8 | 20.0.0.0/16 | 10.2.0.0/16 | 172.18.0.0/16 |

Example 3: Clusters in different VPCs, one cluster shares a VPC with the ASM instance

Note

Connect all VPCs -- among the clusters and between the clusters and the ASM instance -- through CEN before you add the clusters. For details, see the "Step 2: Use CEN to implement cross-region VPC communication" section of Use ASM to implement cross-region disaster recovery and load balancing.

| Object | VPC | vSwitch | Pod | Service |
| --- | --- | --- | --- | --- |
| ASM instance | 192.168.0.0/16 | 192.168.0.0/24 | / | / |
| Cluster 1 | 192.168.0.0/16 | 192.168.0.0/24 | 10.0.0.0/16 | 172.16.0.0/16 |
| Cluster 2 | 21.0.0.0/8 | 21.0.0.0/16 | 10.1.0.0/16 | 172.17.0.0/16 |
| Cluster 3 | 22.0.0.0/8 | 22.0.0.0/16 | 10.2.0.0/16 | 172.18.0.0/16 |

Example 4: ASM instance and clusters all in different VPCs

Note

Connect all VPCs -- among the clusters and between the clusters and the ASM instance -- through CEN before you add the clusters. For details, see the "Step 2: Use CEN to implement cross-region VPC communication" section of Use ASM to implement cross-region disaster recovery and load balancing.

| Object | VPC | vSwitch | Pod | Service |
| --- | --- | --- | --- | --- |
| ASM instance | 192.168.0.0/16 | 192.168.0.0/24 | / | / |
| Cluster 1 | 20.0.0.0/8 | 20.0.0.0/16 | 10.0.0.0/16 | 172.16.0.0/16 |
| Cluster 2 | 21.0.0.0/8 | 21.0.0.0/16 | 10.1.0.0/16 | 172.17.0.0/16 |
| Cluster 3 | 22.0.0.0/8 | 22.0.0.0/16 | 10.2.0.0/16 | 172.18.0.0/16 |

Clusters that use Terway

CIDR conflict rules

Terway assigns pod IPs from vSwitch CIDR blocks, so the conflict rules are simpler:

  • Service CIDR blocks of one cluster must not overlap with those of another cluster.

  • Service CIDR blocks must not overlap with the VPC CIDR blocks of the ASM instance.

  • VPC CIDR blocks of clusters must not overlap with those of the ASM instance.
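These three rules can also be validated with the standard `ipaddress` module. The sketch below uses hypothetical cluster names and CIDR values; the same-VPC case is skipped in the third check because a cluster that shares the ASM instance's VPC necessarily has an identical (not conflicting) VPC CIDR block.

```python
import ipaddress
from itertools import combinations

# Illustrative values only; replace with your real CIDR plan.
asm_vpc = ipaddress.ip_network("192.168.0.0/16")
clusters = {
    "cluster-1": {"vpc": "20.0.0.0/8", "service": "172.16.0.0/16"},
    "cluster-2": {"vpc": "20.0.0.0/8", "service": "172.17.0.0/16"},
}

def terway_violations(asm_vpc, clusters):
    """Check the three Terway rules; return human-readable violations."""
    problems = []
    # Rule 1: service CIDR blocks must be disjoint across clusters.
    for (na, a), (nb, b) in combinations(clusters.items(), 2):
        if ipaddress.ip_network(a["service"]).overlaps(ipaddress.ip_network(b["service"])):
            problems.append(f"service CIDRs of {na} and {nb} overlap")
    for name, c in clusters.items():
        vpc = ipaddress.ip_network(c["vpc"])
        # Rule 2: service CIDR must not overlap the ASM instance's VPC.
        if ipaddress.ip_network(c["service"]).overlaps(asm_vpc):
            problems.append(f"service CIDR of {name} overlaps the ASM VPC")
        # Rule 3: a cluster in a different VPC must not overlap the ASM VPC.
        if vpc != asm_vpc and vpc.overlaps(asm_vpc):
            problems.append(f"VPC of {name} overlaps the ASM VPC")
    return problems

print(terway_violations(asm_vpc, clusters))  # []
```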

Recommended CIDR blocks

| CIDR block type | Recommended range | Capacity |
| --- | --- | --- |
| VPC | 20.0.0.0/8 through 255.0.0.0/8 | Up to 236 VPCs |
| vSwitch | 20.0.0.0/16 through 20.255.0.0/16 | Up to 256 vSwitches per VPC |
| Pod | 10.0.0.0/16 through 10.255.0.0/16 | Up to 65,532 pods per cluster |
| Service | 172.16.0.0/16 through 172.31.0.0/16 | Up to 65,532 services per cluster |

Examples

Because Terway uses vSwitch CIDR blocks for pod IP allocation, the example tables below do not include a Pod column. Each cluster requires a distinct vSwitch CIDR block.

Example 1: ASM instance and clusters in the same VPC

| Object | VPC | vSwitch | Service |
| --- | --- | --- | --- |
| ASM instance | 192.168.0.0/16 | 192.168.0.0/24 | / |
| Cluster 1 | 192.168.0.0/16 | 192.168.1.0/24 | 172.16.0.0/16 |
| Cluster 2 | 192.168.0.0/16 | 192.168.2.0/24 | 172.17.0.0/16 |
| Cluster 3 | 192.168.0.0/16 | 192.168.3.0/24 | 172.18.0.0/16 |
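The distinct vSwitch blocks in this example can be carved mechanically from the shared VPC CIDR block. A minimal sketch using the standard `ipaddress` module, with the values taken from the table above:

```python
import ipaddress

# Shared VPC from the example; each cluster needs its own /24 vSwitch block
# because Terway allocates pod IPs directly from vSwitch CIDR blocks.
vpc = ipaddress.ip_network("192.168.0.0/16")
vswitches = list(vpc.subnets(new_prefix=24))[:4]  # first four /24 blocks

for owner, block in zip(["ASM instance", "Cluster 1", "Cluster 2", "Cluster 3"], vswitches):
    print(f"{owner}: {block}")
# ASM instance: 192.168.0.0/24
# Cluster 1: 192.168.1.0/24
# Cluster 2: 192.168.2.0/24
# Cluster 3: 192.168.3.0/24
```

A /16 VPC yields 256 such /24 blocks, so this scheme also explains the "up to 256 vSwitches per VPC" capacity in the recommended ranges.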

Example 2: Clusters in the same VPC, ASM instance in a different VPC

Note

Connect the VPCs of the clusters and the ASM instance through CEN before you add the clusters. For details, see the "Step 2: Use CEN to implement cross-region VPC communication" section of Use ASM to implement cross-region disaster recovery and load balancing.

| Object | VPC | vSwitch | Service |
| --- | --- | --- | --- |
| ASM instance | 192.168.0.0/16 | 192.168.0.0/24 | / |
| Cluster 1 | 20.0.0.0/8 | 20.0.0.0/16 | 172.16.0.0/16 |
| Cluster 2 | 20.0.0.0/8 | 20.1.0.0/16 | 172.17.0.0/16 |
| Cluster 3 | 20.0.0.0/8 | 20.2.0.0/16 | 172.18.0.0/16 |

Example 3: Clusters in different VPCs, one cluster shares a VPC with the ASM instance

Note

Connect all VPCs -- among the clusters and between the clusters and the ASM instance -- through CEN before you add the clusters. For details, see the "Step 2: Use CEN to implement cross-region VPC communication" section of Use ASM to implement cross-region disaster recovery and load balancing.

| Object | VPC | vSwitch | Service |
| --- | --- | --- | --- |
| ASM instance | 20.0.0.0/8 | 20.0.0.0/16 | / |
| Cluster 1 | 20.0.0.0/8 | 20.1.0.0/16 | 172.16.0.0/16 |
| Cluster 2 | 21.0.0.0/8 | 21.0.0.0/16 | 172.17.0.0/16 |
| Cluster 3 | 22.0.0.0/8 | 22.0.0.0/16 | 172.18.0.0/16 |

Example 4: ASM instance and clusters all in different VPCs

Note

Connect all VPCs -- among the clusters and between the clusters and the ASM instance -- through CEN before you add the clusters. For details, see the "Step 2: Use CEN to implement cross-region VPC communication" section of Use ASM to implement cross-region disaster recovery and load balancing.

| Object | VPC | vSwitch | Service |
| --- | --- | --- | --- |
| ASM instance | 192.168.0.0/16 | 192.168.0.0/24 | / |
| Cluster 1 | 20.0.0.0/8 | 20.0.0.0/16 | 172.16.0.0/16 |
| Cluster 2 | 21.0.0.0/8 | 21.0.0.0/16 | 172.17.0.0/16 |
| Cluster 3 | 22.0.0.0/8 | 22.0.0.0/16 | 172.18.0.0/16 |