
Container Service for Kubernetes: Use the Terway Hybrid network plugin

Last Updated: Oct 29, 2025

Adding on-premises data center nodes to a cluster through a hybrid cloud node pool introduces complex network topologies and cross-domain routing requirements. These requirements often exceed the capabilities of standard container network plugins. The Terway Hybrid network plugin is specifically designed for such scenarios, ensuring seamless network connectivity between all pods in your cluster, whether they are running on-premises or in the cloud.

How it works

The following figure shows the network architecture of a Container Service for Kubernetes (ACK) hybrid cloud node pool. The architecture consists of two primary network domains: the Alibaba Cloud central virtual private cloud (VPC) and your on-premises data center (or other third-party cloud environments). These domains are connected via a dedicated connection (such as Express Connect) to establish Layer 3 private network connectivity. The ACK cluster in the central VPC can use a standard network plugin, such as Flannel or Terway, while the nodes in the on-premises data center must use the Terway Hybrid plugin.

[Figure: network architecture of an ACK hybrid cloud node pool]

Choose a Terway Hybrid mode

Terway Hybrid offers two modes at the node pool level. Configure the mode when you create a hybrid cloud node pool.

Comparison

  • Underlay mode

    • Advantages: High performance. There is no VXLAN encapsulation overhead, resulting in lower network latency. Performance is about 20% higher than in Overlay mode.

    • Network requirements: Requires Layer 2 network connectivity between nodes. If nodes are in different Layer 3 network domains, you must configure Border Gateway Protocol (BGP) to advertise container routes.

    • Container network path: Container communication packets are transmitted directly through the host network interface card (NIC) at Layer 3.

  • Overlay mode

    • Advantages: Simple configuration. There are no special requirements for the underlying network, offering greater deployment flexibility.

    • Network requirements: Only requires Layer 3 private network connectivity between nodes.

    • Container network path: Container network packets are encapsulated using VXLAN and transmitted over UDP on port 8472 via the host network.
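
For example, on a node in Overlay mode you can confirm that a VXLAN device exists and that UDP port 8472 is open between nodes. This is a minimal sketch using standard Linux tooling; the VXLAN interface name created by the plugin is not specified in this topic and may differ in your environment.

    # Overlay mode: list VXLAN interfaces on the node.
    # The interface name created by the plugin may vary.
    ip -d link show type vxlan

    # Overlay mode: confirm that the kernel VXLAN socket is bound to UDP port 8472
    # and make sure that your firewall allows this port between nodes.
    ss -lun | grep 8472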

Prerequisites

  • You have an ACK managed Pro cluster running Kubernetes 1.33 or later.

  • If your cluster uses the Flannel network plugin, its version must be 0.15.1.23 or later.

  • You have Elastic Compute Service (ECS) nodes within the cluster to deploy ACK management components for the hybrid cloud node pool. To ensure high availability (HA), use at least three ECS nodes.
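
For example, you can check the cluster version and, if applicable, the Flannel version before you start. This is a minimal sketch; the Flannel DaemonSet name used below is an assumption and may differ in your cluster.

    # Check the Kubernetes version of the cluster.
    kubectl version

    # If the cluster uses Flannel, check the plugin image version.
    # The DaemonSet name (kube-flannel-ds) is an assumption; adjust it to your cluster.
    kubectl -n kube-system get daemonset kube-flannel-ds \
      -o jsonpath='{.spec.template.spec.containers[0].image}'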

Procedure

Step 1: Establish cross-cloud network connectivity

Before installing the Terway Hybrid plugin, ensure that the network path between your cloud and on-premises environments is fully connected. A basic connectivity check is sketched after the following list.

  • We recommend using Express Connect to establish a dedicated connection and enable Layer 3 connectivity between your VPC and on-premises data center. Configure the corresponding route entries in your VPC route tables and on the Express Connect router (ECR).

  • If your on-premises services need to access Alibaba Cloud services, add a route for the 100.64.0.0/10 Alibaba Cloud service CIDR block to your data center's core switch. For details, see Configure PrivateLink for an Express Connect circuit.

  • If cloud resources need to access pods in your data center, the core switch in your data center must support BGP. For configuration details, see Step 3: Expose pods to external networks.
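
The following sketch can serve as that check. It is an illustration only; the ECS node IP is a placeholder, and ICMP may be filtered on your dedicated connection.

    # From an on-premises node, verify Layer 3 reachability to an ECS node in the VPC.
    # Replace the address below with an actual ECS node IP from your cluster.
    ping -c 3 <ecs-node-ip>

    # If on-premises services need to access Alibaba Cloud services, check which
    # next hop is used for the 100.64.0.0/10 service CIDR block.
    ip route get 100.64.0.1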

Step 2: Install the plugin

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, find the one you want to manage and click its name. In the left navigation pane, click Add-ons.

  3. Install the terway-hybrid-controlplane component.

    • Hybrid Pod CIDR Block: The pod CIDR block for the hybrid cloud node pool. Must not overlap with the existing Service CIDR, pod CIDR, cloud node CIDR, or on-premises node CIDR blocks.

    • Per-Node Pod CIDR Block Mask Size: The mask size for the CIDR block allocated to each node in the hybrid cloud node pool. For example, a value of 24 assigns a /24 CIDR block (for example, xxx.xxx.xxx.0/24) to each node, providing 256 pod IPs.

    • Source NAT: Specifies whether to translate the source IP address of pods to the node's IP address when accessing services outside the cluster. If you disable this feature, you must ensure that external network devices can route traffic to your pod IPs. See Step 3: Expose pods to external networks.
  4. Install the terway-hybrid data plane component. No parameters are required. Any new nodes added to the hybrid cloud node pool will automatically have the Terway Hybrid plugin installed. You can verify the installation as shown below.
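
After both components are installed, you can confirm that the data plane pods are running and that each hybrid node has been assigned a pod CIDR block. This is a sketch; the pod name prefix of the add-on and the field that stores the per-node allocation may differ from what is shown here.

    # Check that the terway-hybrid data plane pods are running on the hybrid nodes.
    kubectl -n kube-system get pods -o wide | grep terway-hybrid

    # Confirm that each node has a pod CIDR block. The .spec.podCIDR field is used
    # here as an example; the plugin may record the allocation elsewhere.
    kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR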

Step 3: Expose pods to external networks

If network devices (such as in your central VPC or outside the cluster) need to communicate with pods in your on-premises data center, configure BGP to dynamically advertise pod routes to your network switches.

[Figure: advertising pod routes to on-premises switches over BGP]

Configure BGP in the cluster

Terway Hybrid uses a CustomResourceDefinition (CRD) named BGPClusterConfig to store BGP settings and associate them with a specific hybrid cloud node pool using a nodeSelector.

By default, Terway Hybrid does not create this resource. Create a BGPClusterConfig for each hybrid cloud node pool that requires BGP.
  1. Create a YAML file named bgpclusterconfig.yaml with the following content.

    apiVersion: network.alibabacloud.com/v1beta1
    kind: BGPClusterConfig
    metadata:
      name: bgp
    spec:
      localASN: 65010
      nodeSelector:
        matchLabels:
          alibabacloud.com/nodepool-id: "np-xxx"
      bgpSpeakers:
      - name: hybrid-node-1
        peers:
        - name: switch-1
          peerASN: 65001
          peerAddress: "10.10.0.1"
      - name: hybrid-node-2
        peers:
        - name: switch-1
          peerASN: 65001
          peerAddress: "10.10.0.1"
          # Optional: BGP session authentication
          authPassword:
            secretKeyRef:
              name: bgp-secret
              key: password
       
    ---
    # Optional: Secret for BGP authentication password
    apiVersion: v1
    kind: Secret
    metadata:
      name: bgp-secret
      namespace: kube-system
    type: Opaque
    data:
      password: bXxXXXxXXXXXx==  # Base64-encoded password.

    • metadata.name (Required): The name of the BGPClusterConfig resource in the cluster.

    • spec.localASN (Required): The Autonomous System Number (ASN) that identifies the network where the BGP speakers are located. We recommend using a private ASN in the range of 64512 to 65535.

    • spec.nodeSelector (Required): Specifies which nodes this BGP configuration applies to. We recommend using a label that selects all nodes in a hybrid cloud node pool.

    • spec.bgpSpeakers (Required): A list of nodes, selected from the pool defined by nodeSelector, that act as BGP speakers. These nodes are responsible for advertising pod routes. Select at least two nodes to avoid a single point of failure (SPOF).

    • spec.bgpSpeakers.name (Required): The name of the BGP speaker node. This name must match the name of a node selected by spec.nodeSelector.

    • spec.bgpSpeakers.peers (Required): A list of devices that establish BGP peering sessions with this BGP speaker. These are typically access-layer switches.

    • spec.bgpSpeakers.peers.name (Required): The name of the BGP peer device.

    • spec.bgpSpeakers.peers.peerASN (Required): The ASN of the BGP peer device.

    • spec.bgpSpeakers.peers.peerAddress (Required): The IP address of the BGP peer device.

    • spec.bgpSpeakers.peers.authPassword (Optional): Configures a password for BGP session authentication by referencing a Kubernetes Secret. The Secret must exist in the kube-system namespace.

  2. Create the BGPClusterConfig resource.

    kubectl apply -f bgpclusterconfig.yaml
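
If you enabled BGP session authentication, you can create the referenced Secret with kubectl instead of encoding the password manually, and then confirm that the resource exists. This is a sketch; the plural resource name used below (bgpclusterconfigs) is an assumption, so verify it with kubectl api-resources first.

    # Optional: create the Secret referenced by authPassword.
    # kubectl handles the Base64 encoding for you.
    kubectl -n kube-system create secret generic bgp-secret \
      --from-literal=password='your-bgp-password'

    # Confirm that the BGPClusterConfig resource was created.
    # The plural resource name is assumed; check it with: kubectl api-resources | grep -i bgp
    kubectl get bgpclusterconfigs.network.alibabacloud.com bgp -o yaml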

Configure external network devices

On your on-premises network devices, enable BGP and configure the nodes that you selected as BGP speakers in Configure BGP in the cluster as BGP peers. Then, add routes that point to your on-premises pod CIDR block to your central VPC route table, dedicated connection gateway, and data center core switch.

After the configuration is complete, verify that the BGP peering sessions with the BGP speaker nodes in your cluster are successfully established.

Terway Hybrid enables BGP graceful restart with a default duration of 600 seconds. Configure BGP graceful restart on your switches accordingly.
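
The exact switch-side configuration depends on your vendor. As a rough illustration only, the following FRRouting-style snippet peers with two hypothetical BGP speaker node addresses (10.10.0.11 and 10.10.0.12), reuses the ASNs from the example above, and aligns the graceful restart timer with the plugin's 600-second default. Adapt it to your device's syntax.

    router bgp 65001
     bgp graceful-restart
     bgp graceful-restart restart-time 600
     ! The speaker node addresses below are hypothetical; use your node IPs.
     neighbor 10.10.0.11 remote-as 65010
     neighbor 10.10.0.11 password your-bgp-password
     neighbor 10.10.0.12 remote-as 65010
     neighbor 10.10.0.12 password your-bgp-password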