The Terway plugin in a hybrid cluster has two components: one running in your on-premises data center and one on cloud compute nodes. This topic explains how to deploy and configure the Terway plugin on the cloud nodes of an ACK One registered cluster.
Prerequisites
Before you begin, ensure that you have:
- Configured the container network plugins for both cloud and on-premises nodes. For details, see Configure container network plugins for cloud and on-premises nodes.
- For Scenario 2: BGP network and Scenario 3: Host network, configured the following Terway network parameters when creating the registered cluster. For details, see Create an ACK One registered cluster.
  - Selected or cleared the IPvlan checkbox as needed.
  - Configured the pod vSwitch.
  - Configured the Service CIDR block.
Choose a scenario
The setup steps depend on how your on-premises container network is configured. Identify the scenario that matches your environment.
| Scenario | On-premises network type | Cloud nodes requirement |
|---|---|---|
| Scenario 1: Overlay network | Flannel VXLAN, Calico IPIP, or Cilium VXLAN | No additional setup — cloud nodes can use the same mode |
| Scenario 2: BGP network | Border Gateway Protocol (BGP) | Must use Terway. Follow the install steps below. |
| Scenario 3: Host network | Host network | Must use Terway. Follow the install steps below. |
Scenario 1: Overlay network
If your on-premises container network uses an overlay network, cloud compute nodes can use the same mode. Make sure cloud compute nodes can pull the container images required by the container network plugin DaemonSet.
Common overlay network modes include:
- Flannel VXLAN
- Calico IPIP
- Cilium VXLAN
No additional installation is required for this scenario.
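One way to confirm which images cloud nodes must be able to pull is to read them off the CNI DaemonSet spec. The sketch below assumes kubectl access and a Flannel DaemonSet named `kube-flannel-ds`; the DaemonSet name and namespace vary by plugin, so substitute your own.

```shell
# List the container images referenced by the CNI DaemonSet's pod template.
# The DaemonSet name kube-flannel-ds is an assumption; adjust for your plugin.
kubectl -n kube-system get ds kube-flannel-ds \
  -o jsonpath='{range .spec.template.spec.containers[*]}{.image}{"\n"}{end}'
```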
Scenario 2: BGP network
If your on-premises container network uses BGP, cloud compute nodes must use the Terway network. To enable network communication between cloud and on-premises containers, see Configure BGP on a Virtual Border Router (VBR).
Cloud compute nodes added by scaling out a node pool are assigned the alibabacloud.com/external=true label. Terway is scheduled only to nodes with this label by default.
To prevent scheduling conflicts, make sure that:
- The on-premises CNI DaemonSet (for example, Calico in BGP route reflector mode) is not scheduled to cloud nodes.
- The Terway DaemonSet is not scheduled to on-premises nodes.
Prevent the on-premises CNI from running on cloud nodes
Use nodeAffinity to prevent the Calico DaemonSet from being scheduled to nodes with the alibabacloud.com/external=true label. Apply this method to any workload that must stay on-premises.
Patching the Calico DaemonSet causes it to restart on all affected nodes. If the affinity is misconfigured, pod networking on on-premises nodes may be disrupted. Verify the patch content before applying it.
Run the following command to update the Calico DaemonSet:
```shell
cat <<EOF > calico-ds.patch
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: alibabacloud.com/external
                operator: NotIn
                values:
                - "true"
EOF
kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"
```
After applying this patch, the Calico DaemonSet (calico-node in the kube-system namespace) runs only on on-premises nodes. The Terway DaemonSet continues to run only on cloud nodes.
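You can spot-check the resulting placement with the commands below, assuming kubectl access. `k8s-app=calico-node` is Calico's conventional pod label; verify it against your DaemonSet before relying on it.

```shell
# Calico pods should be scheduled only to on-premises nodes after the patch.
kubectl -n kube-system get pods -o wide -l k8s-app=calico-node

# Cloud nodes carry this label; Terway pods should appear only on these nodes.
kubectl get nodes -l alibabacloud.com/external=true
```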
After confirming the scheduling constraints are correct, proceed to install and configure the Terway plugin.
Scenario 3: Host network
If your on-premises container network uses the host network, make sure the Terway DaemonSet is not scheduled to on-premises nodes. By default, Terway is scheduled only to cloud nodes with the alibabacloud.com/external=true label, so no additional scheduling configuration is needed.
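The default pinning works because the Terway DaemonSet's pod template selects the cloud-node label. Conceptually it is equivalent to a selector of the following form (illustrative fragment only, not the actual Terway manifest):

```yaml
# Illustrative only: a nodeSelector of this form keeps a DaemonSet
# on nodes that carry the cloud-node label.
spec:
  template:
    spec:
      nodeSelector:
        alibabacloud.com/external: "true"
```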
Proceed to install and configure the Terway plugin.
Install and configure the Terway plugin
Follow these steps if your environment matches Scenario 2 or Scenario 3. Each step provides two methods: the Container Service Management Console or the onectl CLI. Use whichever fits your workflow.
Tip: The onectl CLI is recommended for automation and scripted deployments. The console is better suited for one-time or exploratory setups.
Step 1: Configure RAM permissions
The Terway plugin needs Resource Access Management (RAM) permissions to manage elastic network interfaces (ENIs) on ECS instances.
Configure in the console
1. Create a RAM user and attach the following custom policy. For details, see Use RAM to grant access permissions to clusters and cloud resources.

   <details>
   <summary>View the custom policy</summary>

   ```json
   {
     "Version": "1",
     "Statement": [
       {
         "Action": [
           "ecs:CreateNetworkInterface",
           "ecs:DescribeNetworkInterfaces",
           "ecs:AttachNetworkInterface",
           "ecs:DetachNetworkInterface",
           "ecs:DeleteNetworkInterface",
           "ecs:DescribeInstanceAttribute",
           "ecs:AssignPrivateIpAddresses",
           "ecs:UnassignPrivateIpAddresses",
           "ecs:DescribeInstances",
           "ecs:ModifyNetworkInterfaceAttribute"
         ],
         "Resource": [
           "*"
         ],
         "Effect": "Allow"
       },
       {
         "Action": [
           "vpc:DescribeVSwitches"
         ],
         "Resource": [
           "*"
         ],
         "Effect": "Allow"
       }
     ]
   }
   ```

   </details>

2. Log on to the Container Service Management Console. In the left navigation pane, click Clusters.

3. On the Clusters page, click the name of your cluster. In the left navigation pane, choose Configurations > Secrets.

4. On the Secrets page, click Create from YAML and enter the following content to create a secret named alibaba-addon-secret. The Terway plugin uses the AccessKey ID and AccessKey secret in this secret to access cloud services. Skip this step if the secret already exists.

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: alibaba-addon-secret
     namespace: kube-system
   type: Opaque
   stringData:
     access-key-id: <AccessKey ID of the RAM user>
     access-key-secret: <AccessKey secret of the RAM user>
   ```
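Note that the manifest uses `stringData`, so the AccessKey values are entered as plain text and Kubernetes base64-encodes them into `data` on the server side. If you write the `data` field directly instead, you must encode the values yourself, for example (placeholder value, not a real credential):

```shell
# stringData accepts plain text; the data field requires base64-encoded values.
# EXAMPLE_ACCESS_KEY_ID is a placeholder - never commit real credentials.
printf '%s' 'EXAMPLE_ACCESS_KEY_ID' | base64
```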
Configure using onectl
1. Install onectl on your on-premises machine. For details, see Use onectl to manage registered clusters.

2. Run the following command to grant RAM permissions to the Terway plugin:

   ```shell
   onectl ram-user grant --addon terway-eniip
   ```

   Expected output:

   ```
   Ram policy ack-one-registered-cluster-policy-terway-eniip granted to ram user ack-one-user-ce313528c3 successfully.
   ```
Step 2: Install the Terway plugin
Install from the console
1. Log on to the Container Service Management Console. In the left navigation pane, click Clusters.

2. On the Clusters page, click the name of your cluster. In the left navigation pane, click Add-ons.

3. On the Add-ons page, click the Network tab. In the terway-eniip section, click Install.
Install using onectl
Run the following command to install the Terway plugin:

```shell
onectl addon install terway-eniip
```

Expected output:

```
Addon terway-eniip, version **** installed.
```
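After installation, you can confirm that the Terway DaemonSet exists and that its pods run only on cloud nodes. This assumes kubectl access and that the DaemonSet is named after the add-on, terway-eniip; verify the name in your cluster.

```shell
# The DaemonSet should report one ready pod per labeled cloud node.
kubectl -n kube-system get ds terway-eniip

# Cross-check: pod placement should match the cloud-node list.
kubectl -n kube-system get pods -o wide | grep terway
```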
What's next
- To configure network connectivity between cloud and on-premises containers using BGP, see Configure BGP on a VBR.