Multi-cluster gateways provided by Distributed Cloud Container Platform for Kubernetes (ACK One) can manage north-south traffic in multi-cluster deployments based on MSE Ingresses. This allows you to use features such as active zone-redundancy, traffic load balancing, and header-based traffic routing. This topic describes how to use multi-cluster gateways to manage north-south traffic.
Prerequisites
A namespace is created on the ACK One Fleet instance. The namespace must be the same as the namespace of the applications deployed in the associated clusters.
Benefits of multi-cluster gateways
Ingresses are commonly used to manage the north-south traffic of Services deployed in Kubernetes clusters. However, an Ingress is scoped to a single cluster and cannot manage the traffic of multi-cluster applications. ACK One allows you to use MSE Ingresses as global Ingresses to centrally manage the north-south traffic of multi-cluster applications in a region. These Ingresses provide powerful traffic management capabilities that help you implement active zone-redundancy, traffic load balancing, and header-based traffic routing at low cost. In addition, multi-cluster gateways provide easy-to-use Ingress APIs that lower the barrier to entry for beginners.
Fees are charged when you use multi-cluster gateways. For more information about the billing of multi-cluster gateways, see Billing overview.
Step 1: Create a multi-cluster gateway on an ACK One Fleet instance
Create a multi-cluster gateway from an MseIngressConfig object on an ACK One Fleet instance and then connect associated clusters to the multi-cluster gateway. By default, the gateway is deployed across zones for high availability.
Obtain and record the vSwitch ID of the ACK One Fleet instance.
Use Alibaba Cloud CLI
Run the following command to query the vSwitch ID:
aliyun adcp DescribeHubClusterDetails --ClusterId <your_fleet_clusterid>
Record the vSwitch ID in the VSwitches field of the output.
Use the console
Log on to the ACK One console. In the left-side navigation pane, choose Fleet > Fleet Information.
On the Fleet Information page, click the Basic Information tab. In the Associated Resources section, find vSwitch and record the vSwitch ID.
Create a file named gateway.yaml and add the following content to the file.
Note: Replace ${vsw-id} with the vSwitch ID that you obtained. Replace ${cluster1} and ${cluster2} with the IDs of the associated clusters. Configure the security groups of ${cluster1} and ${cluster2} to open all ports and allow access from IP addresses in the CIDR block of the vSwitch.
apiVersion: mse.alibabacloud.com/v1alpha1
kind: MseIngressConfig
metadata:
  annotations:
    mse.alibabacloud.com/remote-clusters: ${cluster1},${cluster2}
  name: ackone-gateway-hongkong
spec:
  common:
    instance:
      replicas: 3
      spec: 2c4g
    network:
      vSwitches:
        - ${vsw-id}
  ingress:
    local:
      ingressClass: mse
  name: mse-ingress
Run the following command to create a multi-cluster gateway on an ACK One Fleet instance:
kubectl apply -f gateway.yaml
Run the following command to query the status of the gateway.
Make sure that the gateway is created and listens on Ingresses.
kubectl get mseingressconfig ackone-gateway-hongkong
Expected output:
NAME                      STATUS      AGE
ackone-gateway-hongkong   Listening   3m15s
The output indicates that the gateway is in the Listening state. This means that the cloud-native gateway is created and running, and listens on Ingresses whose IngressClass is mse. A gateway created from an MseIngressConfig goes through the following states:
Pending: The cloud-native gateway is being created. Creating the gateway takes about 3 minutes.
Running: The cloud-native gateway is created and running.
Listening: The cloud-native gateway is running and listens on Ingresses.
Failed: The cloud-native gateway is abnormal. You can check the message in the Status field to troubleshoot the issue.
Run the following command to check whether the associated clusters are connected to the multi-cluster gateway:
kubectl get mseingressconfig ackone-gateway-hongkong -ojsonpath="{.status.remoteClusters}"
Expected output:
[{"clusterId":"c7fb82****"},{"clusterId":"cd3007****"}]
The output lists the IDs of the associated clusters and contains no Failed information. This means that the associated clusters are connected to the multi-cluster gateway.
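If you check the connection status in a script, you can extract one cluster ID per line from the JSON status. The following sketch feeds in the sample output shown above for illustration; in practice, pipe in the output of the kubectl command instead.

```shell
# Extract clusterId values from the remoteClusters status JSON.
# The sample JSON mirrors the expected output above.
echo '[{"clusterId":"c7fb82****"},{"clusterId":"cd3007****"}]' \
  | grep -o '"clusterId":"[^"]*"' | cut -d'"' -f4
# Prints:
#   c7fb82****
#   cd3007****
```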
Step 2: Use GitOps to deploy a sample application
Use GitOps to deploy the sample application to the associated clusters. For more information, see Work with GitOps.
Create a GitOps application for each of the associated clusters. In this example, the web-demo-cluster1 and web-demo-cluster2 applications are created.
Source: Set Repository URL to https://github.com/AliyunContainerService/gitops-demo.git, set Revision to HEAD, and set Path to manifests/helm/web-demo.
Destination: Specify the associated clusters as DESTINATION and set namespace to web-demo.
Helm Values Files: The name of the environment variable is envName. Set its value to cluster1 for web-demo-cluster1 and to cluster2 for web-demo-cluster2.
The YAML content of the Deployment and Service is available in the manifests/helm/web-demo path of the gitops-demo repository.
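For orientation, the following is a minimal hypothetical sketch of such a Deployment and Service. The image, labels, and replica count are assumptions and do not reflect the actual content of the gitops-demo repository; only the Service name (service1), the namespace (web-demo), and the envName environment variable appear elsewhere in this topic.

```yaml
# Hypothetical sketch; not the actual manifests from the gitops-demo repository.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo           # assumed name
  namespace: web-demo
spec:
  replicas: 1              # assumed replica count
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: web-demo
          image: registry.example.com/web-demo:latest  # assumed image
          env:
            - name: envName
              value: cluster1   # set per cluster through Helm values
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1           # referenced by the Ingress examples below
  namespace: web-demo
spec:
  selector:
    app: web-demo
  ports:
    - port: 80
      targetPort: 80
```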
Step 3: Use MSE Ingresses to manage multi-cluster traffic
You can set the IngressClass of an Ingress to mse to create an MSE Ingress and then use different annotations to manage traffic. MSE Ingresses are compatible with the annotations of NGINX Ingresses, and also provide additional annotations for traffic governance capabilities that NGINX Ingresses do not support. For more information about the annotations supported by MSE Ingresses, see Annotations supported by MSE Ingress gateways. The following examples describe common scenarios of multi-cluster traffic management.
The Ingress objects and Service objects in the Deployment of the sample application must belong to the same namespace.
Example 1: Use load balancing to distribute traffic to all backend pods by default
Create an Ingress object on the ACK One Fleet instance to distribute traffic to the backend pods in the associated clusters. Traffic is distributed in proportion to the number of pods in each cluster. For example, if the ratio of pods in Cluster 1 to pods in Cluster 2 is 9:1, the traffic ratio is also 9:1. In this example, the ratio of pods in Cluster 1 to pods in Cluster 2 is 1:1. The following figure shows the topology.
Create a file named ingress-demo.yaml and copy the following content to the file.
In the YAML file of the following Ingress object, the path /svc1 below the domain name example.com exposes the backend Service service1.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-demo
  namespace: web-demo
spec:
  ingressClassName: mse
  rules:
    - host: example.com
      http:
        paths:
          - path: /svc1
            pathType: Exact
            backend:
              service:
                name: service1
                port:
                  number: 80
Run the following command to deploy the Ingress in the ACK One Fleet instance:
kubectl apply -f ingress-demo.yaml
Run the following command to query the public IP address of the multi-cluster gateway:
kubectl get ingress web-demo -nweb-demo -ojsonpath="{.status.loadBalancer}"
Run the following command to query the traffic routing information.
Replace XX.XX.XX.XX with the public IP address of the multi-cluster gateway that you obtained in the preceding step.
for i in {1..50}; do curl -H "host: example.com" XX.XX.XX.XX/svc1; sleep 1; done
Expected output:
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster2 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
The output indicates that traffic is distributed to both clusters.
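To tally how many responses each cluster served, you can pipe the loop's output through a quick count, for example by appending | grep -o 'cluster[0-9]' | sort | uniq -c to the curl loop. The sketch below uses three sample lines in place of live curl output.

```shell
# Count responses per cluster; the printf lines stand in for the curl output.
printf 'This is env cluster1 !\nThis is env cluster2 !\nThis is env cluster1 !\n' \
  | grep -o 'cluster[0-9]' | sort | uniq -c
# Prints (leading whitespace may vary):
#   2 cluster1
#   1 cluster2
```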
Example 2: Distribute traffic to only the specified cluster
Create an Ingress object on the ACK One Fleet instance to distribute traffic only to the backend pod of Cluster 1. The following figure shows the topology.
Create a file named ingress-demo-cluster-one.yaml and add the following content to the file.
Add the mse.ingress.kubernetes.io/service-subset and mse.ingress.kubernetes.io/subset-labels annotations to the YAML file of the Ingress object to use /service1 below the domain name example.com to expose the backend Service service1. For more information about the annotations supported by MSE Ingresses, see Annotations supported by MSE Ingress gateways.
mse.ingress.kubernetes.io/service-subset: the name of the subset of the Service. We recommend that you use a name related to the cluster.
mse.ingress.kubernetes.io/subset-labels: the ID of the associated cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    mse.ingress.kubernetes.io/service-subset: cluster-demo-1
    mse.ingress.kubernetes.io/subset-labels: |
      topology.istio.io/cluster ${cluster1-id}
  name: web-demo-cluster-one
  namespace: web-demo
spec:
  ingressClassName: mse
  rules:
    - host: example.com
      http:
        paths:
          - path: /service1
            pathType: Exact
            backend:
              service:
                name: service1
                port:
                  number: 80
Run the following command to deploy the Ingress on the ACK One Fleet instance:
kubectl apply -f ingress-demo-cluster-one.yaml
Run the following command to query the public IP address of the multi-cluster gateway:
kubectl get ingress web-demo-cluster-one -nweb-demo -ojsonpath="{.status.loadBalancer}"
Run the following command to query the traffic routing information.
Replace XX.XX.XX.XX with the public IP address of the multi-cluster gateway that you obtained in the preceding step.
for i in {1..50}; do curl -H "host: example.com" XX.XX.XX.XX/service1; sleep 1; done
Expected output:
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
...
The output indicates that all traffic is distributed to Cluster 1.
Example 3: Distribute traffic that matches the header to the specified cluster
To distribute traffic that matches a header to the backend pod of a specified cluster, first create an Ingress object on the ACK One Fleet instance as described in Example 1 or Example 2. Then, create the following Ingress object. The Ingress object in this example cannot be used separately. The following figure shows the topology.
When you configure header-based traffic scheduling, you must create one Ingress that carries the canary annotations and a header match policy, and another Ingress without the canary annotations. Both Ingresses must use the same host and path. An Ingress that uses header-based traffic scheduling cannot work on its own: the Ingress without the canary annotations routes the traffic that does not match the header to the Service in the other cluster.
Create a file named ingress-demo-header.yaml and add the following content to the file.
In the YAML file of the following Ingress object, use /service1 below the domain name example.com to expose the backend Service service1.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    mse.ingress.kubernetes.io/service-subset: cluster-demo-2
    mse.ingress.kubernetes.io/subset-labels: |
      topology.istio.io/cluster c15d48ca9d1fd43f9bbb89c56a474843c
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "stage"
    nginx.ingress.kubernetes.io/canary-by-header-value: "gray"
  name: web-demo-cluster-second
  namespace: web-demo
spec:
  ingressClassName: mse
  rules:
    - host: example.com
      http:
        paths:
          - path: /service1
            pathType: Exact
            backend:
              service:
                name: service1
                port:
                  number: 80
Run the following command to deploy the Ingress on the ACK One Fleet instance:
kubectl apply -f ingress-demo-header.yaml
Run the following command to query the public IP address of the multi-cluster gateway:
kubectl get ingress web-demo-cluster-second -nweb-demo -ojsonpath="{.status.loadBalancer}"
Run the following command to query the traffic routing information.
Replace XX.XX.XX.XX with the public IP address of the multi-cluster gateway that you obtained in the preceding step.
for i in {1..50}; do curl -H "host: example.com" -H "stage: gray" XX.XX.XX.XX/service1; sleep 1; done
Expected output:
This is env cluster2 ! Config file is
This is env cluster2 ! Config file is
This is env cluster2 ! Config file is
This is env cluster2 ! Config file is
This is env cluster2 ! Config file is
...
The output indicates that traffic with the stage: gray header is distributed to Cluster 2.
Example 4: Use cross-cluster disaster recovery for multi-cluster applications
Multi-cluster gateways provide the cross-cluster disaster recovery feature for multi-cluster applications. You can directly use this feature without configuration. For example, the preceding multi-cluster gateway manages the traffic of two associated clusters. If the Service in one of the clusters is down or deleted, traffic is automatically failed over to the other cluster. In Example 1, Example 2, and Example 3, when the Service in one of the clusters is down, traffic is failed over to the other cluster.
In the following section, Example 3 is used to demonstrate how disaster recovery works. Traffic that carries the stage: gray header is routed to Cluster 2. When the number of pods created by the Deployment in Cluster 2 is scaled to 0, traffic is automatically failed over to Cluster 1. The following figure shows the topology.
Run the following command to query the public IP address of the multi-cluster gateway:
kubectl get ingress web-demo-cluster-second -nweb-demo -ojsonpath="{.status.loadBalancer}"
Run the following command to query the traffic routing information.
Replace XX.XX.XX.XX with the public IP address of the multi-cluster gateway that you obtained in the preceding step.
for i in {1..50}; do curl -H "host: example.com" -H "stage: gray" XX.XX.XX.XX/service1; sleep 1; done
Expected output:
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
...
The output indicates that traffic is automatically failed over to Cluster 1.
Example 5: Distribute traffic based on weights
In Example 1, you can modify the pod ratio to adjust the proportion of traffic distributed to each cluster. This example demonstrates how to use annotations to distribute traffic to different clusters based on weights. You can use this method to perform canary releases. Create the following Ingress objects on the ACK One Fleet instance. The following figure shows the topology.
When you configure weight-based traffic scheduling, you must create one Ingress that carries the canary and canary-weight annotations, and another Ingress without the canary annotations. Both Ingresses must use the same host and path. An Ingress that uses weight-based traffic scheduling cannot work on its own: the Ingress without the canary annotations routes the remaining traffic to the Service in the other cluster.
Create a file named ingress-weight.yaml and add the following content to the file.
In the YAML file of the following Ingress objects, replace ${cluster1-id} and ${cluster2-id} with the IDs of the associated clusters. Add annotations to use /svc1-w below the domain name example.com to expose the backend Service service1.
mse.ingress.kubernetes.io/service-subset: the name of the subset of the Service. We recommend that you use a name related to the cluster.
mse.ingress.kubernetes.io/subset-labels: the ID of the associated cluster.
nginx.ingress.kubernetes.io/canary: set the value to "true" to enable canary releases.
nginx.ingress.kubernetes.io/canary-weight: the percentage of traffic distributed to the cluster, in the range of 0 to 100.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    mse.ingress.kubernetes.io/service-subset: cluster-demo-1
    mse.ingress.kubernetes.io/subset-labels: |
      topology.istio.io/cluster ${cluster1-id}
  name: web-demo-weight
  namespace: web-demo
spec:
  ingressClassName: mse
  rules:
    - host: example.com
      http:
        paths:
          - path: /svc1-w
            pathType: Exact
            backend:
              service:
                name: service1
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    mse.ingress.kubernetes.io/service-subset: cluster-demo-2
    mse.ingress.kubernetes.io/subset-labels: |
      topology.istio.io/cluster ${cluster2-id}
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
  name: web-demo-weight-canary
  namespace: web-demo
spec:
  ingressClassName: mse
  rules:
    - host: example.com
      http:
        paths:
          - path: /svc1-w
            pathType: Exact
            backend:
              service:
                name: service1
                port:
                  number: 80
Run the following command to deploy the Ingresses on the ACK One Fleet instance:
kubectl apply -f ingress-weight.yaml
Run the following command to query the public IP address of the multi-cluster gateway:
kubectl get ingress web-demo-weight -nweb-demo -ojsonpath="{.status.loadBalancer}"
Run the following command to query the traffic routing information.
Replace XX.XX.XX.XX with the public IP address of the multi-cluster gateway that you obtained in the preceding step.
for i in {1..50}; do curl -H "host: example.com" XX.XX.XX.XX/svc1-w; sleep 1; done
Expected output:
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster2 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster1 ! Config file is
This is env cluster2 ! Config file is
This is env cluster1 ! Config file is
...
The output indicates that about 90% of the traffic is distributed to Cluster 1 and about 10% of the traffic is distributed to Cluster 2.