The multi-cluster Services (MCS) feature in ACK One lets a pod in one cluster reach a Service in another cluster using a standard Kubernetes DNS domain name — no changes to application code, dnsConfig, or CoreDNS configuration required.
This guide deploys resources using kubectl. Alternatively, use the GitOps or application distribution features of the Fleet instance to distribute resources to member clusters.
Prerequisites
Before you begin, ensure that you have:
- Fleet management enabled. See Enable multi-cluster management.
- Two clusters associated with a Fleet instance: one acting as the provider cluster and one as the consumer cluster. See Manage associated clusters.
- Kubernetes version 1.22 or later on both clusters.
- Pod-level network connectivity between the two clusters. See MCS overview.
  Note: After enabling pod CIDR connectivity, update the security group rules for each cluster's node pools to allow traffic from the pod CIDR blocks of the interconnected clusters.
- The kubeconfig files for the provider cluster, consumer cluster, and Fleet instance, with kubectl configured to connect to each. See Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
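Before proceeding, you can optionally sanity-check pod-level connectivity between the two clusters. The following sketch uses placeholder kubeconfig paths, pod names, and IP addresses that you must replace with real values from your environment:

```shell
# List pod IPs in the provider cluster (replace the kubeconfig path):
kubectl --kubeconfig provider.kubeconfig get pods -n provider-ns -o wide

# From any pod in the consumer cluster, reach a provider pod IP directly
# (replace <pod-name>, <namespace>, and <provider-pod-ip>):
kubectl --kubeconfig consumer.kubeconfig exec -it <pod-name> -n <namespace> -- \
  ping -c 3 <provider-pod-ip>
```

If the ping fails, revisit the pod CIDR connectivity setup and the security group rules before continuing.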
Step 1: Deploy a Service in the provider cluster
Connect to the provider cluster using its kubeconfig file. If the provider-ns namespace does not exist yet, create it first with kubectl create namespace provider-ns. Then create a file named web-demo-svc-provider.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: service1
  namespace: provider-ns
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: web-demo
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
  namespace: provider-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - env:
        - name: ENV_NAME
          value: cluster-provider
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/web-demo:0.5.0
        imagePullPolicy: Always
        name: web-demo
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
Apply the manifest:
kubectl apply -f web-demo-svc-provider.yaml
Verify that the Service and Pod are running:
kubectl get svc,pod -n provider-ns
Expected output:
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/service1   ClusterIP   <cluster-ip>   <none>        80/TCP    1m

NAME                    READY   STATUS    RESTARTS   AGE
pod/web-demo-<suffix>   1/1     Running   0          1m
Step 2: Create a Service in the consumer cluster
Connect to the consumer cluster using its kubeconfig file. In the consumer cluster, create only the Service; no application pods are needed. If the provider-ns namespace does not exist in the consumer cluster, create it first with kubectl create namespace provider-ns. Then create a file named web-demo-svc-consumer.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: service1
  namespace: provider-ns
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: web-demo
  sessionAffinity: None
  type: ClusterIP
Apply the manifest:
kubectl apply -f web-demo-svc-consumer.yaml
Verify that the Service exists:
kubectl get svc service1 -n provider-ns
Expected output:
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service1   ClusterIP   <cluster-ip>   <none>        80/TCP    30s
Step 3: Create a MultiClusterService on the Fleet instance
Connect to the Fleet instance using its kubeconfig file. Create a file named multiclusterservice.yaml with the following content:
Note: The name and namespace in the MultiClusterService spec must match those of service1 in the provider cluster. Replace <your consumer cluster id> and <your provider cluster id> with the actual cluster IDs.
apiVersion: networking.one.alibabacloud.com/v1alpha1
kind: MultiClusterService
metadata:
  name: service1
  namespace: provider-ns
spec:
  consumerClusters:
  - name: <your consumer cluster id>
  providerClusters:
  - name: <your provider cluster id>
Apply the manifest:
kubectl apply -f multiclusterservice.yaml
Verify that the MultiClusterService was created and synced successfully:
kubectl describe multiclusterservice service1 -n provider-ns
In the output, check the Status section for conditions showing that the configuration has been propagated to both clusters. Proceed to the next step only after the sync is confirmed.
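To inspect the raw status fields directly, you can also dump the resource as YAML; the exact condition names in the status may vary by ACK One version:

```shell
# Print the full object, including the status conditions, as YAML:
kubectl get multiclusterservice service1 -n provider-ns -o yaml
```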
Step 4: Verify cross-cluster access from the consumer cluster
Connect to the consumer cluster using its kubeconfig file. If the customer-ns namespace does not exist, create it with kubectl create namespace customer-ns. Then create a file named client-pod.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: curl-client
  namespace: customer-ns
spec:
  containers:
  - name: curl-client
    image: registry-cn-hangzhou.ack.aliyuncs.com/dev/curl:8.11.1
    command: ["sh", "-c", "sleep 12000"]
Deploy the client pod:
kubectl apply -f client-pod.yaml
Wait for the pod to reach Running state, then open a shell in it and send a request to service1 in the provider cluster:
kubectl exec -it curl-client -n customer-ns -- sh
curl service1.provider-ns
Expected output:
This is cluster-provider!
The response confirms that traffic is being routed to the backend pods in the provider cluster.
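The short name works because of the standard Kubernetes DNS search path configured inside the pod; assuming the default cluster.local cluster domain, the fully qualified Service name resolves the same way:

```shell
# Run inside the curl-client pod; both forms address the same Service:
curl service1.provider-ns
curl service1.provider-ns.svc.cluster.local
```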
What's next
- MCS overview — Learn more about how the MCS feature works and the networking requirements between clusters.
- GitOps — Manage and distribute Kubernetes resources across Fleet member clusters using a GitOps workflow.
- Application distribution — Use the Fleet instance to distribute applications to associated clusters at scale.