
How to Use Istio to Manage Multi-cluster Deployment (2): Gateway Connection Topology with a Single Control Plane

This post discusses the usage of Istio for multi-cluster deployments, focusing on gateway connection topology with a single control plane.

By Wang Xining, Senior Alibaba Cloud Technical Expert

In a single-control plane topology, multiple Kubernetes clusters share one Istio control plane, which runs in one of the clusters. Pilot in that control plane manages services in both the local and remote clusters and configures the Envoy sidecar proxies for all clusters.

Cluster-aware Service Routing

Istio 1.1 introduces cluster-aware service routing. In a single-control plane topology, Istio provides the split-horizon Endpoint Discovery Service (EDS), which routes service requests to other clusters through their ingress gateways. Based on the location of the request source, Istio routes requests to different endpoints.

In this configuration, requests that are sent from a sidecar proxy in a cluster to a service in the same cluster are still forwarded to the local service IP address. If the target workload is running in another cluster, the remote cluster's gateway IP address is used to connect to the service.
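Under the hood, Pilot learns which endpoints sit behind which gateways from the mesh networks configuration. The following is a rough sketch of such an entry, using the registry name and port that this walkthrough configures later; the gateway address remains a placeholder until Cluster 2 is deployed:

meshNetworks:
  network2:
    endpoints:
    # Endpoints discovered through the remote cluster's registry (kubeconfig secret)
    - fromRegistry: n2-k8s-config
    gateways:
    # Traffic to those endpoints is forwarded to the remote ingress gateway on port 15443
    - address: 0.0.0.0
      port: 15443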

[Figure 1: Gateway connection topology with a single control plane]

As shown in the preceding figure, the main cluster, Cluster 1, runs all the components of the Istio control plane. Cluster 2 runs only Istio Citadel, Sidecar Injector, and an ingress gateway. No VPN connection is required, and no direct network access is needed between workloads in different clusters.

An intermediate CA certificate is generated for each cluster's Citadel from the shared root CA certificate. This shared root enables two-way TLS communication across clusters. For simplicity, we apply the sample root CA certificate provided in the samples/certs directory of the Istio installation to both clusters. In an actual deployment, you would likely use a different intermediate CA certificate for each cluster, with all of them signed by a common root CA.

In each Kubernetes cluster (Cluster 1 and Cluster 2 in this example), run the following commands to create a Kubernetes secret for the generated CA certificates:

kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
  --from-file=samples/certs/ca-cert.pem \
  --from-file=samples/certs/ca-key.pem \
  --from-file=samples/certs/root-cert.pem \
  --from-file=samples/certs/cert-chain.pem
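
Optionally, you can confirm that the secret was created in each cluster:

kubectl get secret cacerts -n istio-system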

Components of the Istio Control Plane

In Cluster 1, where all the components of the Istio control plane will be installed, perform the following steps:

1) Install the Istio custom resource definitions (CRDs) and wait a few seconds for them to be committed to the Kubernetes API server:

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

2) Deploy the Istio control plane to Cluster 1.

You can run the helm dep update command to refresh Helm dependencies that are missing or outdated. Before you run the update, temporarily remove the unused istio-cni dependency from requirements.yaml, as sketched below.
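A minimal sketch of that preparation step (paths assume you are in the root of the Istio release directory):

# Temporarily comment out or remove the istio-cni entry in
# install/kubernetes/helm/istio/requirements.yaml, then refresh the dependencies:
helm dep update install/kubernetes/helm/istio

Then generate the Istio deployment manifest: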

helm template --name=istio --namespace=istio-system \
  --set global.mtls.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  --set global.meshExpansion.enabled=true \
  --set global.meshNetworks.network2.endpoints[0].fromRegistry=n2-k8s-config \
  --set global.meshNetworks.network2.gateways[0].address=0.0.0.0 \
  --set global.meshNetworks.network2.gateways[0].port=15443 \
  install/kubernetes/helm/istio > ./istio-auth.yaml

Set the gateway address to 0.0.0.0. This is a temporary placeholder value, which is updated to the public IP address of Cluster 2's gateway after Cluster 2 is deployed.

Run the following command to deploy Istio to Cluster 1:

kubectl apply -f ./istio-auth.yaml

Ensure that the preceding steps were successfully performed in all Kubernetes clusters.
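Before continuing, you can check that the control plane pods in Cluster 1 are up and running:

kubectl get pods -n istio-system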

3) Run the following command to create a gateway to access remote services:

kubectl create -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*"
EOF

This gateway is configured with a dedicated port 15443 to pass incoming traffic to the target service specified in the request's SNI header. A two-way TLS connection is established from the source service to the target service.
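To confirm that the ingress gateway service actually exposes this port (assuming the default istio-ingressgateway service created by the chart), you can run a quick check:

kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.spec.ports[?(@.port==15443)]}'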

Although this gateway is defined in Cluster 1, it also applies to the gateway instance in Cluster 2, because both clusters are configured by the same Pilot.

istio-remote Component

To deploy the istio-remote component in Cluster 2, follow these steps:

1) Run the following command to obtain Cluster 1's ingress gateway address:

export LOCAL_GW_ADDR=$(kubectl get svc --selector=app=istio-ingressgateway \
  -n istio-system -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")

Then, run the following command to generate an istio-remote deployment YAML file with Helm:

helm template --name istio-remote --namespace=istio-system \
  --values install/kubernetes/helm/istio/values-istio-remote.yaml \
  --set global.mtls.enabled=true \
  --set gateways.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  --set global.createRemoteSvcEndpoints=true \
  --set global.remotePilotCreateSvcEndpoint=true \
  --set global.remotePilotAddress=${LOCAL_GW_ADDR} \
  --set global.remotePolicyAddress=${LOCAL_GW_ADDR} \
  --set global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
  --set gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
  --set global.network="network2" \
  install/kubernetes/helm/istio > istio-remote-auth.yaml

2) Run the following command to deploy the istio-remote component to Cluster 2:

kubectl apply -f ./istio-remote-auth.yaml

Ensure that the preceding steps were successfully performed in all Kubernetes clusters.

3) Run the following command to obtain Cluster 2's ingress gateway address, which you will use to update the istio ConfigMap in Cluster 1:

export REMOTE_GW_ADDR=$(kubectl get --context=$CTX_REMOTE svc \
  --selector=app=istio-ingressgateway -n istio-system \
  -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")

In Cluster 1, edit the istio ConfigMap in the istio-system namespace and change network2's gateway address from 0.0.0.0 to Cluster 2's ingress gateway address, ${REMOTE_GW_ADDR}. After you save the change, Pilot automatically reads the updated network configuration.
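One way to make this change, assuming your current kubectl context points at Cluster 1:

kubectl edit cm istio -n istio-system
# In the editor, under meshNetworks -> network2 -> gateways, replace
#   address: 0.0.0.0
# with the value of ${REMOTE_GW_ADDR}, then save and exit.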

4) Create a kubeconfig file for Cluster 2. Run the following commands against Cluster 2 to generate a kubeconfig file that authenticates with the istio-multi service account, and save it as n2-k8s-config:

CLUSTER_NAME="cluster2"
SERVER=$(kubectl config view --minify=true -o "jsonpath={.clusters[].cluster.server}")
SECRET_NAME=$(kubectl get sa istio-multi -n istio-system -o jsonpath='{.secrets[].name}')
CA_DATA=$(kubectl get secret ${SECRET_NAME} -n istio-system -o "jsonpath={.data['ca\.crt']}")
TOKEN=$(kubectl get secret ${SECRET_NAME} -n istio-system -o "jsonpath={.data['token']}" | base64 --decode)
cat <<EOF > n2-k8s-config
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: ${CA_DATA}
      server: ${SERVER}
    name: ${CLUSTER_NAME}
contexts:
  - context:
      cluster: ${CLUSTER_NAME}
      user: ${CLUSTER_NAME}
    name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
users:
  - name: ${CLUSTER_NAME}
    user:
      token: ${TOKEN}
EOF

5) Add Cluster 2 to the Istio control plane.

Run the following commands in Cluster 1 to store Cluster 2's kubeconfig file in a secret in Cluster 1. Istio Pilot in Cluster 1 will then watch Cluster 2's services and instances just as it watches Cluster 1's own:

kubectl create secret generic n2-k8s-secret --from-file n2-k8s-config -n istio-system
kubectl label secret n2-k8s-secret istio/multiCluster=true -n istio-system
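
To confirm the secret is in place and labeled for multi-cluster discovery, you can list it by label:

kubectl get secret -n istio-system -l istio/multiCluster=true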

Deploy the Sample Application

To illustrate cross-cluster access, this section demonstrates the following deployment process: (1) Deploy the Sleep application service and the Helloworld service v1 to Kubernetes Cluster 1; (2) Deploy the Helloworld service v2 to Cluster 2; and (3) Verify that the Sleep application can call the Helloworld service of the local or remote cluster.

1) Run the following commands to deploy the Sleep service and the Helloworld service v1 to Cluster 1:

kubectl create namespace app1
kubectl label namespace app1 istio-injection=enabled
kubectl apply -n app1 -f samples/sleep/sleep.yaml
kubectl apply -n app1 -f samples/helloworld/service.yaml
kubectl apply -n app1 -f samples/helloworld/helloworld.yaml -l version=v1
export SLEEP_POD=$(kubectl get -n app1 pod -l app=sleep -o jsonpath={.items..metadata.name})

2) Run the following commands to deploy the Helloworld service v2 to Cluster 2:

kubectl create namespace app1
kubectl label namespace app1 istio-injection=enabled
kubectl apply -n app1 -f samples/helloworld/service.yaml
kubectl apply -n app1 -f samples/helloworld/helloworld.yaml -l version=v2

3) Log on to the istio-pilot container in the istio-system namespace and run the curl localhost:8080/v1/registration | grep helloworld -A 11 -B 2 command. If the following command output is returned, Helloworld service versions 1 and 2 are registered with the Istio control plane:
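A hedged one-liner for this check (the istio=pilot label and the discovery container name are assumptions based on a default Helm installation):

PILOT_POD=$(kubectl get pod -n istio-system -l istio=pilot \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n istio-system ${PILOT_POD} -c discovery -- \
  curl -s localhost:8080/v1/registration | grep helloworld -A 11 -B 2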

[Figure: Pilot registration output listing the Helloworld endpoints from both clusters]

4) Run the following command in Cluster 1 to verify that the Sleep service can call the Helloworld service of the local or remote cluster:

kubectl exec -it -n app1 $SLEEP_POD sh


Log on to the container and run the curl helloworld.app1:5000/hello command.

If the settings are correct, the call results include both versions of the Helloworld service. You can view the istio-proxy container log of the Sleep pod to verify the IP address of the endpoint that was accessed, as shown below.
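One way to pull the relevant lines from the sidecar log (a sketch; the exact output depends on your Envoy access-log settings):

kubectl logs -n app1 ${SLEEP_POD} -c istio-proxy | grep helloworld

The returned result is as follows: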

[Figure: istio-proxy log showing the endpoint IP addresses of the Helloworld calls]
