When gRPC clients send requests to the grpc-server-svc.grpc-best.svc.cluster.local
service, which is specified by the GRPC_SERVER
variable, Alibaba Cloud Service Mesh (ASM) routes the requests to the gRPC servers
in round robin mode. This topic describes how to deploy a gRPC service in a Container
Service for Kubernetes (ACK) cluster to implement load balancing among gRPC servers,
and how to verify the load balancing of the gRPC service.
Background information
In this topic, four gRPC clients and four gRPC servers, implemented in Java, Go, Node.js, and Python,
are used. The gRPC clients call the grpc-server-svc.grpc-best.svc.cluster.local
service, which is specified by the GRPC_SERVER
variable. When ASM receives these internal requests, it routes them to the
four gRPC servers in round robin mode. In addition, you can configure an ingress gateway
to route external requests to the four gRPC servers based on a load balancing policy.

Sample project
To obtain the sample gRPC project, download
hello-servicemesh-grpc. The directories used in this topic are relative to the root
directory of hello-servicemesh-grpc.
Note The image repository in this topic is for reference only. Use an image script to build
and push images to your self-managed image repository. For more information about
the image script, see
hello-servicemesh-grpc.
Step 1: Create a gRPC service on the gRPC servers
In this example, a gRPC service named
grpc-server-svc is created on all gRPC servers.
Note The value of the spec.ports.name
parameter must start with grpc, because Istio uses the port name prefix to identify the protocol of the port.
- Create a YAML file named grpc-server-svc.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  namespace: grpc-best
  name: grpc-server-svc
  labels:
    app: grpc-server-svc
spec:
  ports:
    - port: 9996
      name: grpc-port
  selector:
    app: grpc-server-deploy
- Run the following command to create the gRPC service:
kubectl apply -f grpc-server-svc.yaml
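The Service that you just created is reachable inside the cluster at a DNS name that follows the Kubernetes convention &lt;service&gt;.&lt;namespace&gt;.svc.cluster.local, which is exactly the value used for the GRPC_SERVER variable later in this topic. A quick local sketch of how that name is composed:

```shell
# Compose the cluster-local FQDN of the service (Kubernetes DNS convention).
svc="grpc-server-svc"
ns="grpc-best"
fqdn="${svc}.${ns}.svc.cluster.local"
echo "$fqdn"   # → grpc-server-svc.grpc-best.svc.cluster.local
```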
Step 2: Create a Deployment on each gRPC server
In this step, you must create a Deployment for each of the four gRPC servers. The following
example shows how to use the
grpc-server-node.yaml file to create the Deployment for the Node.js-based gRPC server. For
the Deployment files of the gRPC servers in the other languages, visit
the
kube/deployment page on GitHub.
Note You must set the app
label to grpc-server-deploy
for the four Deployments on the gRPC servers to match the selector of the gRPC service
that you create in Step 1. Each of the Deployments on the four gRPC servers in different
languages must have a unique version label.
- Create a YAML file named grpc-server-node.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: grpc-best
  name: grpc-server-node
  labels:
    app: grpc-server-deploy
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-server-deploy
      version: v3
  template:
    metadata:
      labels:
        app: grpc-server-deploy
        version: v3
    spec:
      containers:
        - name: grpc-server-deploy
          image: registry.cn-hangzhou.aliyuncs.com/aliacs-app-catalog/asm-grpc-server-node:1.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 9996
              name: grpc-port
- Run the following command to create the Deployment:
kubectl apply -f grpc-server-node.yaml
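The note above requires each server Deployment to carry a unique version label. As a local sanity check, duplicate labels can be detected with uniq -d; the deployment-name and label pairs below are a hypothetical sample, not live cluster output:

```shell
# Hypothetical "deployment version" pairs for the four server Deployments.
# uniq -d prints only duplicated values, so no output means all labels are unique.
printf 'grpc-server-java v1\ngrpc-server-go v2\ngrpc-server-node v3\ngrpc-server-python v4\n' \
  | awk '{ print $2 }' | sort | uniq -d
```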
Step 3: Create a Deployment on each gRPC client
The Deployments for the gRPC clients and gRPC servers are different in the following
aspects:
- The gRPC servers keep running after they are started, whereas the gRPC clients exit
when their requests are complete. Therefore, the client-side containers need a long-running
command (in this example, a sleep command) to keep them from stopping.
- You must set the GRPC_SERVER variable on the gRPC clients. When the pod of a gRPC
client is started, the value of the GRPC_SERVER variable is passed to the gRPC client.
In this step, you must create a Deployment for each of the four gRPC clients. The following
example shows how to use the grpc-client-go.yaml file to create the Deployment for the Go-based gRPC client.
- Create a YAML file named grpc-client-go.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: grpc-best
  name: grpc-client-go
  labels:
    app: grpc-client-go
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-client-go
  template:
    metadata:
      labels:
        app: grpc-client-go
    spec:
      containers:
        - name: grpc-client-go
          image: registry.cn-hangzhou.aliyuncs.com/aliacs-app-catalog/asm-grpc-client-go:1.0.0
          command: ["/bin/sleep", "3650d"]
          env:
            - name: GRPC_SERVER
              value: "grpc-server-svc.grpc-best.svc.cluster.local"
          imagePullPolicy: Always
- Run the following command to create the Deployment:
kubectl apply -f grpc-client-go.yaml
The command: ["/bin/sleep", "3650d"]
setting keeps the container running after the pod of the Go-based gRPC
client is started. The GRPC_SERVER variable in the env
section is set to grpc-server-svc.grpc-best.svc.cluster.local.
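Inside the client container, the target address is read from this environment variable. A minimal sketch of how a client-side entry script might consume it; the fallback default shown here is an assumption for illustration, not part of the sample project:

```shell
# Read the server address from the environment, falling back to the
# cluster-local service FQDN if the variable is unset (illustrative default).
GRPC_SERVER="${GRPC_SERVER:-grpc-server-svc.grpc-best.svc.cluster.local}"
echo "dialing ${GRPC_SERVER}:9996"
```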
Step 4: Deploy the gRPC service and the Deployments
- Run the following commands to create a namespace named grpc-best in the ACK cluster. The USER_CONFIG variable must point to the kubeconfig file of the ACK cluster.
alias k="kubectl --kubeconfig $USER_CONFIG"
k create ns grpc-best
- Run the following command to enable automatic sidecar injection for the namespace:
k label ns grpc-best istio-injection=enabled
- Run the following commands to deploy the gRPC service and the eight Deployments:
kubectl apply -f grpc-svc.yaml
kubectl apply -f deployment/grpc-server-java.yaml
kubectl apply -f deployment/grpc-server-python.yaml
kubectl apply -f deployment/grpc-server-go.yaml
kubectl apply -f deployment/grpc-server-node.yaml
kubectl apply -f deployment/grpc-client-java.yaml
kubectl apply -f deployment/grpc-client-python.yaml
kubectl apply -f deployment/grpc-client-go.yaml
kubectl apply -f deployment/grpc-client-node.yaml
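Because the namespace is labeled istio-injection=enabled, each pod should start with two containers (the app container plus the injected istio-proxy sidecar), which appears as 2/2 in the READY column of kubectl get pods -n grpc-best. The helper below is a hypothetical convenience for counting such pods from that output, shown here running against a fixed sample:

```shell
# Count pods whose READY column reads 2/2 (app container + injected sidecar).
# Skips the header line of `kubectl get pods` output.
count_injected() {
  awk 'NR > 1 && $2 == "2/2" { n++ } END { print n + 0 }'
}

# Against a live cluster you would run:
#   kubectl get pods -n grpc-best | count_injected   # expect 8 (4 servers + 4 clients)
# Demonstration on sample output:
printf 'NAME READY STATUS\npod-a 2/2 Running\npod-b 1/1 Running\npod-c 2/2 Running\n' \
  | count_injected   # → 2
```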
Verify the result
Use pods to verify the load balancing of the gRPC service
You can check load balancing among gRPC servers by sending requests to the gRPC service
on the gRPC servers from the pods of the gRPC clients.
- Run the following commands to obtain the names of the pods of the four gRPC clients:
client_java_pod=$(k get pod -l app=grpc-client-java -n grpc-best -o jsonpath={.items..metadata.name})
client_go_pod=$(k get pod -l app=grpc-client-go -n grpc-best -o jsonpath={.items..metadata.name})
client_node_pod=$(k get pod -l app=grpc-client-node -n grpc-best -o jsonpath={.items..metadata.name})
client_python_pod=$(k get pod -l app=grpc-client-python -n grpc-best -o jsonpath={.items..metadata.name})
- Run the following commands to send requests from the pods of the gRPC clients to the
gRPC service on the four gRPC servers:
k exec "$client_java_pod" -c grpc-client-java -n grpc-best -- java -jar /grpc-client.jar
k exec "$client_go_pod" -c grpc-client-go -n grpc-best -- ./grpc-client
k exec "$client_node_pod" -c grpc-client-node -n grpc-best -- node proto_client.js
k exec "$client_python_pod" -c grpc-client-python -n grpc-best -- sh /grpc-client/start_client.sh
- Use a FOR loop to verify the load balancing among the gRPC servers. In this example,
the Node.js-based gRPC client is used. The append operator (>>) is required so that the
results of all 100 requests are collected in the kube_result file.
for ((i = 1; i <= 100; i++)); do
  k exec "$client_node_pod" -c grpc-client-node -n grpc-best -- node kube_client.js >> kube_result
done
sort kube_result | grep -v "^[[:space:]]*$" | uniq -c | sort -nrk1
Expected output:
26 Talk:PYTHON
25 Talk:NODEJS
25 Talk:GOLANG
24 Talk:JAVA
The output indicates that each of the four gRPC servers on which the gRPC service is deployed
receives approximately the same number of requests. This result indicates that
ASM routes internal requests to the four gRPC servers in round robin mode.
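The sort and uniq -c pipeline used above simply tallies how many responses each server returned, most frequent first. Its behavior can be checked locally on a small hypothetical sample instead of the kube_result file:

```shell
# Tally responses per server, most frequent first (same pipeline as in the
# verification step, run on a small hypothetical sample).
printf 'Talk:JAVA\nTalk:GOLANG\nTalk:JAVA\nTalk:NODEJS\n' \
  | sort | grep -v "^[[:space:]]*$" | uniq -c | sort -nrk1
# The first line of the output is "2 Talk:JAVA".
```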
Use an ingress gateway to verify the load balancing of the gRPC service
You can verify load balancing among gRPC servers by using the Istio ingress gateway.
- Log on to the ASM console.
- In the left-side navigation pane, choose Mesh Management.
- On the Mesh Management page, find the ASM instance that you want to configure. Click the name of the ASM
instance or click Manage in the Actions column.
- On the details page of the ASM instance, click ASM Gateways in the left-side navigation pane. On the ASM Gateways page, click Create from YAML.
- On the Create page, select a namespace as required, copy the following content to
the code editor, and then click Create.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: grpc-best
  name: grpc-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 9996
        name: grpc
        protocol: GRPC
      hosts:
        - "*"
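A Gateway resource only opens port 9996 on the ingress gateway. In Istio, a VirtualService bound to the gateway is what actually routes the inbound traffic to the backing service; hello-servicemesh-grpc ships such routing configuration. A minimal sketch of what it looks like, assuming the same host and port as above (the resource name grpc-vs is hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  namespace: grpc-best
  name: grpc-vs        # hypothetical name, for illustration
spec:
  hosts:
    - "*"
  gateways:
    - grpc-gateway     # binds this routing rule to the Gateway above
  http:
    - match:
        - port: 9996
      route:
        - destination:
            host: grpc-server-svc
            port:
              number: 9996
```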
- Run the following command to obtain the IP address of the Istio ingress gateway:
INGRESS_IP=$(k -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
- Use a FOR loop to verify the load balancing among the gRPC servers.
docker run -d --name grpc_client_node -e GRPC_SERVER="${INGRESS_IP}" registry.cn-hangzhou.aliyuncs.com/aliacs-app-catalog/asm-grpc-client-node:1.0.0 /bin/sleep 3650d
client_node_container=$(docker ps -q -f name=grpc_client_node)
docker exec -e GRPC_SERVER="${INGRESS_IP}" -it "$client_node_container" node kube_client.js
for ((i = 1; i <= 100; i++)); do
docker exec -e GRPC_SERVER="${INGRESS_IP}" -it "$client_node_container" node kube_client.js >> kube_result
done
sort kube_result | grep -v "^[[:space:]]*$" | uniq -c | sort -nrk1
Expected output:
26 Talk:PYTHON
25 Talk:NODEJS
25 Talk:GOLANG
24 Talk:JAVA
The output indicates that each of the four gRPC servers on which the gRPC service is deployed
receives approximately the same number of requests. This result indicates that
ASM can route external requests to the four gRPC servers based on a load balancing policy.