In Istio-based service meshes, Envoy sidecar proxies add latency and consume CPU and memory on every pod. For gRPC workloads, Alibaba Cloud Service Mesh (ASM) supports a proxyless deployment model that eliminates sidecars entirely. gRPC services communicate directly with the Istio control plane through xDS APIs, preserving traffic routing and Mutual Transport Layer Security (mTLS) capabilities with lower resource overhead.
This guide covers deploying a gRPC application in proxyless mode, configuring weighted traffic routing, and enabling mTLS.
## How proxyless mode works
In proxyless mode, gRPC services handle data plane communication directly: no proxy sits between them. An istio-agent still runs alongside each workload to manage control plane interactions. The agent performs three tasks:

- **Generates a bootstrap file** at startup. This file tells the gRPC library how to connect to istiod, where to find data plane certificates, and what metadata to send.
- **Acts as an xDS proxy** that connects to istiod and authenticates on behalf of the application.
- **Manages certificates** by fetching and rotating the TLS certificates used for data plane traffic.
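For intuition, the bootstrap file is JSON that tells the gRPC library where to reach the xDS proxy. The sketch below parses such a file; the field names follow the public gRPC xDS bootstrap format, but the concrete contents (the socket path and node ID) are illustrative assumptions, not necessarily what istio-agent writes:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// bootstrap mirrors a minimal subset of the gRPC xDS bootstrap schema.
// Real files generated by the agent contain more fields (certificate
// providers, metadata, and so on).
type bootstrap struct {
	XDSServers []struct {
		ServerURI      string   `json:"server_uri"`
		ServerFeatures []string `json:"server_features"`
	} `json:"xds_servers"`
	Node struct {
		ID string `json:"id"`
	} `json:"node"`
}

func parseBootstrap(raw []byte) (bootstrap, error) {
	var b bootstrap
	err := json.Unmarshal(raw, &b)
	return b, err
}

func main() {
	// Illustrative contents: the server_uri points at the local xDS proxy
	// managed by istio-agent, not directly at istiod.
	raw := []byte(`{
	  "xds_servers": [
	    {"server_uri": "unix:///etc/istio/proxy/XDS", "server_features": ["xds_v3"]}
	  ],
	  "node": {"id": "example-node-id"}
	}`)
	b, err := parseBootstrap(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(b.XDSServers[0].ServerURI) // unix:///etc/istio/proxy/XDS
}
```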
## Compatible frameworks
Several frameworks support Istio's proxyless model through xDS integration:
| Framework | Description |
|---|---|
| gRPC | Native xDS API support. See Proxyless gRPC from the Istio community. |
| Apache Dubbo | A high-performance, Java-based RPC framework with Istio integration. See Dubbo proxyless mode. |
| Kitex | Interacts directly with the control plane and translates xDS rules into Kitex governance rules. See Kitex proxyless practice. |
## Supported xDS features
The xDS API in gRPC supports a subset of the features available with Envoy proxies:
| Resource | Supported capabilities |
|---|---|
| Service discovery | Identify other pods registered in the mesh. |
| DestinationRule | Route traffic to instance subsets based on label selectors. `ROUND_ROBIN` load balancing. mTLS modes: `DISABLE` and `ISTIO_MUTUAL`. |
| VirtualService | Header match and URI match in `/ServiceName/RPCName` format. Destination host and subset configuration. Weighted traffic routing. |
| PeerAuthentication | mTLS modes: `DISABLE` and `STRICT`. |
For the full feature matrix across gRPC language implementations, see xDS features in gRPC.
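The `/ServiceName/RPCName` URI that a VirtualService matches is the gRPC full method name that the client places in the HTTP/2 `:path` pseudo-header. A small illustrative helper (the service and method names are those of the sample app used in this guide):

```go
package main

import "fmt"

// fullMethod builds the gRPC full method name, which is the string a
// VirtualService URI match is evaluated against.
func fullMethod(service, rpc string) string {
	return fmt.Sprintf("/%s/%s", service, rpc)
}

func main() {
	fmt.Println(fullMethod("proto.EchoTestService", "ForwardEcho"))
	// /proto.EchoTestService/ForwardEcho
}
```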
## Prerequisites
Before you begin, make sure that you have:
- An ASM instance of version v1.12.4.2-gc5962641-aliyun or later. For more information, see Create an ASM instance.
- A cluster added to the ASM instance. For more information, see Add a cluster to an ASM instance.
## Limitations
- **No PERMISSIVE mode.** Plaintext and mTLS traffic cannot coexist. If the server enforces `STRICT` mode, the client must explicitly specify `ISTIO_MUTUAL` in its mTLS configuration.
- **Startup race condition.** Calls to `grpc.Serve(listener)` or `grpc.Dial("xds:///...")` may fail if the bootstrap file or xDS proxy is not ready. Set `holdApplicationUntilProxyStarts` to `true` to prevent this.
- **Feature parity gap.** The xDS implementation in gRPC does not match Envoy's full feature set. Some configurations may not take effect. Test all Istio configurations against your proxyless gRPC services. For the current feature matrix, see xDS features in gRPC.
## Deploy a sample gRPC application
This section uses a sample application called echo to demonstrate proxyless mode. The echo application has two versions (v1 and v2) that respond with their hostname and version.
### Step 1: Create the namespace
Connect to your ACK cluster by using kubectl. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Create the echo-grpc namespace and enable sidecar injection. Although proxyless mode does not use sidecar proxies, the injection label is required so that the istio-agent is deployed alongside each pod.
```shell
kubectl create namespace echo-grpc
kubectl label namespace echo-grpc istio-injection=enabled
```

### Step 2: Add proxyless annotations
Add the following annotations to the pod template in your application YAML to enable proxyless mode:
```yaml
template:
  metadata:
    annotations:
      inject.istio.io/templates: grpc-agent
      proxy.istio.io/config: '{"holdApplicationUntilProxyStarts": true}'
    labels:
      app: echo
      version: v1
```

| Annotation | Purpose |
|---|---|
| `inject.istio.io/templates: grpc-agent` | Tells ASM to inject the gRPC agent instead of a full Envoy sidecar. |
| `proxy.istio.io/config: '{"holdApplicationUntilProxyStarts": true}'` | Delays application startup until the xDS proxy and bootstrap file are ready. ASM adds this annotation automatically by default. |
For the complete YAML file, see grpc-echo.yaml.
### Step 3: Deploy the application

```shell
kubectl -n echo-grpc apply -f https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-grpc-proxyless/grpc-echo.yaml
```

### Step 4: Verify the deployment
Forward port 17171 to the echo pod and send a test request:
```shell
kubectl -n echo-grpc port-forward \
  $(kubectl -n echo-grpc get pods -l version=v1 -ojsonpath='{.items[0].metadata.name}') \
  17171 &

grpcurl -plaintext -d '{"url": "xds:///echo.echo-grpc.svc.cluster.local:7070", "count": 5}' \
  :17171 proto.EchoTestService/ForwardEcho | jq -r '.output | join("")' | grep Hostname
```

Expected output:
```
[0 body] Hostname=echo-v1-f76996c45-h7plh
[1 body] Hostname=echo-v1-f76996c45-h7plh
[2 body] Hostname=echo-v2-7d76b7969-ltlkp
[3 body] Hostname=echo-v2-7d76b7969-ltlkp
[4 body] Hostname=echo-v1-f76996c45-h7plh
```

The responses come from both v1 and v2 pods, confirming that proxyless service discovery is working.
## Configure traffic routing
This section demonstrates weighted traffic routing: 80% of requests go to v2, and 20% go to v1.
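As a mental model, a weighted split behaves like a weighted round-robin across subsets. The sketch below uses the classic smooth weighted round-robin algorithm purely for illustration; it is not ASM's or gRPC's actual load-balancing implementation:

```go
package main

import "fmt"

type subset struct {
	name            string
	weight, current int
}

// pick implements smooth weighted round-robin: each call advances every
// subset's running score by its weight, selects the highest score, then
// penalizes the winner by the total weight. Over time, each subset is
// chosen in proportion to its weight.
func pick(subsets []*subset) string {
	total := 0
	var best *subset
	for _, s := range subsets {
		s.current += s.weight
		total += s.weight
		if best == nil || s.current > best.current {
			best = s
		}
	}
	best.current -= total
	return best.name
}

func main() {
	subsets := []*subset{{name: "v1", weight: 20}, {name: "v2", weight: 80}}
	counts := map[string]int{}
	for i := 0; i < 10; i++ {
		counts[pick(subsets)]++
	}
	fmt.Println(counts["v1"], counts["v2"]) // 2 8
}
```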
### Step 1: Create a DestinationRule
Define subsets for each version of the workload.
**Option A: Apply YAML with kubectl (recommended)**
```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: echo-versions
  namespace: echo-grpc
spec:
  host: echo.echo-grpc.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
```

**Option B: Use the ASM console**
1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
2. Click the name of the ASM instance. In the left-side navigation pane, choose Traffic Management Center > DestinationRule. Click Create.
3. Set Namespace to echo-grpc, enter a name for the destination rule, and set Host to `echo.echo-grpc.svc.cluster.local`.
4. Click the Service Version (Subset) tab, then click Add Service Version (Subset). In Subset 1, set Name to `v1`. Click Add Label and set Key to `version` and Value to `v1`.
5. Click Add Service Version (Subset). In Subset 2, set Name to `v2`. Click Add Label and set Key to `version` and Value to `v2`.
6. Click Create.
### Step 2: Create a VirtualService
Route 80% of traffic to v2 and 20% to v1.
**Option A: Apply YAML with kubectl (recommended)**
```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo-weights
  namespace: echo-grpc
spec:
  hosts:
  - echo.echo-grpc.svc.cluster.local
  http:
  - route:
    - destination:
        host: echo.echo-grpc.svc.cluster.local
        subset: v1
      weight: 20
    - destination:
        host: echo.echo-grpc.svc.cluster.local
        subset: v2
      weight: 80
EOF
```

**Option B: Use the ASM console**
1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
2. Click the name of the ASM instance. In the left-side navigation pane, choose Traffic Management Center > VirtualService. Click Create.
3. Set Namespace to echo-grpc and enter a name for the virtual service.
4. Click HTTP Route, then click Add Route. Enter a name for the routing rule, and click Add Route Destination. Set Host to `echo.echo-grpc.svc.cluster.local`, Subset to `v1`, and Weight to `20`.
5. Click Add Route Destination. Set Host to `echo.echo-grpc.svc.cluster.local`, Subset to `v2`, and Weight to `80`.
6. Click Create.
### Step 3: Verify the routing
Send ten requests and check the version distribution:
```shell
grpcurl -plaintext -d '{"url": "xds:///echo.echo-grpc.svc.cluster.local:7070", "count": 10}' \
  :17171 proto.EchoTestService/ForwardEcho | jq -r '.output | join("")' | grep ServiceVersion
```

Expected output:
```
[0 body] ServiceVersion=v2
[0 body] ServiceVersion=v1
[0 body] ServiceVersion=v2
[0 body] ServiceVersion=v2
[0 body] ServiceVersion=v2
[0 body] ServiceVersion=v2
[0 body] ServiceVersion=v1
[0 body] ServiceVersion=v2
[0 body] ServiceVersion=v2
[0 body] ServiceVersion=v2
```

Eight of the ten requests go to v2 and two go to v1, confirming that the 80/20 traffic split is in effect.
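Counting responses by eye gets tedious for larger samples. A tally of the `ServiceVersion=` lines can be sketched like this (assumes the `Key=value` line format produced by the echo app):

```go
package main

import (
	"fmt"
	"strings"
)

// countVersions tallies ServiceVersion values from response lines of the
// form "[0 body] ServiceVersion=v2".
func countVersions(lines []string) map[string]int {
	counts := map[string]int{}
	for _, line := range lines {
		if i := strings.Index(line, "ServiceVersion="); i >= 0 {
			counts[strings.TrimSpace(line[i+len("ServiceVersion="):])]++
		}
	}
	return counts
}

func main() {
	lines := []string{
		"[0 body] ServiceVersion=v2",
		"[0 body] ServiceVersion=v1",
		"[0 body] ServiceVersion=v2",
	}
	fmt.Println(countVersions(lines)) // map[v1:1 v2:2]
}
```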
## Enable mTLS
Encrypt service-to-service communication by enabling mTLS on both the client and server. This section follows a "break then fix" approach: first enable client-side mTLS (which causes requests to fail), then enable server-side mTLS to restore connectivity.
### Step 1: Enable client-side mTLS
Create a DestinationRule that sets the TLS mode to ISTIO_MUTUAL for client connections.
**Option A: Apply YAML with kubectl (recommended)**
```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: echo-mtls
  namespace: echo-grpc
spec:
  host: echo.echo-grpc.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF
```

**Option B: Use the ASM console**
1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
2. Click the name of the ASM instance or click Manage in the Actions column. In the left-side navigation pane, choose Traffic Management Center > DestinationRule. Click Create.
3. Set Namespace to echo-grpc, enter a name for the destination rule, and set Host to `echo.echo-grpc.svc.cluster.local`.
4. Click Traffic Policy, then click Add Policy. Turn on Client TLS and select Istio Mutual from the TLS Mode drop-down list.
5. Click Create.
### Step 2: Confirm that requests fail without server-side mTLS
The client now requires mTLS, but the server does not yet enforce it. Send a test request to confirm the expected failure:
```shell
grpcurl -plaintext -d '{"url": "xds:///echo.echo-grpc.svc.cluster.local:7070"}' \
  :17171 proto.EchoTestService/ForwardEcho | jq -r '.output | join("")'
```

Expected output:
```
ERROR:
  Code: Unknown
  Message: 1/1 requests had errors; first error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
```

The request fails because the server has not been configured for mTLS.
### Step 3: Enable server-side mTLS
Create a PeerAuthentication policy that enforces STRICT mTLS on the server.
**Option A: Apply YAML with kubectl (recommended)**
```shell
cat <<EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: echo-strict
  namespace: echo-grpc
spec:
  mtls:
    mode: STRICT
EOF
```

**Option B: Use the ASM console**
1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
2. Click the name of the ASM instance. In the left-side navigation pane, choose Mesh Security Center > PeerAuthentication. Click Create mTLS Mode.
3. Set Namespace to echo-grpc, enter a name for the peer authentication policy, and set mTLS Mode (Namespace-wide) to STRICT - Strictly Enforce mTLS.
4. Click Create.
### Step 4: Verify that mTLS is working
Send another test request:
```shell
grpcurl -plaintext -d '{"url": "xds:///echo.echo-grpc.svc.cluster.local:7070"}' \
  :17171 proto.EchoTestService/ForwardEcho | jq -r '.output | join("")'
```

Expected output:
```
[0] grpcecho.Echo(${xds:///echo.echo-grpc.svc.cluster.local:7070 map [] 0 <nil> 5s false })
[0 body] RequestHeader=x-request-id:0
[0 body] Host=echo.echo-grpc.svc.cluster.local
[0 body] RequestHeader=:authority:echo.echo-grpc.svc.cluster.local:7070
........
```

The request succeeds, confirming that mTLS is active for both client and server.
## Modify your gRPC code for proxyless mode
If your gRPC service does not already support proxyless mode, update the client and server code to enable xDS API support.
xDS API support requires gRPC v1.39.0 or later.
### Client-side changes
**1. Register the xDS resolver**
Add the following side-effect import to your main package or the package that calls `grpc.Dial`:

```go
import _ "google.golang.org/grpc/xds"
```

**2. Connect using the `xds:///` scheme**

Replace your existing dial target with an `xds:///`-prefixed address:

```go
conn, err := grpc.DialContext(ctx, "xds:///foo.ns.svc.cluster.local:7070")
```

**3. Enable mTLS credentials**
Pass TransportCredentials to DialContext to support mTLS. The FallbackCreds option allows the connection to succeed even if istiod has not yet sent security configurations:
```go
import (
	"google.golang.org/grpc/credentials/insecure"
	xdscreds "google.golang.org/grpc/credentials/xds"
)
// ...
creds, err := xdscreds.NewClientCredentials(xdscreds.ClientOptions{
	FallbackCreds: insecure.NewCredentials(),
})
// handle err
conn, err := grpc.DialContext(
	ctx,
	"xds:///foo.ns.svc.cluster.local:7070",
	grpc.WithTransportCredentials(creds),
)
```

### Server-side changes
**1. Use the xDS server constructor**
Replace the standard `grpc.NewServer()` with `xds.NewGRPCServer()`:

```go
import "google.golang.org/grpc/xds"
// ...
server := xds.NewGRPCServer()
RegisterFooServer(server, &fooServerImpl)
```

**2. Regenerate protobuf code (if needed)**
If your protocol buffer compiler output is outdated, regenerate it. The `RegisterFooServer` function must accept `grpc.ServiceRegistrar`:

```go
func RegisterFooServer(s grpc.ServiceRegistrar, srv FooServer) {
	s.RegisterService(&FooServer_ServiceDesc, srv)
}
```

**3. Enable server-side mTLS credentials**
```go
import (
	"google.golang.org/grpc/credentials/insecure"
	xdscreds "google.golang.org/grpc/credentials/xds"
	"google.golang.org/grpc/xds"
)
// ...
creds, err := xdscreds.NewServerCredentials(xdscreds.ServerOptions{
	FallbackCreds: insecure.NewCredentials(),
})
// handle err
server := xds.NewGRPCServer(grpc.Creds(creds))
```