Alibaba Cloud Service Mesh (ASM) adds Istio-based traffic management to Spring Cloud applications without code changes. After you connect your Spring Cloud services to ASM, you can use Istio CRDs such as VirtualService and DestinationRule to control routing, load balancing, and version-based traffic splitting, while your applications continue to use their existing service registry for discovery.
## Supported implementations and registries
ASM supports the following Spring Cloud implementations. All migrations require zero code changes.
| Spring Cloud implementation | Service registry | Migration supported without code modification |
|---|---|---|
| Spring Cloud Alibaba | Microservices Engine (MSE) Nacos | Yes |
| Spring Cloud Alibaba | Self-managed Nacos | Yes |
| Spring Cloud Netflix | Eureka | Yes. ASM version 1.13.4.53 or later. |
| Spring Cloud Consul | Consul | Yes. ASM version 1.13.4.53 or later. |
| Spring Cloud Zookeeper | Zookeeper | Yes. ASM version 1.13.4.53 or later. |
## How it works
Spring Cloud applications rely on a service registry (Nacos, Eureka, Consul, or Zookeeper) for service discovery. ASM bridges this registry-based discovery with Istio's mesh-native model by intercepting registry traffic through an EnvoyFilter. After interception, sidecar proxies resolve service names to pod IPs, and Istio CRDs govern all traffic between services.
Two methods enable this bridge on the ASM control plane:
| Method | Supported registries | Requirements |
|---|---|---|
| Method 1: Reverse DNS | All registries | ASM 1.13.4.32 or later |
| Method 2: Lua filter | Nacos only | Nacos client SDK earlier than 2.0 (v2.0+ uses gRPC, incompatible with Lua) |
## Prerequisites
Before you begin, make sure that you have:
- An ASM instance of Enterprise Edition or Ultimate Edition. See Create an ASM instance.
- An ACK managed cluster. See Create an ACK managed cluster.
- The ACK cluster added to the ASM instance. See Add a cluster to an ASM instance.
- An ingress gateway deployed. See Create an ingress gateway.
## Demo architecture
This tutorial uses a Spring Cloud Nacos demo with three services:
- Consumer service: exposes port 8080 with an `/echo` endpoint. Forwards requests to the provider service and returns the response.
- Provider service (V1): responds with `Hello Nacos Discovery From v1`.
- Provider service (V2): responds with `Hello Nacos Discovery From v2`.
Both provider versions register with the Nacos registry. The consumer discovers them through Nacos and distributes requests across V1 and V2 in round-robin fashion.
For example, a request to `/echo/world` returns either `Hello Nacos Discovery From v1world` or `Hello Nacos Discovery From v2world`.

Download the demo source code from the nacos-examples repository.
Without sidecar injection, these services work normally but are invisible to Istio. The following steps add mesh-based traffic management on top of the existing Spring Cloud service discovery.
## Step 1: Enable Spring Cloud support on the ASM control plane
### Method 1: Reverse DNS (all registries)
Requires ASM 1.13.4.32 or later.
A Kubernetes service must exist as the destination service. The service port and destination port must match the port through which the application is routed via a Server Load Balancer (SLB) instance.
For ASM 1.23.6.32 or later, disable the `REGISTRY_ONLY` outbound traffic policy.
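The destination Kubernetes service mentioned above can be sketched as follows. This is a minimal illustration for the demo consumer; the name, selector labels, and port are assumptions and must match your actual deployment:

```yaml
# Hypothetical Service for the demo consumer application.
# Adjust name, selector, and ports to match your deployment.
apiVersion: v1
kind: Service
metadata:
  name: consumer
  namespace: default
spec:
  selector:
    app: consumer   # assumed pod label
  ports:
  - name: http
    port: 8080        # service port
    targetPort: 8080  # must match the application's listening port
```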
Connect kubectl to the ASM control plane. See Use kubectl on the control plane to access Istio resources.
Create a file named `any-spring-cloud-support.yaml`. Replace the following values based on your environment:

| Parameter | Description | Where to find it |
|---|---|---|
| `portNumber` | Port of your Spring Cloud service. Remove this parameter to match all ports, or create separate EnvoyFilter resources to target specific ports. | Application configuration |
| `pod_cidrs` | Pod CIDR block of the ACK or ACK Serverless cluster. | In the Container Service Management Console, go to Clusters, click your cluster, and open the Cluster Resources tab. Click the VPC link to view the vSwitch CIDR block. |

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  labels:
    provider: "asm"
    asm-system: "true"
  name: any-spring-cloud-support
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      proxy:
        proxyVersion: "^1.*"
      context: SIDECAR_OUTBOUND
      listener:
        portNumber: 8070
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
            subFilter:
              name: "envoy.filters.http.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: com.aliyun.reverse_dns
        typed_config:
          "@type": "type.googleapis.com/udpa.type.v1.TypedStruct"
          type_url: type.googleapis.com/envoy.config.filter.reverse_dns.v3alpha.CommonConfig
          value:
            pod_cidrs:
            - "10.0.128.0/18"
```

Apply the EnvoyFilter:
```shell
kubectl apply -f any-spring-cloud-support.yaml
```
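If you are unsure whether a given pod IP is covered by the `pod_cidrs` value, the membership check can be done with plain shell arithmetic. This is a local sketch with example values; substitute your cluster's pod IP and CIDR block:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "yes" if the IP ($1) falls inside the CIDR block ($2), else "no".
in_cidr() {
  local ip base bits mask
  ip=$(ip_to_int "$1")
  base=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( base & mask )) ] && echo yes || echo no
}

in_cidr 10.0.130.7 10.0.128.0/18   # example pod IP inside the block -> yes
in_cidr 10.0.200.7 10.0.128.0/18   # example pod IP outside the block -> no
```

If the check prints `no` for a real pod IP, widen or correct `pod_cidrs` in the EnvoyFilter.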
### Method 2: Lua filter (Nacos only)
Connect kubectl to the ASM control plane. See Use kubectl on the control plane to access Istio resources.
Create a file named `external-nacos-svc.yaml` to define a service entry for the Nacos server. Replace `<your-nacos-server-host>` with the Nacos server endpoint (for example, `mse-xxx-p.nacos-ans.mse.aliyuncs.com`). Port `8848` is the default Nacos port; if your self-managed Nacos uses a different port, update the `number` value.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-nacos-svc
spec:
  hosts:
  - "<your-nacos-server-host>" # Example: mse-xxx-p.nacos-ans.mse.aliyuncs.com
  location: MESH_EXTERNAL
  ports:
  - number: 8848
    name: http
  resolution: DNS
```

Apply the service entry:
```shell
kubectl apply -f external-nacos-svc.yaml
```

Create a file named `external-envoyfilter.yaml` for the Lua-based EnvoyFilter:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  labels:
    provider: "asm"
    asm-system: "true"
  name: nacos-subscribe-lua
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      proxy:
        proxyVersion: "^1.*"
      context: SIDECAR_OUTBOUND
      listener:
        portNumber: 8848
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
            subFilter:
              name: "envoy.filters.http.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
          inlineCode: |
            -- copyright: ASM (Alibaba Cloud ServiceMesh)
            function envoy_on_request(request_handle)
              local request_headers = request_handle:headers()
              local path = request_headers:get(":path")
              if string.match(path, "^/nacos/v1/ns/instance/list") then
                local servicename = string.gsub(path, ".*&serviceName.*40([%w.\\_\\-]+)&.*", "%1")
                request_handle:streamInfo():dynamicMetadata():set("context", "request.path", path)
                request_handle:streamInfo():dynamicMetadata():set("context", "request.servicename", servicename)
                request_handle:logInfo("subscribe for serviceName: " .. servicename)
              else
                request_handle:streamInfo():dynamicMetadata():set("context", "request.path", "")
              end
            end
            function envoy_on_response(response_handle)
              local request_path = response_handle:streamInfo():dynamicMetadata():get("context")["request.path"]
              if request_path == "" then
                return
              end
              local servicename = response_handle:streamInfo():dynamicMetadata():get("context")["request.servicename"]
              response_handle:logInfo("modified response ip to serviceName:" .. servicename)
              local bodyObject = response_handle:body(true)
              local body = bodyObject:getBytes(0, bodyObject:length())
              body = string.gsub(body, "%s+", "")
              body = string.gsub(body, "(ip\":\")(%d+.%d+.%d+.%d+)", "%1" .. servicename)
              response_handle:body():setBytes(body)
            end
```

Apply the EnvoyFilter:
```shell
kubectl apply -f external-envoyfilter.yaml
```
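The Lua filter's response hook rewrites each numeric `ip` field in the Nacos instance-list body to the service name, so that subsequent requests resolve through the mesh instead of going directly to pod IPs. The same substitution can be illustrated locally with `sed`; the JSON fragment below is a made-up example of an instance-list response, not real Nacos output:

```shell
# Hypothetical Nacos instance-list response fragment.
body='{"hosts":[{"ip":"172.16.0.12","port":8081},{"ip":"172.16.0.13","port":8081}]}'

# Mimic the Lua gsub: replace each numeric ip value with the service name.
echo "$body" | sed -E 's/("ip":")[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/\1service-provider/g'
# -> {"hosts":[{"ip":"service-provider","port":8081},{"ip":"service-provider","port":8081}]}
```

After this rewrite, the Spring Cloud client connects to `service-provider` by name, and the sidecar proxy applies the Istio routing rules.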
## Step 2: Deploy Spring Cloud services
Apply the EnvoyFilter from Step 1 before deploying services. The filter must be in place to intercept the service registration process. If services are already deployed, trigger a rolling update on those deployments.
Each Spring Cloud service requires a Kubernetes service resource with a cluster IP address.
Connect kubectl to the ACK data plane. See Obtain the kubeconfig file of a cluster.
Deploy the demo services. Replace `<your-nacos-endpoint>` with the endpoint of your MSE Nacos registry or self-managed Nacos registry.

```shell
# Set the Nacos registry endpoint (VPC endpoint recommended)
export NACOS_ADDRESS=<your-nacos-endpoint>

# Download and apply the demo deployment
wget https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-labs/springcloud/demo.yaml -O demo.yaml
sed -e "s/NACOS_SERVER_CLUSTERIP/$NACOS_ADDRESS/g" demo.yaml | kubectl apply -f -
```

Verify that all pods are running with 2/2 containers (application + sidecar):

```shell
kubectl get pods
```

Expected output:

```
consumer-bdd464654-jn8q7       2/2     Running   0          25h
provider-v1-66bc67fb6d-46pgl   2/2     Running   0          25h
provider-v2-76568c45f6-85z87   2/2     Running   0          25h
```

The `2/2` in the READY column confirms that both the application container and the Envoy sidecar proxy are running.
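In a cluster with many pods, you can filter for any pod whose READY column is not `2/2` (that is, the sidecar is missing or not ready). A sketch over hard-coded sample output; the pod names stand in for real `kubectl get pods` output:

```shell
# Sample output standing in for `kubectl get pods`; note the 1/2 pod.
sample='consumer-bdd464654-jn8q7 2/2 Running 0 25h
provider-v1-66bc67fb6d-46pgl 1/2 Running 0 25h
provider-v2-76568c45f6-85z87 2/2 Running 0 25h'

# Print pods whose READY column is not 2/2.
echo "$sample" | awk '$2 != "2/2" {print $1}'
# -> provider-v1-66bc67fb6d-46pgl
```

Against a live cluster, pipe `kubectl get pods --no-headers` into the same `awk` expression.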
## Step 3: Create an Istio gateway and virtual service
Create a file named `test-gateway.yaml` for the Istio gateway:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

Connect kubectl to the ASM control plane and apply the gateway:
```shell
kubectl apply -f test-gateway.yaml
```

Create a file named `consumer.yaml` for a virtual service that routes incoming traffic to the consumer:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consumer
spec:
  hosts:
  - "*"
  gateways:
  - test-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: consumer.default.svc.cluster.local
        port:
          number: 8080
```

Apply the virtual service:
```shell
kubectl apply -f consumer.yaml
```
## Step 4: Verify traffic management
This step confirms that ASM controls Spring Cloud traffic. First, observe the default round-robin behavior without routing rules, then apply Istio routing rules and verify the change.
### Observe default behavior (before routing rules)
Get the ingress gateway IP address:
1. Log on to the ASM console.
2. In the left-side navigation pane, choose Service Mesh > Mesh Management.
3. Click your ASM instance name, then choose ASM Gateways > Ingress Gateway.
4. Note the Service address on the Ingress Gateway page.
Send several requests to the consumer service:

```shell
curl <ingress-gateway-ip>/echo/world
```

The responses alternate between V1 and V2 in round-robin fashion:

```
Hello Nacos Discovery From v1world
Hello Nacos Discovery From v2world
Hello Nacos Discovery From v1world
Hello Nacos Discovery From v2world
```

This confirms that the consumer discovers both provider versions through Nacos and distributes traffic evenly. No Istio routing rules are in effect yet.
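To quantify the split over many requests, you can tally the version tags in the responses. A sketch over captured sample lines; in practice, substitute the output of a `curl` loop against your gateway address:

```shell
# Sample responses standing in for real curl output.
responses='Hello Nacos Discovery From v1world
Hello Nacos Discovery From v2world
Hello Nacos Discovery From v1world
Hello Nacos Discovery From v2world'

# Count how many responses came from each provider version.
echo "$responses" | grep -o 'v[12]' | sort | uniq -c | awk '{print $2, $1}'
# -> v1 2
#    v2 2
```

An even count per version confirms round-robin distribution; after the routing rules in the next section, the tally should shift entirely by path.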
### Apply routing rules (after)
Create a file named `service-provider.yaml` to define a destination rule with version-based subsets:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-provider
spec:
  host: service-provider
  subsets:
  - name: v1
    labels:
      label: v1
  - name: v2
    labels:
      label: v2
```

Apply the destination rule:
```shell
kubectl apply -f service-provider.yaml
```

Create a file named `service-provider-vs.yaml` to define a virtual service that routes `/echo/hello` requests to V1 and all other requests to V2:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-provider
spec:
  hosts:
  - service-provider
  http:
  - name: "hello-v1"
    match:
    - uri:
        prefix: "/echo/hello"
    route:
    - destination:
        host: service-provider
        subset: v1
  - name: "default"
    route:
    - destination:
        host: service-provider
        subset: v2
```

Apply the virtual service:
```shell
kubectl apply -f service-provider-vs.yaml
```

Test the routing rules:

```shell
curl <ingress-gateway-ip>/echo/hello
```

All responses now come from V1 only:

```
Hello Nacos Discovery From v1hello
Hello Nacos Discovery From v1hello
```

Compare this with the round-robin behavior observed earlier: `/echo/hello` requests now route exclusively to V1, while all other requests go to V2. Istio CRDs have taken over Spring Cloud traffic routing, confirming that ASM manages the services.
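The same v1/v2 subsets also support the weighted traffic splitting mentioned at the start of this topic, for example, a gradual canary rollout. A sketch; the 80/20 weights are an example, not a recommendation:

```yaml
# Hypothetical weighted-split variant of the virtual service above.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-provider
spec:
  hosts:
  - service-provider
  http:
  - route:
    - destination:
        host: service-provider
        subset: v1
      weight: 80   # 80% of traffic stays on V1
    - destination:
        host: service-provider
        subset: v2
      weight: 20   # 20% canaries to V2
```

Adjusting the weights over time shifts traffic from V1 to V2 without redeploying the Spring Cloud services.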
## FAQ
### Traffic is not intercepted by the mesh
Check whether traffic blocking is enabled for the port or IP address of the registry:
- Method 1 (reverse DNS): You must block the IP address of the pod. Make sure the `pod_cidrs` value in your EnvoyFilter covers the correct pod CIDR block of your ACK cluster.
- Method 2 (Lua filter): You must block the Nacos server IP address and the cluster IP address.
### Services are not registering correctly
This usually happens when the EnvoyFilter was not in place before the services started. The filter must intercept the registration process from the beginning. Restart the affected deployments:
```shell
kubectl rollout restart deployment <deployment-name>
```

Also confirm that each Spring Cloud service has a corresponding Kubernetes service resource with a cluster IP address.
### Routing rules are not taking effect
Two common causes:
Nacos client SDK version (Method 2 only): Method 2 requires Nacos client SDK earlier than 2.0. Version 2.0+ uses gRPC instead of HTTP, which is incompatible with the Lua filter. Switch to Method 1 if your Nacos client SDK is 2.0 or later -- Method 1 works with all Nacos versions.
Outdated sidecar version: If the sidecar image version is earlier than 1.13.4.32, the data plane may not have been updated after a control plane upgrade. Restart the affected deployments to pick up the latest sidecar version.