When you run canary releases through traffic lanes (such as loose-mode and strict-mode traffic lanes), all applications on a call chain share the same release schedule. But in practice, different applications often need independent canary ratios. For example, two applications rolling out a new feature might target 10% of users, while a third application shipping a bug fix needs to reach 50% of users -- all at the same time, on the same call chain.
Alibaba Cloud Service Mesh (ASM) solves this with the Hash tagging plug-in, which assigns multiple tags to each request at the gateway. Combined with the ASMHeaderPropagation CRD for tag propagation and Istio routing rules for tag-based matching, each application on the call chain routes traffic independently based on its own canary ratio.
Unlike Kubernetes-native canary releases that rely on replica ratios, ASM decouples traffic routing from pod scaling. Canary percentages are controlled through routing rules, independent of how many replicas each version runs. This means 10% of users can be routed to a canary version without provisioning exactly 1-in-10 pods.
How it works
In a standard traffic lane setup, a single tag determines which version every application on the call chain serves. To support independent canary ratios, the Hash tagging plug-in adds a separate tag header for each application (for example, appver-a, appver-b, appver-c). Each tag has its own hash range that controls what percentage of users are routed to the canary version.
The end-to-end flow:
1. A client sends a request with a user identifier (for example, the `x-user-id` header).
2. The Hash tagging plug-in at the ASM gateway hashes the identifier, takes the remainder of the hash modulo a configured value (for example, 100), and checks whether the remainder falls within each tag's configured range.
3. For each tag whose range matches, the plug-in adds a tag header to the request (for example, `appver-a: v2`).
4. The ASMHeaderPropagation CRD propagates all `appver`-prefixed headers through the call chain.
5. Each application's VirtualService matches the corresponding tag header and routes the request to the appropriate version.
Because the hash is deterministic, the same user always reaches the same set of versions, providing a consistent experience across requests.
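The per-rule decision can be sketched in a few lines of Python. This is an illustration of the hash-modulo-range idea only: the plug-in's actual hash function is not specified here (CRC32 stands in), and the inclusive range check and the rule/policy field names are assumptions modeled on the plug-in configuration shown later.

```python
import zlib

def tag_headers(user_id: str, rules: list[dict]) -> dict:
    """Return the tag headers a gateway would add for this user.

    Each rule hashes the user ID, takes the remainder modulo `modulo`,
    and adds `tagHeader: tagValue` when the remainder falls in `range`.
    """
    headers = {}
    for rule in rules:
        remainder = zlib.crc32(user_id.encode()) % rule["modulo"]
        for policy in rule["policies"]:
            if remainder <= policy["range"]:
                headers[rule["tagHeader"]] = policy["tagValue"]
    return headers

rules = [
    {"modulo": 100, "tagHeader": "appver-a",
     "policies": [{"range": 10, "tagValue": "v2"}]},
    {"modulo": 100, "tagHeader": "appver-c",
     "policies": [{"range": 50, "tagValue": "v3"}]},
]

# Deterministic: the same user always gets the same set of tags.
assert tag_headers("0005", rules) == tag_headers("0005", rules)
```

Because the hash of a given user ID never changes, repeated calls yield the same tags, which is what guarantees the consistent per-user experience described above.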

Scenario overview
This tutorial uses a distributed system with three applications forming the call chain app-a -> app-b -> app-c.
| Application | Stable version | Canary version | Canary ratio | Reason |
|---|---|---|---|---|
| app-a | v1 | v2 | 10% | New feature rollout |
| app-b | v1 | v2 | 10% | New feature rollout |
| app-c | v2 | v3 | 50% | Bug fix (low risk, fast release) |
The tutorial walks through deploying the stable baseline, rolling out app-a v2 and app-b v2 to 10% of users, adding app-c v3 at 50%, and then promoting app-c v3 to 100%.
The following diagram shows the traffic lane architecture of the distributed system with multiple versions.

The following diagram shows the target state with independent canary ratios for each application.

Prerequisites
Before you begin, make sure that you have:
An ASM instance of version 1.18 or later, with a cluster added to it. For more information, see Add a cluster to an ASM instance.
A Container Service for Kubernetes (ACK) managed cluster or an ACS cluster. For more information, see Create an ACK managed cluster or Create an ACS cluster.
An ingress gateway deployed in the cluster. For more information, see Create an ingress gateway.
Step 1: Deploy the baseline applications
Deploy app-a v1, app-b v1, and app-c v2 along with their Istio routing resources. After this step, all traffic follows the call chain app-a(v1) -> app-b(v1) -> app-c(v2).

Deploy application workloads
Create an `app-init.yaml` file with the following content.
Apply the file using the kubeconfig of the data plane cluster. This example uses the `default` namespace. Any namespace with automatic sidecar proxy injection enabled works.
kubectl apply -f app-init.yaml -n default
Configure Istio routing resources
Create an `app-init-mesh.yaml` file with the following content. This file defines VirtualServices, DestinationRules, and a Gateway to route all traffic to the stable versions.
Apply the file using the kubeconfig of the ASM instance (control plane).
kubectl apply -f app-init-mesh.yaml
Verify the baseline
Send a test request. Replace `<ingress-gateway-ip>` with the IP address of your ingress gateway. To get this address, see Obtain the IP address of an ingress gateway.
curl -H 'x-user-id: 0001' <ingress-gateway-ip>
Expected output:
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
All requests follow the call chain app-a(v1) -> app-b(v1) -> app-c(v2).
Step 2: Start the canary release of app-a and app-b
This step deploys app-a v2 and app-b v2, then configures the Hash tagging plug-in to route 10% of users to the new versions. It involves four changes:
Deploy app-a v2 and app-b v2 workloads.
Update DestinationRules and VirtualServices to add v2 subsets and tag-based routing rules.
Configure the ASMHeaderPropagation CRD to propagate `appver`-prefixed headers through the call chain.
Deploy the Hash tagging plug-in (WasmPlugin) with tagging rules for app-a and app-b.
Note: The order listed here is for clarity. In practice, adjust the order based on your application dependencies.

Deploy canary workloads
Create an `app-ab-v2.yaml` file with the following content.
Apply the file using the kubeconfig of the data plane cluster.
kubectl apply -f app-ab-v2.yaml
Update routing rules
Create an `app-ab-v2-mesh.yaml` file with the following content. This adds v2 subsets to the DestinationRules and tag-matching routes to the VirtualServices for app-a and app-b. When a request carries the header `appver-a: v2`, it routes to app-a v2. When it carries `appver-b: v2`, it routes to app-b v2. Requests without these headers go to v1 (default).
Apply the file using the kubeconfig of the ASM instance (control plane).
kubectl apply -f app-ab-v2-mesh.yaml
Enable header propagation
Create a `header-propagation.yaml` file with the following content. The ASMHeaderPropagation CRD tells the sidecar proxy to forward all headers prefixed with `appver` through the call chain. Without this, tags added at the gateway would not reach downstream applications.
apiVersion: istio.alibabacloud.com/v1beta1
kind: ASMHeaderPropagation
metadata:
  name: tag-propagation
spec:
  headerPrefixes:
  - appver
Apply the file using the kubeconfig of the data plane cluster.
kubectl apply -f header-propagation.yaml -n default
Configure the Hash tagging plug-in
Create a `hash-tagging-plugin.yaml` file with the following content.
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: hash-tagging
  namespace: istio-system
spec:
  imagePullPolicy: IfNotPresent
  selector:
    matchLabels:
      istio: ingressgateway
  url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun
  phase: AUTHN
  pluginConfig:
    rules:
    - header: x-user-id    # Hash input: user identifier
      modulo: 100          # Take the hash remainder modulo 100
      tagHeader: appver-a  # Tag header for app-a routing
      policies:
      - range: 10          # Hash remainder <= 10 -> add tag
        tagValue: v2       # Route tagged users to v2
    - header: x-user-id
      modulo: 100
      tagHeader: appver-b  # Tag header for app-b routing
      policies:
      - range: 10          # Hash remainder <= 10 -> add tag
        tagValue: v2
The following table explains the tagging rules in this configuration.
| Rule | Tag header | Range | Effect |
|---|---|---|---|
| app-a | appver-a | 10 | Hash the x-user-id value with modulo 100. If the remainder is 10 or less, add the header appver-a: v2. Routes roughly 10% of users to app-a v2. |
| app-b | appver-b | 10 | Same hash calculation and range, so the same roughly 10% of users get the header appver-b: v2 and are routed to app-b v2, keeping app-a and app-b in lockstep. Adjust the range value to match your target canary ratio. |
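Before rolling out a range value, it can help to estimate what share of users it will capture. The following is a hypothetical Python check: the plug-in's real hash function is not documented here, so CRC32 stands in, and 4-digit user IDs like those in this tutorial are assumed.

```python
import zlib

def canary_fraction(range_value: int, modulo: int = 100, n_users: int = 10000) -> float:
    """Estimate the share of user IDs whose hash remainder falls within range_value."""
    tagged = sum(
        1
        for i in range(n_users)
        if zlib.crc32(f"{i:04d}".encode()) % modulo <= range_value
    )
    return tagged / n_users

# For a well-distributed hash, the share is close to range_value / modulo
# (the exact figure depends on whether the range check is inclusive).
print(canary_fraction(10))
print(canary_fraction(50))
```

A range equal to the modulo always evaluates to 1.0, which is how a rule tags every user.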
Apply the WasmPlugin configuration.
kubectl apply -f hash-tagging-plugin.yaml
Verify the canary release
Send test requests with different user IDs.
curl -H 'x-user-id: 0001' <ingress-gateway-ip>
curl -H 'x-user-id: 0002' <ingress-gateway-ip>
curl -H 'x-user-id: 0003' <ingress-gateway-ip>
curl -H 'x-user-id: 0004' <ingress-gateway-ip>
curl -H 'x-user-id: 0005' <ingress-gateway-ip>
Expected output:
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v2, ip: 10.0.250.11)
User 0005's hash remainder falls within the configured range, so the Hash tagging plug-in tags the request and routes it to app-a v2 and app-b v2. Users 0001 through 0004 remain on v1.
Step 3: Start the canary release of app-c
While app-a v2 and app-b v2 are still in canary, the team identifies a bug in app-c v2 and wants to deploy app-c v3 to 50% of users.
Deploy app-c v3
Create an `app-c-v3.yaml` file with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-c-v3
  labels:
    app: app-c
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-c
      version: v3
      ASM_TRAFFIC_TAG: v3
  template:
    metadata:
      labels:
        app: app-c
        version: v3
        ASM_TRAFFIC_TAG: v3
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
      - name: default
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v3
        - name: app
          value: app-c
        ports:
        - containerPort: 8000
Apply the file using the kubeconfig of the data plane cluster.
kubectl apply -f app-c-v3.yaml
Configure routing for app-c v3
Create an `app-c-v3-mesh.yaml` file with the following content. This adds a v3 subset to the DestinationRule and a tag-matching route to the VirtualService for app-c. Requests with `appver-c: v3` route to app-c v3; all other requests go to app-c v2 (default).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-c
  namespace: default
spec:
  hosts:
  - app-c.default.svc.cluster.local
  http:
  - name: v3
    match:
    - headers:
        appver-c:
          exact: v3
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v3
  - name: default
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v2
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-c
spec:
  host: app-c.default.svc.cluster.local
  subsets:
  - labels:
      version: v2
    name: v2
  - labels:
      version: v3
    name: v3
Apply the file using the kubeconfig of the ASM instance (control plane).
kubectl apply -f app-c-v3-mesh.yaml
Update the Hash tagging plug-in
Create a `wasm-plugin-ab-v2-c-v3.yaml` file with the following content. This adds a third tagging rule for app-c with a range of 50, routing roughly 50% of users to app-c v3.
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: hash-tagging
  namespace: istio-system
spec:
  imagePullPolicy: IfNotPresent
  selector:
    matchLabels:
      istio: ingressgateway
  url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun
  phase: AUTHN
  pluginConfig:
    rules:
    - header: x-user-id
      modulo: 100
      tagHeader: appver-a
      policies:
      - range: 10
        tagValue: v2
    - header: x-user-id
      modulo: 100
      tagHeader: appver-b
      policies:
      - range: 10
        tagValue: v2
    - header: x-user-id
      modulo: 100
      tagHeader: appver-c
      policies:
      - range: 50          # 50% canary ratio for app-c
        tagValue: v3
Apply the updated plug-in configuration.
kubectl apply -f wasm-plugin-ab-v2-c-v3.yaml
Verify the results
Send test requests with different user IDs.
curl -H 'x-user-id: 0001' <ingress-gateway-ip>
curl -H 'x-user-id: 0002' <ingress-gateway-ip>
curl -H 'x-user-id: 0003' <ingress-gateway-ip>
curl -H 'x-user-id: 0004' <ingress-gateway-ip>
curl -H 'x-user-id: 0005' <ingress-gateway-ip>
Expected output:
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v3, ip: 10.0.250.23)
The results show three distinct routing paths, each determined independently:
| User IDs | app-a | app-b | app-c | Explanation |
|---|---|---|---|---|
| 0001, 0002 | v1 | v1 | v2 | No tags matched; all stable versions |
| 0003, 0004 | v1 | v1 | v3 | Only appver-c matched (50% range) |
| 0005 | v2 | v2 | v3 | All tags matched: appver-a and appver-b (10% range) plus appver-c (50% range) |
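Because every rule hashes the same x-user-id value with the same modulo, the per-user remainder is identical across rules, so the tagged user sets are nested rather than arbitrary: every user within the 10 range is also within the 50 range. That is why exactly these three paths appear. A small Python sketch of the decision logic, assuming an inclusive range check on the shared remainder:

```python
def route(remainder: int) -> tuple[str, str, str]:
    """Versions served by app-a, app-b, and app-c for a given hash remainder (mod 100)."""
    a = "v2" if remainder <= 10 else "v1"   # appver-a rule, range 10
    b = "v2" if remainder <= 10 else "v1"   # appver-b rule, range 10
    c = "v3" if remainder <= 50 else "v2"   # appver-c rule, range 50
    return a, b, c

# The three paths from the table above:
print(route(80))  # ('v1', 'v1', 'v2') -- all stable
print(route(30))  # ('v1', 'v1', 'v3') -- only app-c canary
print(route(5))   # ('v2', 'v2', 'v3') -- all canaries
```

Note that the combination app-a v2 with app-c v2 can never occur here: any remainder within 10 is also within 50, so every app-a canary user is also an app-c canary user.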
Step 4: Promote app-c v3 to full traffic
After verifying that app-c v3 works correctly in the canary environment, promote it by routing 100% of traffic to v3. This requires two changes: update the VirtualService to remove the tag-matching route and point the default route to v3, and remove the app-c tagging rule from the plug-in to stop propagating unnecessary headers.
Update the VirtualService
Apply the following YAML using the kubeconfig of the ASM instance (control plane). This routes all app-c traffic to v3 by updating the default route.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-c
  namespace: default
spec:
  hosts:
  - app-c.default.svc.cluster.local
  http:
  - name: default
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v3
Remove the app-c tagging rule
Apply the following YAML to update the Hash tagging plug-in. The `appver-c` rule is removed, leaving only the rules for app-a and app-b.
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: hash-tagging
  namespace: istio-system
spec:
  imagePullPolicy: IfNotPresent
  selector:
    matchLabels:
      istio: ingressgateway
  url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun
  phase: AUTHN
  pluginConfig:
    rules:
    - header: x-user-id
      modulo: 100
      tagHeader: appver-a
      policies:
      - range: 10
        tagValue: v2
    - header: x-user-id
      modulo: 100
      tagHeader: appver-b
      policies:
      - range: 10
        tagValue: v2
Verify the promotion
Send test requests with different user IDs.
curl -H 'x-user-id: 0001' <ingress-gateway-ip>
curl -H 'x-user-id: 0002' <ingress-gateway-ip>
curl -H 'x-user-id: 0003' <ingress-gateway-ip>
curl -H 'x-user-id: 0004' <ingress-gateway-ip>
curl -H 'x-user-id: 0005' <ingress-gateway-ip>
Expected output:
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v3, ip: 10.0.250.23)
All users now reach app-c v3, confirming the promotion is complete.
Note: After all traffic is switched to app-c v3, scale the app-c v2 deployment to 0 replicas or delete it based on your operational requirements.