Traffic lanes route requests to specific service versions across an entire call chain. For example, all requests tagged v2 reach the v2 instance of every service in the chain. In permissive mode, if a matching version does not exist for a given service, traffic automatically falls back to the baseline version instead of failing. This approach enables safe canary releases and version-based testing across multi-service architectures without requiring every service to have every version deployed.
How it works
Four resource types work together to implement traffic lanes:
| Resource | Role |
|---|---|
| OpenTelemetry Instrumentation | Auto-propagates baggage headers across service calls without code changes |
| ASMHeaderPropagation | ASM-specific CRD that extracts specified headers (such as version) from baggage context and attaches them as request headers throughout the call chain |
| DestinationRule | Defines service subsets (v1, v2, v3) based on pod labels |
| VirtualService | Routes requests to the correct subset by matching the propagated version header, with a fallback to the baseline subset when the target version has no healthy endpoints |
Traffic flow:

```
Client request
      |
      v
Ingress Gateway (sets version header, distributes traffic by weight)
      |
      v
mocka (version matched       --> mockb (version matched) --> mockc (version matched)
       via header)
      |                            |                           |
      v                            v                           v
If no matching version         Same fallback logic         Same fallback logic
exists, falls back to
v1 (baseline)
```

The fallback field used in VirtualService routes is an ASM-specific extension to the standard Istio VirtualService API. It triggers when the target subset has zero healthy endpoints, routing traffic to the specified fallback subset instead of returning an error. This field is not available in upstream Istio.

Baggage headers
Baggage is an OpenTelemetry mechanism for propagating key-value context across processes in a distributed trace. It uses an HTTP header:
```
baggage: userId=alice,serverNode=DF%2028,isProduction=false
```

Baggage headers carry context data such as tenant IDs, trace IDs, and security credentials, enabling trace analysis and log correlation without code modifications.
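As a quick illustration of the format (not part of the original walkthrough), the comma-separated entries in a baggage header can be split apart with standard shell tools. Values stay URL-encoded; for example, `%20` encodes a space.

```shell
# Split a baggage header value into its individual key=value entries.
# Values remain URL-encoded, exactly as they appear on the wire.
baggage='userId=alice,serverNode=DF%2028,isProduction=false'
echo "$baggage" | tr ',' '\n'
```

This prints one `key=value` pair per line: `userId=alice`, `serverNode=DF%2028`, and `isProduction=false`.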
For more information about traffic lanes, see Overview of traffic lanes.
Prerequisites
Before you begin, ensure that you have:
A Service Mesh (ASM) instance of Enterprise Edition or Ultimate Edition, version 1.21.6.54 or later -- see Create an ASM instance or Update an ASM instance
A Kubernetes cluster added to the ASM instance -- see Add a cluster to an ASM instance
An ASM ingress gateway named ingressgateway -- see Create an ingress gateway
Helm installed on your local machine -- see Install Helm
Step 1: Deploy the OpenTelemetry Operator and configure auto-instrumentation
The OpenTelemetry Operator auto-instruments service pods to propagate baggage headers across calls without code changes.
Deploy the OpenTelemetry Operator
Connect to the Kubernetes cluster with kubectl. Create the opentelemetry-operator-system namespace:

```shell
kubectl create namespace opentelemetry-operator-system
```

Install the OpenTelemetry Operator with Helm:

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install \
  --namespace=opentelemetry-operator-system \
  --version=0.46.0 \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.certManager.autoGenerateCert=true \
  --set manager.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-operator" \
  --set manager.image.tag="0.92.1" \
  --set kubeRBACProxy.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/kube-rbac-proxy" \
  --set kubeRBACProxy.image.tag="v0.13.1" \
  --set manager.collectorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-collector" \
  --set manager.collectorImage.tag="0.97.0" \
  --set manager.opampBridgeImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/operator-opamp-bridge" \
  --set manager.opampBridgeImage.tag="0.97.0" \
  --set manager.targetAllocatorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/target-allocator" \
  --set manager.targetAllocatorImage.tag="0.97.0" \
  --set manager.autoInstrumentationImage.java.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-java" \
  --set manager.autoInstrumentationImage.java.tag="1.32.1" \
  --set manager.autoInstrumentationImage.nodejs.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-nodejs" \
  --set manager.autoInstrumentationImage.nodejs.tag="0.49.1" \
  --set manager.autoInstrumentationImage.python.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-python" \
  --set manager.autoInstrumentationImage.python.tag="0.44b0" \
  --set manager.autoInstrumentationImage.dotnet.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-dotnet" \
  --set manager.autoInstrumentationImage.dotnet.tag="1.2.0" \
  --set manager.autoInstrumentationImage.go.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-go-instrumentation" \
  --set manager.autoInstrumentationImage.go.tag="v0.10.1.alpha-2-aliyun" \
  opentelemetry-operator open-telemetry/opentelemetry-operator
```

Verify that the operator pod is running:

```shell
kubectl get pod -n opentelemetry-operator-system
```

Expected output:

```
NAME                                      READY   STATUS    RESTARTS   AGE
opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m
```
Configure auto-instrumentation
Create an instrumentation.yaml file to declare the baggage propagator. Choose the configuration based on whether an OpenTelemetry Collector is deployed in your environment.
Without an OpenTelemetry Collector -- Disables metric export and tracing to avoid errors when no collector endpoint is available:
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  env:
  - name: OTEL_METRICS_EXPORTER
    value: none
  propagators:
  - baggage
  sampler:
    type: always_off
    argument: "1"
```

With an OpenTelemetry Collector -- Enables full tracing with parent-based sampling:
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  propagators:
  - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"
```

Apply the instrumentation resource to the default namespace:
```shell
kubectl apply -f instrumentation.yaml
```

Deploying an OpenTelemetry Collector to collect observability data is a best practice. For details on collecting ASM tracing data, see Collect ASM tracing data to Managed Service for OpenTelemetry.
Step 2: Deploy sample services
This step deploys three services -- mocka, mockb, and mockc -- each with three versions (v1, v2, v3). The services form a call chain: mocka -> mockb -> mockc. Each pod is annotated for Java auto-instrumentation, so baggage headers propagate automatically.
Enable automatic sidecar proxy injection for the default namespace. See Manage global namespaces. For more information about sidecar injection, see Enable automatic sidecar proxy injection.
Create a mock.yaml file with the following content. Key points about the YAML manifests:

| Configuration | Purpose |
|---|---|
| ASM_TRAFFIC_TAG label | Tags each pod with its version (v1, v2, or v3) for traffic lane routing |
| instrumentation.opentelemetry.io/inject-java: "true" annotation | Triggers OpenTelemetry auto-instrumentation for the Java container |
| instrumentation.opentelemetry.io/container-names: "default" annotation | Specifies which container to instrument |
| upstream_url env var | Defines the call chain: mocka -> mockb -> mockc |

Deploy the services. With auto-instrumentation in place, pods automatically propagate baggage headers across the call chain.
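To show how these pieces fit together, here is a trimmed sketch of what one Deployment in mock.yaml might look like. The image, port, and upstream URL are placeholders, not the actual sample values; the labels, annotations, and env var are the parts the traffic-lane setup relies on.

```yaml
# Illustrative sketch only -- image and upstream_url are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mocka-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mocka
      ASM_TRAFFIC_TAG: v2
  template:
    metadata:
      labels:
        app: mocka
        ASM_TRAFFIC_TAG: v2                                         # Version tag matched by DestinationRule subsets
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"        # Enable Java auto-instrumentation
        instrumentation.opentelemetry.io/container-names: "default" # Instrument this container
    spec:
      containers:
      - name: default
        image: <mock-service-image>      # Placeholder: use the image from mock.yaml
        env:
        - name: upstream_url
          value: "<mockb-service-url>"   # Placeholder: next hop in mocka -> mockb -> mockc
```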
```shell
kubectl apply -f mock.yaml
```
Step 3: Create routing rules for traffic lanes
This step creates destination rules, an ASMHeaderPropagation CRD, virtual services, and ingress gateway rules. Together, these resources route requests to the correct service version based on the propagated version header, with fallback to v1 when a version does not exist.
Create destination rules
Destination rules define service subsets based on pod labels. Not every service has all three versions -- this is intentional and demonstrates the permissive mode fallback behavior.
| Service | Available subsets |
|---|---|
| mocka | v1, v2, v3 |
| mockb | v1, v3 |
| mockc | v1, v2 |
Create a dr-mock.yaml file with the following content. Connect to the ASM instance with kubectl and apply the destination rules:
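The full dr-mock.yaml manifest is not reproduced here. As an illustration, the rule for mockb (which has only v1 and v3) might look like the following sketch; selecting subsets on the ASM_TRAFFIC_TAG pod label is an assumption based on the labels described in Step 2.

```yaml
# Sketch of one destination rule -- mockb defines only v1 and v3 subsets.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mockb
spec:
  host: mockb.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      ASM_TRAFFIC_TAG: v1   # Matches pods tagged with the v1 version label
  - name: v3
    labels:
      ASM_TRAFFIC_TAG: v3   # No v2 subset: requests for v2 fall back to v1
```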
```shell
kubectl apply -f dr-mock.yaml
```
Create an ASMHeaderPropagation CRD
The ASMHeaderPropagation CRD tells ASM which headers to extract from baggage context and propagate across the call chain. In this example, the version header is extracted so that downstream services receive the correct routing context.
Create a propagation.yaml file:

| Field | Description |
|---|---|
| apiVersion | istio.alibabacloud.com/v1beta1 -- ASM-specific API group |
| kind | ASMHeaderPropagation -- CRD for header propagation from baggage context |
| spec.headers | List of header names to extract from baggage and propagate as request headers |

```yaml
apiVersion: istio.alibabacloud.com/v1beta1
kind: ASMHeaderPropagation
metadata:
  name: version-propagation
spec:
  headers:
  - version
```

Connect to the ASM instance with kubectl and apply the CRD:
```shell
kubectl apply -f propagation.yaml
```
Create virtual services
Virtual services route requests to the correct subset by matching the version header. Each rule includes a fallback target pointing to v1 (the baseline version). In permissive mode, when a request targets a nonexistent version (for example, mockb has no v2), ASM routes to the fallback subset instead of returning an error.
Create a vs-mock.yaml file with the following content. The routing pattern for each service follows this structure:

```yaml
# For each version match, specify a primary destination and a fallback
- match:
  - headers:
      version:
        exact: v2                 # Match the propagated version header
  route:
  - destination:
      host: <service>.default.svc.cluster.local
      subset: v2                  # Primary: route to matching version
    fallback:
      target:
        host: <service>.default.svc.cluster.local
        subset: v1                # Fallback: route to baseline when v2 has no healthy endpoints
```

Connect to the ASM instance with kubectl and apply the virtual services:
```shell
kubectl apply -f vs-mock.yaml
```
Create ingress gateway rules
The ingress gateway distributes incoming traffic across service versions by weight and sets the version header on each request to match the target version.
Create a gw-mock.yaml file with the following content. This configuration distributes traffic to v1, v2, and v3 of mocka at a 40:30:30 ratio. For each request, the gateway sets the version header to the target version so that downstream services in the call chain receive the correct routing context through baggage propagation. Connect to the ASM instance with kubectl and apply the gateway rules:
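The weighted routing described above could be sketched as follows. The resource and gateway names are illustrative assumptions (a Gateway resource bound to the ingressgateway is presumed to exist); the weights and version header values mirror the text.

```yaml
# Sketch of weighted gateway routing -- names are assumptions, weights match the text.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mocka-gateway-vs
spec:
  hosts:
  - '*'
  gateways:
  - ingressgateway
  http:
  - route:
    - destination:
        host: mocka.default.svc.cluster.local
        subset: v1
      weight: 40
      headers:
        request:
          set:
            version: v1   # Tag the request so downstream hops follow the v1 lane
    - destination:
        host: mocka.default.svc.cluster.local
        subset: v2
      weight: 30
      headers:
        request:
          set:
            version: v2
    - destination:
        host: mocka.default.svc.cluster.local
        subset: v3
      weight: 30
      headers:
        request:
          set:
            version: v3
```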
```shell
kubectl apply -f gw-mock.yaml
```
Step 4: Verify traffic lane routing
Get the public IP address of the ingress gateway. See Obtain the IP address of the ASM ingress gateway.
Set the gateway IP as an environment variable. Replace <gateway-ip> with the actual IP address:

```shell
export ASM_GATEWAY_IP=<gateway-ip>
```

Send repeated requests to observe traffic distribution:

```shell
for i in {1..100}; do curl http://${ASM_GATEWAY_IP}; echo ''; sleep 1; done
```

Sample output:

```
-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v1, ip: 192.168.1.14)
-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)-> mockc(version: v2, ip: 192.168.1.1)
-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)-> mockc(version: v1, ip: 192.168.1.14)
```

Verify the results. The output confirms two behaviors:

Weight-based distribution: Traffic splits across v1, v2, and v3 at approximately 40:30:30.

Permissive mode fallback: When a version does not exist for a service, traffic falls back to v1:
v2 lane: mocka-v2 -> mockb-v1 (no mockb-v2 exists, so traffic falls back to v1) -> mockc-v2
v3 lane: mocka-v3 -> mockb-v3 -> mockc-v1 (no mockc-v3 exists, so traffic falls back to v1)
| Lane | mocka | mockb | mockc |
|---|---|---|---|
| v1 | v1 | v1 | v1 |
| v2 | v2 | v1 (no v2, falls back) | v2 |
| v3 | v3 | v3 | v1 (no v3, falls back) |
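For a rough numeric check of the weight split, the captured responses can be tallied with standard tools. This is a sketch, not part of the original walkthrough: in practice you would redirect the curl loop's output to a file such as responses.txt first; here three sample lines are inlined for illustration.

```shell
# Tally which mocka version served each response.
printf '%s\n' \
  '-> mocka(version: v1, ip: 192.168.1.27)-> mockb(version: v1, ip: 192.168.1.30)' \
  '-> mocka(version: v2, ip: 192.168.1.28)-> mockb(version: v1, ip: 192.168.1.30)' \
  '-> mocka(version: v3, ip: 192.168.1.26)-> mockb(version: v3, ip: 192.168.1.29)' \
  | grep -oE 'mocka\(version: v[0-9]+' | sort | uniq -c
```

Over the full 100-request run, the counts should approach the 40:30:30 ratio configured at the gateway.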