Traffic lanes route end-to-end traffic through specific service versions based on request headers. When multiple teams develop different features simultaneously, traffic lanes isolate each team's request flow across the full call chain, preventing cross-contamination between versions. In permissive mode, unmatched requests fall back to baseline services automatically.
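Conceptually, this kind of lane can be realized with header-based routing at the Istio layer. The following is a minimal sketch only, not ASM's actual lane configuration; the service host `mock-a`, the subset names, and the `x-lane-tag` header are hypothetical placeholders:

```yaml
# Hypothetical sketch: route requests that carry a lane header to a
# specific version subset, and fall back to the baseline otherwise.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mock-a-lanes          # hypothetical name
spec:
  hosts:
    - mock-a                  # hypothetical service host
  http:
    - match:
        - headers:
            x-lane-tag:       # hypothetical lane header
              exact: v2
      route:
        - destination:
            host: mock-a
            subset: v2        # subsets come from a DestinationRule
    - route:                  # no match: fall back to the baseline
        - destination:
            host: mock-a
            subset: v1
```

In permissive mode, the fallback route plays the role of the automatic baseline shown in the last rule above.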
This topic covers the shared and scenario-specific preparation steps. Complete the steps that apply to your scenario, then proceed to the corresponding scenario guide.
| Step | Task | Applies to |
|---|---|---|
| 1 | Create an Istio gateway | All scenarios |
| 2 | Deploy sample services | Scenario 1 and Scenario 2 |
| 3 | Set up baggage header propagation | Scenario 3 |
Prerequisites
Before you begin, make sure that you have:
- A Service Mesh (ASM) instance of Enterprise Edition or Ultimate Edition, version 1.18.2.111 or later. To create or upgrade an instance, see Create an ASM instance or Update an ASM instance.
- A cluster added to the ASM instance. See Add a cluster to an ASM instance.
- An ASM ingress gateway named ingressgateway. See Create an ingress gateway.
Step 1: Create an Istio gateway
Create an Istio gateway named ingressgateway in the istio-system namespace. This gateway listens on port 80 for HTTP traffic and accepts requests for all hosts. For more information, see Manage Istio gateways.
Save the following YAML to a file named gateway.yaml:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'
```

Apply the configuration:

```shell
kubectl apply -f gateway.yaml
```

Verify that the gateway is created:

```shell
kubectl get gateway ingressgateway -n istio-system
```

Expected output:

```
NAME             AGE
ingressgateway   10s
```
Step 2: Deploy sample services
This step applies to Scenario 1 and Scenario 2 only. If you plan to use Scenario 3, skip to Step 3.
Enable automatic sidecar proxy injection
Enable automatic sidecar proxy injection for the default namespace. See Enable automatic sidecar proxy injection.
For injection policy details, see Configure sidecar proxy injection policies.
Deploy the services
Deploy three versions of the sample services in your Container Service for Kubernetes (ACK) cluster:
```shell
kubectl apply -f https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-labs/swimlane/v1/mock-tracing-v1.yaml
kubectl apply -f https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-labs/swimlane/v2/mock-tracing-v2.yaml
kubectl apply -f https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-labs/swimlane/v3/mock-tracing-v3.yaml
```

Verify that all pods are running:

```shell
kubectl get pods -n default
```

All pods should show a Running status with 2/2 ready containers (the application container and the sidecar proxy).
The sample services for Scenario 1 and Scenario 2 are written in Golang. Scenario 3 uses Java-based services because the baggage header propagation mechanism has language-specific requirements. For details, see Injecting Auto-instrumentation.
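Before the scenario guides can route traffic to a specific version, the deployed versions must be addressable as subsets. The following DestinationRule is a hypothetical sketch only; the service host `mock-a` and the `version` labels are assumed to match the sample deployments, and the actual lane configuration is done in the scenario guides:

```yaml
# Hypothetical sketch: subsets mapping the three deployed versions
# to routable destinations, keyed on an assumed "version" pod label.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mock-a-versions       # hypothetical name
spec:
  host: mock-a                # hypothetical service host
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3
```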
Step 3: Set up baggage header propagation
This step applies to Scenario 3 only.
This step uses the OpenTelemetry Operator's auto-instrumentation capability to enable transparent baggage header propagation across service pods, without modifying application code.
3a. Deploy the OpenTelemetry Operator
Connect to the Kubernetes cluster added to your ASM instance and create the opentelemetry-operator-system namespace:

```shell
kubectl create namespace opentelemetry-operator-system
```

Install the OpenTelemetry Operator with Helm. If Helm is not installed, see Install Helm.

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install \
  --namespace=opentelemetry-operator-system \
  --version=0.46.0 \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.certManager.autoGenerateCert=true \
  --set manager.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-operator" \
  --set manager.image.tag="0.92.1" \
  --set kubeRBACProxy.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/kube-rbac-proxy" \
  --set kubeRBACProxy.image.tag="v0.13.1" \
  --set manager.collectorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-collector" \
  --set manager.collectorImage.tag="0.97.0" \
  --set manager.opampBridgeImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/operator-opamp-bridge" \
  --set manager.opampBridgeImage.tag="0.97.0" \
  --set manager.targetAllocatorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/target-allocator" \
  --set manager.targetAllocatorImage.tag="0.97.0" \
  --set manager.autoInstrumentationImage.java.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-java" \
  --set manager.autoInstrumentationImage.java.tag="1.32.1" \
  --set manager.autoInstrumentationImage.nodejs.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-nodejs" \
  --set manager.autoInstrumentationImage.nodejs.tag="0.49.1" \
  --set manager.autoInstrumentationImage.python.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-python" \
  --set manager.autoInstrumentationImage.python.tag="0.44b0" \
  --set manager.autoInstrumentationImage.dotnet.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-dotnet" \
  --set manager.autoInstrumentationImage.dotnet.tag="1.2.0" \
  --set manager.autoInstrumentationImage.go.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-go-instrumentation" \
  --set manager.autoInstrumentationImage.go.tag="v0.10.1.alpha-2-aliyun" \
  opentelemetry-operator open-telemetry/opentelemetry-operator
```

Verify that the Operator is running:

```shell
kubectl get pod -n opentelemetry-operator-system
```

Expected output:

```
NAME                                      READY   STATUS    RESTARTS   AGE
opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m
```

Both containers (2/2) should be in Running status.
3b. Configure auto-instrumentation
The Instrumentation resource tells the OpenTelemetry Operator how to inject instrumentation into service pods. The propagators: [baggage] setting enables W3C Baggage header propagation, which traffic lanes rely on to pass routing context between services.
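For reference, W3C Baggage travels in a single HTTP header made up of comma-separated key=value pairs. A lane-routing setup might carry an entry like the following; the key names here are hypothetical, and the keys ASM actually matches on may differ:

```
baggage: lane=v2,team=feature-x
```

Because each service forwards this header unchanged to its downstream calls, the routing context set at the entry point survives across the whole call chain.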
Choose the configuration that matches your environment:
Option A: OpenTelemetry Collector is not deployed
Use this configuration when you only need baggage header propagation for traffic lanes without collecting metrics or traces. The always_off sampler disables trace collection, and OTEL_METRICS_EXPORTER: none disables metric export, so no telemetry data is generated.
Save the following YAML to a file named instrumentation.yaml:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  env:
    - name: OTEL_METRICS_EXPORTER
      value: none
  propagators:
    - baggage
  sampler:
    type: always_off
    argument: "1"
```

Option B: OpenTelemetry Collector is deployed
Use this configuration when you have an OpenTelemetry Collector in your cluster and want to collect both traces and baggage headers. The parentbased_traceidratio sampler with argument "1" samples 100% of traces.
Save the following YAML to a file named instrumentation.yaml:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  propagators:
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"
```

If an OpenTelemetry Collector is not deployed, Metric Collection and Tracing Analysis cannot be enabled. To collect ASM tracing data to Managed Service for OpenTelemetry, see Collect ASM tracing data to Managed Service for OpenTelemetry.
Apply the Instrumentation resource to the default namespace:
```shell
kubectl apply -f instrumentation.yaml -n default
```

Verify the Instrumentation resource:

```shell
kubectl describe instrumentation demo-instrumentation -n default
```

The output should show the configured propagators and sampler settings.
The annotation step required to activate auto-instrumentation on individual pods is covered in Scenario 3. Deploying an OpenTelemetry Collector is beyond the scope of this topic. To collect ASM tracing data, see Collect ASM tracing data to Managed Service for OpenTelemetry.
What to do next
Proceed to the scenario that matches your use case:
| Scenario | When to use | Required steps |
|---|---|---|
| Scenario 1: Baggage header is not propagated by the application | Services do not pass through the baggage header | Step 1, Step 2 |
| Scenario 2: Baggage header is propagated by the application | Services already propagate the baggage header in application code | Step 1, Step 2 |
| Scenario 3: Transparent baggage header propagation | Automatic baggage propagation without modifying application code | Step 1, Step 3 |