
Alibaba Cloud Service Mesh:Prepare for traffic lanes in permissive mode

Last Updated: Mar 11, 2026

Traffic lanes route end-to-end traffic through specific service versions based on request headers. When multiple teams develop different features simultaneously, traffic lanes isolate each team's request flow across the full call chain, preventing cross-contamination between versions. In permissive mode, unmatched requests fall back to baseline services automatically.

This topic covers the shared and scenario-specific preparation steps. Complete the steps that apply to your scenario, then proceed to the corresponding scenario guide.

Step  Task                               Applies to
1     Create an Istio gateway            All scenarios
2     Deploy sample services             Scenario 1 and Scenario 2
3     Set up baggage header propagation  Scenario 3

Prerequisites

Before you begin, make sure that you have:

  - A Service Mesh (ASM) instance with a Kubernetes cluster added to it. The examples in this topic use a Container Service for Kubernetes (ACK) cluster.
  - A kubectl client that is connected to the cluster.

Step 1: Create an Istio gateway

Create an Istio gateway named ingressgateway in the istio-system namespace. This gateway listens on port 80 for HTTP traffic and accepts requests for all hosts. For more information, see Manage Istio gateways.

  1. Save the following YAML to a file named gateway.yaml:

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: ingressgateway
      namespace: istio-system
    spec:
      selector:
        istio: ingressgateway
      servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
            - '*'
  2. Apply the configuration:

    kubectl apply -f gateway.yaml
  3. Verify that the gateway is created:

    kubectl get gateway ingressgateway -n istio-system

    Expected output:

    NAME              AGE
    ingressgateway    10s
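The scenario guides send test requests through this gateway. If your ingress gateway is exposed through a LoadBalancer Service, you can look up its external IP with a command like the following. The Service name istio-ingressgateway is an assumption based on the gateway selector above; use the name shown in your cluster.

```shell
# Look up the external IP of the ingress gateway Service.
# Assumes the gateway is exposed as a LoadBalancer Service named
# istio-ingressgateway in the istio-system namespace.
kubectl get svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

Record this IP; it is used to send requests into the traffic lanes later.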

Step 2: Deploy sample services

Note

This step applies to Scenario 1 and Scenario 2 only. If you plan to use Scenario 3, skip to Step 3.

Enable automatic sidecar proxy injection

Enable automatic sidecar proxy injection for the default namespace. See Enable automatic sidecar proxy injection.

For injection policy details, see Configure sidecar proxy injection policies.
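In open source Istio, automatic injection is typically enabled by labeling the namespace. ASM can also manage injection from the console, so treat the following as a sketch of the standard Istio approach rather than the only way to do it:

```shell
# Label the default namespace so that newly created pods receive a
# sidecar proxy. This is the standard Istio injection label; ASM may
# also manage this setting from its console.
kubectl label namespace default istio-injection=enabled --overwrite
```

Pods created before the label was applied must be restarted to receive the sidecar.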

Deploy the services

Deploy three versions of the sample services in your Container Service for Kubernetes (ACK) cluster:

kubectl apply -f https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-labs/swimlane/v1/mock-tracing-v1.yaml
kubectl apply -f https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-labs/swimlane/v2/mock-tracing-v2.yaml
kubectl apply -f https://alibabacloudservicemesh.oss-cn-beijing.aliyuncs.com/asm-labs/swimlane/v3/mock-tracing-v3.yaml

Verify that all pods are running:

kubectl get pods -n default

All pods should show a Running status with 2/2 ready containers (the application container and the sidecar proxy).
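To confirm that the sidecar was actually injected, you can list the container names in each pod. Injected pods should include istio-proxy alongside the application container. A sketch:

```shell
# Print "<pod name>: <container names>" for each pod in the default
# namespace. Injected pods list istio-proxy next to the app container.
kubectl get pods -n default -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```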

Note

The sample services for Scenario 1 and Scenario 2 are written in Go. Scenario 3 uses Java-based services because the auto-instrumentation mechanism that propagates the baggage header has language-specific requirements. For details, see Injecting Auto-instrumentation.

Step 3: Set up baggage header propagation

Note

This step applies to Scenario 3 only.

This step uses the OpenTelemetry Operator's auto-instrumentation capability to enable transparent baggage header propagation across service pods, without modifying application code.

3a. Deploy the OpenTelemetry Operator

  1. Connect to the Kubernetes cluster added to your ASM instance and create the opentelemetry-operator-system namespace:

    kubectl create namespace opentelemetry-operator-system
  2. Install the OpenTelemetry Operator with Helm. If Helm is not installed, see Install Helm.

    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm install \
        --namespace=opentelemetry-operator-system \
        --version=0.46.0 \
        --set admissionWebhooks.certManager.enabled=false \
        --set admissionWebhooks.certManager.autoGenerateCert=true \
        --set manager.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-operator" \
        --set manager.image.tag="0.92.1" \
        --set kubeRBACProxy.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/kube-rbac-proxy" \
        --set kubeRBACProxy.image.tag="v0.13.1" \
        --set manager.collectorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-collector" \
        --set manager.collectorImage.tag="0.97.0" \
        --set manager.opampBridgeImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/operator-opamp-bridge" \
        --set manager.opampBridgeImage.tag="0.97.0" \
        --set manager.targetAllocatorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/target-allocator" \
        --set manager.targetAllocatorImage.tag="0.97.0" \
        --set manager.autoInstrumentationImage.java.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-java" \
        --set manager.autoInstrumentationImage.java.tag="1.32.1" \
        --set manager.autoInstrumentationImage.nodejs.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-nodejs" \
        --set manager.autoInstrumentationImage.nodejs.tag="0.49.1" \
        --set manager.autoInstrumentationImage.python.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-python" \
        --set manager.autoInstrumentationImage.python.tag="0.44b0" \
        --set manager.autoInstrumentationImage.dotnet.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-dotnet" \
        --set manager.autoInstrumentationImage.dotnet.tag="1.2.0" \
        --set manager.autoInstrumentationImage.go.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-go-instrumentation" \
        --set manager.autoInstrumentationImage.go.tag="v0.10.1.alpha-2-aliyun" \
        opentelemetry-operator open-telemetry/opentelemetry-operator
  3. Verify that the Operator is running:

    kubectl get pod -n opentelemetry-operator-system

    Expected output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m

    Both containers (2/2) should be in Running status.

3b. Configure auto-instrumentation

The Instrumentation resource tells the OpenTelemetry Operator how to inject instrumentation into service pods. The propagators: [baggage] setting enables W3C Baggage header propagation, which traffic lanes rely on to pass routing context between services.
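For context, W3C Baggage travels in an HTTP header named baggage that carries comma-separated key=value pairs. The following request is purely illustrative; the GATEWAY_IP variable and the version=v2 pair are hypothetical placeholders, and the actual lane routing keys are defined in the scenario guides:

```shell
# Illustrative only: a request that carries a W3C Baggage header
# through the ingress gateway. GATEWAY_IP and version=v2 are
# hypothetical placeholders, not values defined by this topic.
curl -H 'baggage: version=v2' "http://${GATEWAY_IP}/"
```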

Choose the configuration that matches your environment:

Option A: OpenTelemetry Collector is not deployed

Use this configuration when you only need baggage header propagation for traffic lanes without collecting metrics or traces. The always_off sampler disables trace collection, and OTEL_METRICS_EXPORTER: none disables metric export, so no telemetry data is generated.

Save the following YAML to a file named instrumentation.yaml:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  env:
  - name: OTEL_METRICS_EXPORTER
    value: none
  propagators:
  - baggage
  sampler:
    argument: "1"
    type: always_off

Option B: OpenTelemetry Collector is deployed

Use this configuration when you have an OpenTelemetry Collector in your cluster and want to collect both traces and baggage headers. The parentbased_traceidratio sampler with argument "1" samples 100% of traces.

Save the following YAML to a file named instrumentation.yaml:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  propagators:
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"

Important

If an OpenTelemetry Collector is not deployed, Metric Collection and Tracing Analysis cannot be enabled. To collect ASM tracing data to Managed Service for OpenTelemetry, see Collect ASM tracing data to Managed Service for OpenTelemetry.

Apply the Instrumentation resource to the default namespace:

kubectl apply -f instrumentation.yaml -n default

Verify the Instrumentation resource:

kubectl describe instrumentation demo-instrumentation -n default

The output should show the configured propagators and sampler settings.

Note

The annotation step required to activate auto-instrumentation on individual pods is covered in Scenario 3. Deploying an OpenTelemetry Collector is beyond the scope of this topic. To collect ASM tracing data, see Collect ASM tracing data to Managed Service for OpenTelemetry.

What to do next

Proceed to the scenario that matches your use case:

Scenario 1: Baggage header is not propagated by the application. Use this scenario when services do not pass the baggage header through. Required steps: Step 1 and Step 2.
Scenario 2: Baggage header is propagated by the application. Use this scenario when services already propagate the baggage header in application code. Required steps: Step 1 and Step 2.
Scenario 3: Transparent baggage header propagation. Use this scenario for automatic baggage propagation without modifying application code. Required steps: Step 1 and Step 3.