
Alibaba Cloud Service Mesh: Route traffic lane requests with custom VirtualServices in permissive mode

Last Updated: Mar 11, 2026

The ASM console provides built-in routing rules for traffic lanes, but these rules only support header and path matching. When you need weighted traffic splitting, fallback targets, or custom header injection, create a custom VirtualService instead.

This topic describes how to create custom VirtualServices on the ASM ingress gateway and on sidecar proxies to route traffic across lanes in permissive mode.

Important

This topic builds on the traffic lane setup described in Use traffic lanes in permissive mode to manage end-to-end traffic. Complete that setup before proceeding. The examples below modify steps from Scenario 1: Pass through trace IDs in traces and Scenario 2: Pass through custom request headers in traces.

How it works

A custom VirtualService defines routing rules on the ASM ingress gateway or on sidecar proxies. Each rule matches incoming requests by header values, then routes them to specific traffic lanes (subsets) with configurable weights.

Traffic flow through ASM ingress gateway with custom VirtualService routing

The examples in this topic use three traffic lanes:

Lane | Subset | Role
---- | ------ | ----
s1   | s1     | Baseline (default)
s2   | s2     | Canary lane A
s3   | s3     | Canary lane B

The routing logic:

  • Requests with the env: dev header split 50/50 between lanes s2 and s3.

  • If lane s3 is unavailable, traffic falls back to lane s1.

  • All other requests go to lane s1.

After the ingress gateway routes a request to a lane, the x-asm-prefer-tag request routing header pins all subsequent calls within the same trace to that lane.

Note

Do not combine custom VirtualServices with the built-in routing rule creation feature for the same traffic lane. The two can conflict and cause unexpected traffic distribution.

Create a custom VirtualService on the ingress gateway

Replace the routing rule creation step (substep 3, "Create drainage rules for the three lanes" in Step 1) of Scenario 1 with the following VirtualService. For details on managing VirtualServices, see Manage virtual services.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: swimlane-ingress-vs-custom
  namespace: istio-system
spec:
  gateways:
    - istio-system/ingressgateway
  hosts:
    - '*'
  http:
    # Route 1: Match requests with the "env: dev" header.
    # Split traffic 50/50 between lanes s2 and s3.
    - match:
        - headers:
            env:
              exact: dev
      name: dev-route
      route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s2
          weight: 50
          headers:
            request:
              set:
                x-asm-prefer-tag: s2
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s3
          fallback:
            target:
              host: mocka.default.svc.cluster.local
              subset: s1
          weight: 50
          headers:
            request:
              set:
                x-asm-prefer-tag: s3
    # Route 2: Default route. Send all other requests to lane s1.
    - name: base-route
      route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s1
          headers:
            request:
              set:
                x-asm-prefer-tag: s1
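
The subsets s1, s2, and s3 referenced above must exist in a DestinationRule for the mocka service. The traffic lane setup from the prerequisite topic generates this resource for you; for orientation, a manual equivalent would look roughly like the sketch below. The resource name and the ASM_TRAFFIC_TAG label key are assumptions here; confirm both against the DestinationRule that your lane setup actually created.

```yaml
# Sketch only: the traffic lane feature generates an equivalent
# DestinationRule automatically. Name and label key are assumptions;
# verify them against the generated resource in your cluster.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mocka-swimlane-dr        # hypothetical name
  namespace: default
spec:
  host: mocka.default.svc.cluster.local
  subsets:
    - name: s1
      labels:
        ASM_TRAFFIC_TAG: v1      # assumed lane label on baseline workloads
    - name: s2
      labels:
        ASM_TRAFFIC_TAG: v2      # assumed lane label on canary lane A
    - name: s3
      labels:
        ASM_TRAFFIC_TAG: v3      # assumed lane label on canary lane B
```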

Field reference:

Field | Description
----- | -----------
gateways | Binds this VirtualService to the ASM ingress gateway
hosts: '*' | Matches all hostnames at the gateway
match.headers.env.exact: dev | Matches requests carrying the env: dev header
route[].destination.subset | Specifies the target traffic lane
route[].weight | Controls the traffic split ratio
route[].headers.request.set | Sets x-asm-prefer-tag to pin subsequent trace calls to the lane
fallback.target | Specifies a fallback lane when the primary target is unavailable

Note

The headers.request.set value must match the request routing header for each lane. For example, in Scenario 2, where the pass-through request header serves as the routing header, change the values to my-trace-id: s1, my-trace-id: s2, and my-trace-id: s3 respectively.
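
For example, under the Scenario 2 assumption that my-trace-id is the routing header, the dev-route destinations would set that header instead of x-asm-prefer-tag:

```yaml
# Scenario 2 variant of the dev-route destinations: the pass-through
# header my-trace-id replaces x-asm-prefer-tag as the routing header.
      route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s2
          weight: 50
          headers:
            request:
              set:
                my-trace-id: s2
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s3
          weight: 50
          headers:
            request:
              set:
                my-trace-id: s3
```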

Verify the routing behavior

Run the verification commands from Step 3: Verify that the end-to-end canary release feature takes effect. Without the env: dev header, all requests route to lane s1:

-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v1, ip: 192.168.0.46)-> mockc(version: v1, ip: 192.168.0.48)

To test the dev-route rule, send 100 requests that include the env: dev header. Each request also carries x-asm-prefer-tag: s1, but the dev-route match is evaluated first, so these requests still split between lanes s2 and s3:

for i in {1..100}; do curl -H 'x-asm-prefer-tag: s1' -H 'env: dev' -H 'my-trace-id: x000'$i http://${ASM_GATEWAY_IP}/mock; echo ''; sleep 1; done

Expected output (representative sample):

-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v3, ip: 192.168.0.42)-> mockc(version: v1, ip: 192.168.0.48)
-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v3, ip: 192.168.0.42)-> mockc(version: v1, ip: 192.168.0.48)
-> mocka(version: v2, ip: 192.168.0.47)-> mockb(version: v1, ip: 192.168.0.46)-> mockc(version: v2, ip: 192.168.0.43)
-> mocka(version: v2, ip: 192.168.0.47)-> mockb(version: v1, ip: 192.168.0.46)-> mockc(version: v2, ip: 192.168.0.43)
-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v3, ip: 192.168.0.42)-> mockc(version: v1, ip: 192.168.0.48)
...

The output shows two trace patterns at roughly a 50:50 ratio:

Trace pattern | Routed to lane | Explanation
------------- | -------------- | -----------
v1 -> v3 -> v1 | s2 | Request matched env: dev and was sent to lane s2
v2 -> v1 -> v2 | s3 | Request matched env: dev and was sent to lane s3

This confirms that requests with the env: dev header split evenly between lanes s2 and s3, while all remaining requests go to lane s1.
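
To check the split ratio rather than eyeballing it, redirect the loop output to a file and count each pattern. A minimal sketch, where the sample lines below stand in for your real captured output:

```shell
# Count the two trace patterns in captured verification output.
# results.txt stands in for the redirected output of the 100-request loop.
cat > results.txt <<'EOF'
-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v3, ip: 192.168.0.42)-> mockc(version: v1, ip: 192.168.0.48)
-> mocka(version: v2, ip: 192.168.0.47)-> mockb(version: v1, ip: 192.168.0.46)-> mockc(version: v2, ip: 192.168.0.43)
-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v3, ip: 192.168.0.42)-> mockc(version: v1, ip: 192.168.0.48)
EOF
# v1 -> v3 -> v1 indicates lane s2; v2 -> v1 -> v2 indicates lane s3.
echo "lane s2: $(grep -c 'mockb(version: v3' results.txt)"
echo "lane s3: $(grep -c 'mocka(version: v2' results.txt)"
```

With 100 real requests, both counts should land near 50.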

Note

When you create custom VirtualServices to route traffic for traffic lanes in strict mode, you only need to set the route.destination.subset field to the name of the target traffic lane. After a request is routed to a lane, all subsequent requests in the trace are always routed to that lane.
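
In strict mode, the same dev-route can therefore be reduced to destinations only. A sketch of the simplified route section:

```yaml
# Strict-mode sketch: no headers.request.set is needed, because strict
# mode pins every subsequent call in the trace to the matched lane.
    - match:
        - headers:
            env:
              exact: dev
      name: dev-route
      route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s2
          weight: 50
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s3
          weight: 50
```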

Create custom VirtualServices on sidecar proxies

To route service-to-service calls within the cluster (not just traffic entering through the ingress gateway), create a VirtualService on sidecar proxies. Two fields differ from the ingress gateway version:

Field | Ingress gateway | Sidecar proxy
----- | --------------- | -------------
gateways | Required (istio-system/ingressgateway) | Omitted
hosts | '*' (all hostnames) | Cluster domain (e.g., mocka.default.svc.cluster.local)

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: swimlane-sidecar-vs-custom
  namespace: istio-system
spec:
  hosts:
    - mocka.default.svc.cluster.local
  http:
    - match:
        - headers:
            env:
              exact: dev
      name: dev-route
      route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s2
          weight: 50
          headers:
            request:
              set:
                x-asm-prefer-tag: s2
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s3
          weight: 50
          fallback:
            target:
              host: mocka.default.svc.cluster.local
              subset: s1
          headers:
            request:
              set:
                x-asm-prefer-tag: s3
    - name: base-route
      route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: s1
          headers:
            request:
              set:
                x-asm-prefer-tag: s1
Note

As with the ingress gateway version, the headers.request.set value must match the corresponding request routing header.

Verify routing without the env: dev header

Send requests to each lane without the env: dev header:

kubectl exec -it deploy/sleep -c sleep -- sh -c 'for i in $(seq 1 100); do curl -H "my-trace-id: s1" http://mocka:8000; echo ""; sleep 1; done'
kubectl exec -it deploy/sleep -c sleep -- sh -c 'for i in $(seq 1 100); do curl -H "my-trace-id: s2" http://mocka:8000; echo ""; sleep 1; done'
kubectl exec -it deploy/sleep -c sleep -- sh -c 'for i in $(seq 1 100); do curl -H "my-trace-id: s3" http://mocka:8000; echo ""; sleep 1; done'

Expected output:

-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v1, ip: 192.168.0.46)-> mockc(version: v1, ip: 192.168.0.48)

All requests route to lane s1 regardless of the my-trace-id value, because no env: dev header is present.

Verify routing with the env: dev header

Send requests to each lane with the env: dev header:

kubectl exec -it deploy/sleep -c sleep -- sh -c 'for i in $(seq 1 100); do curl -H "my-trace-id: s1" -H "env: dev" http://mocka:8000; echo ""; sleep 1; done'
kubectl exec -it deploy/sleep -c sleep -- sh -c 'for i in $(seq 1 100); do curl -H "my-trace-id: s2" -H "env: dev" http://mocka:8000; echo ""; sleep 1; done'
kubectl exec -it deploy/sleep -c sleep -- sh -c 'for i in $(seq 1 100); do curl -H "my-trace-id: s3" -H "env: dev" http://mocka:8000; echo ""; sleep 1; done'

Expected output (representative sample):

-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v3, ip: 192.168.0.42)-> mockc(version: v1, ip: 192.168.0.48)
-> mocka(version: v2, ip: 192.168.0.47)-> mockb(version: v1, ip: 192.168.0.46)-> mockc(version: v2, ip: 192.168.0.43)
-> mocka(version: v1, ip: 192.168.0.50)-> mockb(version: v3, ip: 192.168.0.42)-> mockc(version: v1, ip: 192.168.0.48)
-> mocka(version: v2, ip: 192.168.0.47)-> mockb(version: v1, ip: 192.168.0.46)-> mockc(version: v2, ip: 192.168.0.43)
...

The output shows v1 -> v3 -> v1 and v2 -> v1 -> v2 trace patterns at a 50:50 ratio, confirming that the sidecar proxy VirtualService routes env: dev requests to lanes s2 and s3 evenly. Once a request enters a lane, all subsequent calls within the trace remain in that lane.

Traffic flow through sidecar proxies with custom VirtualService routing

What to read next