
Alibaba Cloud Service Mesh: Canary release of multiple applications with independent ratios using the Hash tagging plug-in

Last Updated: Mar 11, 2026

When you run canary releases through traffic lanes (such as loose-mode and strict-mode traffic lanes), all applications on a call chain share the same release schedule. In practice, however, different applications often need independent canary ratios. For example, two applications rolling out a new feature might target 10% of users while a third application shipping a bug fix needs to reach 50% of users, all at the same time on the same call chain.

Alibaba Cloud Service Mesh (ASM) solves this with the Hash tagging plug-in, which assigns multiple tags to each request at the gateway. Combined with the ASMHeaderPropagation CRD for tag propagation and Istio routing rules for tag-based matching, each application on the call chain routes traffic independently based on its own canary ratio.

Unlike Kubernetes-native canary releases that rely on replica ratios, ASM decouples traffic routing from pod scaling. Canary percentages are controlled through routing rules, independent of how many replicas each version runs, so 10% of users can be routed to a canary version without provisioning exactly one canary pod for every nine stable pods.

How it works

In a standard traffic lane setup, a single tag determines which version every application on the call chain serves. To support independent canary ratios, the Hash tagging plug-in adds a separate tag header for each application (for example, appver-a, appver-b, appver-c). Each tag has its own hash range that controls what percentage of users are routed to the canary version.

The end-to-end flow:

  1. A client sends a request with a user identifier (for example, the x-user-id header).

  2. The Hash tagging plug-in at the ASM gateway hashes the identifier, computes the remainder of dividing the hash by a modulo (for example, 100), and checks whether the remainder falls within each tag's configured range.

  3. For each tag whose range matches, the plug-in adds a tag header to the request (for example, appver-a: v2).

  4. The ASMHeaderPropagation CRD propagates all appver-prefixed headers through the call chain.

  5. Each application's VirtualService matches the corresponding tag header and routes the request to the appropriate version.

Because the hash is deterministic, the same user always reaches the same set of versions, providing a consistent experience across requests.

Traffic flow with multi-tag routing
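The hash-and-tag flow above can be sketched in a few lines. This is an illustrative Python model, not the plug-in itself: the plug-in's actual hash function is internal to ASM, so md5 stands in for it here, and treating the range check as an exclusive upper bound on the remainder is also an assumption.

```python
import hashlib

def in_range(user_id: str, modulo: int, tag_range: int) -> bool:
    """True if the user's hash remainder falls within the tag's range."""
    # md5 is a stand-in for the plug-in's internal deterministic hash.
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return digest % modulo < tag_range

def tag_request(user_id: str) -> dict:
    """Simulate the gateway adding tag headers under the Step 2 rules."""
    headers = {}
    if in_range(user_id, modulo=100, tag_range=10):    # ~10% of users
        headers["appver-a"] = "v2"
    if in_range(user_id, modulo=100, tag_range=100):   # every remainder matches
        headers["appver-b"] = "v2"
    return headers
```

Because the hash is deterministic, calling `tag_request` twice with the same user ID always yields the same headers, which is what keeps a given user pinned to one set of versions across requests.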

Scenario overview

This tutorial uses a distributed system with three applications forming the call chain app-a -> app-b -> app-c.

| Application | Stable version | Canary version | Canary ratio | Reason |
| --- | --- | --- | --- | --- |
| app-a | v1 | v2 | 10% | New feature rollout |
| app-b | v1 | v2 | 10% | New feature rollout |
| app-c | v2 | v3 | 50% | Bug fix (low risk, fast release) |

The tutorial walks through deploying the stable baseline, rolling out app-a v2 and app-b v2 to 10% of users, adding app-c v3 at 50%, and then promoting app-c v3 to 100%.

The following diagram shows the traffic lane architecture of the distributed system with multiple versions.

Traffic lane architecture

The following diagram shows the target state with independent canary ratios for each application.

Application architecture with version distribution

Prerequisites

Before you begin, make sure that you have:

Step 1: Deploy the baseline applications

Deploy app-a v1, app-b v1, and app-c v2 along with their Istio routing resources. After this step, all traffic follows the call chain app-a(v1) -> app-b(v1) -> app-c(v2).

Baseline call chain

Deploy application workloads

  1. Create an app-init.yaml file with the following content.

app-init.yaml

apiVersion: v1
kind: Service
metadata:
  name: app-a
  labels:
    app: app-a
    service: app-a
spec:
  ports:
  - port: 8000
    name: http
  selector:
    app: app-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a-v1
  labels:
    app: app-a
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
      version: v1
      ASM_TRAFFIC_TAG: v1
  template:
    metadata:
      labels:
        app: app-a
        version: v1
        ASM_TRAFFIC_TAG: v1
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
      - name: default
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v1
        - name: app
          value: app-a
        - name: upstream_url
          value: "http://app-b:8000/"
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: app-b
  labels:
    app: app-b
    service: app-b
spec:
  ports:
  - port: 8000
    name: http
  selector:
    app: app-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b-v1
  labels:
    app: app-b
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-b
      version: v1
      ASM_TRAFFIC_TAG: v1
  template:
    metadata:
      labels:
        app: app-b
        version: v1
        ASM_TRAFFIC_TAG: v1
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
      - name: default
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v1
        - name: app
          value: app-b
        - name: upstream_url
          value: "http://app-c:8000/"
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: app-c
  labels:
    app: app-c
    service: app-c
spec:
  ports:
  - port: 8000
    name: http
  selector:
    app: app-c
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-c-v2
  labels:
    app: app-c
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-c
      version: v2
      ASM_TRAFFIC_TAG: v2
  template:
    metadata:
      labels:
        app: app-c
        version: v2
        ASM_TRAFFIC_TAG: v2
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
      - name: default
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v2
        - name: app
          value: app-c
        ports:
        - containerPort: 8000

  2. Apply the file using the kubeconfig of the data plane cluster. This example uses the default namespace. Any namespace with automatic sidecar proxy injection enabled works.

kubectl apply -f app-init.yaml -n default

Configure Istio routing resources

  1. Create an app-init-mesh.yaml file with the following content. This file defines VirtualServices, DestinationRules, and a Gateway to route all traffic to the stable versions.

app-init-mesh.yaml

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-b
  namespace: default
spec:
  hosts:
  - app-b.default.svc.cluster.local
  http:
  - name: default
    route:
    - destination:
        host: app-b.default.svc.cluster.local
        port:
          number: 8000
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-c
  namespace: default
spec:
  hosts:
  - app-c.default.svc.cluster.local
  http:
  - name: default
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v2
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingressgateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: http
        number: 80
        protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  gateways:
  - default/ingressgateway
  hosts:
  - '*'
  http:
  - name: default
    route:
    - destination:
        host: app-a.default.svc.cluster.local
        port:
          number: 8000
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-a
  namespace: default
spec:
  host: app-a.default.svc.cluster.local
  subsets:
    - labels:
        version: v1
      name: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-b
  namespace: default
spec:
  host: app-b.default.svc.cluster.local
  subsets:
    - labels:
        version: v1
      name: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-c
  namespace: default
spec:
  host: app-c.default.svc.cluster.local
  subsets:
    - labels:
        version: v2
      name: v2

  2. Apply the file using the kubeconfig of the ASM instance (control plane).

kubectl apply -f app-init-mesh.yaml

Verify the baseline

  1. Send a test request. Replace <ingress-gateway-ip> with the IP address of your ingress gateway. To get this address, see Obtain the IP address of an ingress gateway.

curl -H 'x-user-id: 0001' <ingress-gateway-ip>

Expected output:

-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)

All requests follow the call chain app-a(v1) -> app-b(v1) -> app-c(v2).

Step 2: Start the canary release of app-a and app-b

This step deploys app-a v2 and app-b v2, then configures the Hash tagging plug-in to route 10% of users to the new versions. It involves four changes:

  1. Deploy app-a v2 and app-b v2 workloads.

  2. Update DestinationRules and VirtualServices to add v2 subsets and tag-based routing rules.

  3. Configure the ASMHeaderPropagation CRD to propagate appver-prefixed headers through the call chain.

  4. Deploy the Hash tagging plug-in (WasmPlugin) with tagging rules for app-a and app-b.

Note

The order listed here is for clarity. In practice, adjust the order based on your application dependencies.

Canary architecture for app-a and app-b

Deploy canary workloads

  1. Create an app-ab-v2.yaml file with the following content.

app-ab-v2.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a-v2
  labels:
    app: app-a
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
      version: v2
      ASM_TRAFFIC_TAG: v2
  template:
    metadata:
      labels:
        app: app-a
        version: v2
        ASM_TRAFFIC_TAG: v2
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
      - name: default
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v2
        - name: app
          value: app-a
        - name: upstream_url
          value: "http://app-b:8000/"
        ports:
        - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b-v2
  labels:
    app: app-b
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-b
      version: v2
      ASM_TRAFFIC_TAG: v2
  template:
    metadata:
      labels:
        app: app-b
        version: v2
        ASM_TRAFFIC_TAG: v2
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
      - name: default
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v2
        - name: app
          value: app-b
        - name: upstream_url
          value: "http://app-c:8000/"
        ports:
        - containerPort: 8000

  2. Apply the file using the kubeconfig of the data plane cluster.

kubectl apply -f app-ab-v2.yaml

Update routing rules

  1. Create an app-ab-v2-mesh.yaml file with the following content. This adds v2 subsets to the DestinationRules and tag-matching routes to the VirtualServices for app-a and app-b. When a request carries the header appver-a: v2, it routes to app-a v2. When it carries appver-b: v2, it routes to app-b v2. Requests without these headers go to v1 (default).

app-ab-v2-mesh.yaml

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-b
  namespace: default
spec:
  hosts:
  - app-b.default.svc.cluster.local
  http:
  - name: v2
    match:
    - headers:
        appver-b:
          exact: v2
    route:
    - destination:
        host: app-b.default.svc.cluster.local
        port:
          number: 8000
        subset: v2
  - name: default
    route:
    - destination:
        host: app-b.default.svc.cluster.local
        port:
          number: 8000
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-c
  namespace: default
spec:
  hosts:
  - app-c.default.svc.cluster.local
  http:
  - name: default
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ingressgateway
  namespace: default
spec:
  gateways:
  - default/ingressgateway
  hosts:
  - '*'
  http:
  - name: v2
    match:
    - headers:
        appver-a:
          exact: v2
    route:
    - destination:
        host: app-a.default.svc.cluster.local
        port:
          number: 8000
        subset: v2
  - name: default
    route:
    - destination:
        host: app-a.default.svc.cluster.local
        port:
          number: 8000
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-a
spec:
  host: app-a.default.svc.cluster.local
  subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-b
spec:
  host: app-b.default.svc.cluster.local
  subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-c
spec:
  host: app-c.default.svc.cluster.local
  subsets:
    - labels:
        version: v2
      name: v2

  2. Apply the file using the kubeconfig of the ASM instance (control plane).

kubectl apply -f app-ab-v2-mesh.yaml

Enable header propagation

  1. Create a header-propagation.yaml file with the following content. The ASMHeaderPropagation CRD tells the sidecar proxy to forward all headers prefixed with appver through the call chain. Without this, tags added at the gateway would not reach downstream applications.

apiVersion: istio.alibabacloud.com/v1beta1
kind: ASMHeaderPropagation
metadata:
  name: tag-propagation
spec:
  headerPrefixes:
    - appver

  2. Apply the file using the kubeconfig of the data plane cluster.

kubectl apply -f header-propagation.yaml -n default

Configure the Hash tagging plug-in

  1. Create a hash-tagging-plugin.yaml file with the following content.

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: hash-tagging
  namespace: istio-system
spec:
  imagePullPolicy: IfNotPresent
  selector:
    matchLabels:
      istio: ingressgateway
  url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun
  phase: AUTHN
  pluginConfig:
    rules:
      - header: x-user-id         # Hash input: user identifier
        modulo: 100               # Modulo applied to the hash value
        tagHeader: appver-a       # Tag header for app-a routing
        policies:
          - range: 10             # Remainder within 10 -> add tag (~10% of users)
            tagValue: v2          # Route tagged users to app-a v2
      - header: x-user-id
        modulo: 100
        tagHeader: appver-b       # Tag header for app-b routing
        policies:
          - range: 100            # Range covers every remainder -> always add tag
            tagValue: v2

The following table explains the tagging rules in this configuration.

| Rule | Tag header | Range | Effect |
| --- | --- | --- | --- |
| app-a | appver-a | 10 | Hashes the x-user-id value modulo 100. If the remainder falls within the range of 10, the header appver-a: v2 is added, routing roughly 10% of users to app-a v2. |
| app-b | appver-b | 100 | Same hash calculation, but a range of 100 covers every remainder, so all users get the header appver-b: v2 and are routed to app-b v2. Adjust the range value to match your target canary ratio. |

  2. Apply the WasmPlugin configuration.

kubectl apply -f hash-tagging-plugin.yaml

Verify the canary release

  1. Send test requests with different user IDs.

curl -H 'x-user-id: 0001' <ingress-gateway-ip>
curl -H 'x-user-id: 0002' <ingress-gateway-ip>
curl -H 'x-user-id: 0003' <ingress-gateway-ip>
curl -H 'x-user-id: 0004' <ingress-gateway-ip>
curl -H 'x-user-id: 0005' <ingress-gateway-ip>

Expected output:

-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v2, ip: 10.0.250.11)

User 0005's hash remainder falls within the configured range, so the Hash tagging plug-in tags the request and routes it to app-a v2 and app-b v2. Users 0001 through 0004 remain on v1.

Step 3: Start the canary release of app-c

While app-a v2 and app-b v2 are still in canary, the team identifies a bug in app-c v2 and wants to deploy app-c v3 to 50% of users.

Deploy app-c v3

  1. Create an app-c-v3.yaml file with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-c-v3
  labels:
    app: app-c
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-c
      version: v3
      ASM_TRAFFIC_TAG: v3
  template:
    metadata:
      labels:
        app: app-c
        version: v3
        ASM_TRAFFIC_TAG: v3
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
      - name: default
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v3
        - name: app
          value: app-c
        ports:
        - containerPort: 8000

  2. Apply the file using the kubeconfig of the data plane cluster.

kubectl apply -f app-c-v3.yaml

Configure routing for app-c v3

  1. Create an app-c-v3-mesh.yaml file with the following content. This adds a v3 subset to the DestinationRule and a tag-matching route to the VirtualService for app-c. Requests with appver-c: v3 route to app-c v3; all other requests go to app-c v2 (default).

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-c
  namespace: default
spec:
  hosts:
  - app-c.default.svc.cluster.local
  http:
  - name: v3
    match:
    - headers:
        appver-c:
          exact: v3
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v3
  - name: default
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v2
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-c
spec:
  host: app-c.default.svc.cluster.local
  subsets:
    - labels:
        version: v2
      name: v2
    - labels:
        version: v3
      name: v3

  2. Apply the file using the kubeconfig of the ASM instance (control plane).

kubectl apply -f app-c-v3-mesh.yaml

Update the Hash tagging plug-in

  1. Create a wasm-plugin-ab-v2-c-v3.yaml file with the following content. This adds a third tagging rule for app-c with a range of 50, routing 50% of users to app-c v3.

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: hash-tagging
  namespace: istio-system
spec:
  imagePullPolicy: IfNotPresent
  selector:
    matchLabels:
      istio: ingressgateway
  url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun
  phase: AUTHN
  pluginConfig:
    rules:
      - header: x-user-id
        modulo: 100
        tagHeader: appver-a
        policies:
          - range: 10
            tagValue: v2
      - header: x-user-id
        modulo: 100
        tagHeader: appver-b
        policies:
          - range: 10
            tagValue: v2
      - header: x-user-id
        modulo: 100
        tagHeader: appver-c
        policies:
          - range: 50           # 50% canary ratio for app-c
            tagValue: v3

  2. Apply the updated plug-in configuration.

kubectl apply -f wasm-plugin-ab-v2-c-v3.yaml

Verify the results

  1. Send test requests with different user IDs.

curl -H 'x-user-id: 0001' <ingress-gateway-ip>
curl -H 'x-user-id: 0002' <ingress-gateway-ip>
curl -H 'x-user-id: 0003' <ingress-gateway-ip>
curl -H 'x-user-id: 0004' <ingress-gateway-ip>
curl -H 'x-user-id: 0005' <ingress-gateway-ip>

Expected output:

-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v3, ip: 10.0.250.23)

The results show three distinct routing paths, each determined independently:

| User IDs | app-a | app-b | app-c | Explanation |
| --- | --- | --- | --- | --- |
| 0001, 0002 | v1 | v1 | v2 | No tags matched; all stable versions |
| 0003, 0004 | v1 | v1 | v3 | Only appver-c matched (50% range) |
| 0005 | v2 | v2 | v3 | All tags matched: appver-a and appver-b (10% range) plus appver-c (50% range) |
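The three independent decisions above can be modeled with the same illustrative hash sketch used earlier (md5 and an exclusive range bound are assumptions; the rule values follow the Step 3 plug-in configuration):

```python
import hashlib

def remainder(user_id: str, modulo: int = 100) -> int:
    # md5 stands in for the plug-in's internal deterministic hash.
    return int(hashlib.md5(user_id.encode()).hexdigest(), 16) % modulo

def route(user_id: str) -> tuple:
    """Versions of (app-a, app-b, app-c) chosen under the Step 3 rules."""
    r = remainder(user_id)            # same header, same modulo for all rules
    app_a = "v2" if r < 10 else "v1"  # appver-a: range 10
    app_b = "v2" if r < 10 else "v1"  # appver-b: range 10
    app_c = "v3" if r < 50 else "v2"  # appver-c: range 50
    return app_a, app_b, app_c
```

Note that because every rule hashes the same x-user-id with the same modulo, the cohorts nest rather than being sampled independently: any user inside the 10% range is necessarily inside the 50% range, which is why user 0005 sees v2 -> v2 -> v3 and no user sees v2 -> v2 -> v2.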

Step 4: Promote app-c v3 to full traffic

After verifying that app-c v3 works correctly in the canary environment, promote it by routing 100% of traffic to v3. This requires two changes: update the VirtualService so that the tag-matching route is removed and the default route points to v3, and remove the app-c tagging rule from the plug-in so the gateway stops adding the now-unnecessary appver-c header.

Update the VirtualService

  1. Apply the following YAML using the kubeconfig of the ASM instance (control plane). This routes all app-c traffic to v3 by updating the default route.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-c
  namespace: default
spec:
  hosts:
  - app-c.default.svc.cluster.local
  http:
  - name: default
    route:
    - destination:
        host: app-c.default.svc.cluster.local
        port:
          number: 8000
        subset: v3

Remove the app-c tagging rule

  1. Apply the following YAML to update the Hash tagging plug-in. The appver-c rule is removed, leaving only the rules for app-a and app-b.

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: hash-tagging
  namespace: istio-system
spec:
  imagePullPolicy: IfNotPresent
  selector:
    matchLabels:
      istio: ingressgateway
  url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun
  phase: AUTHN
  pluginConfig:
    rules:
      - header: x-user-id
        modulo: 100
        tagHeader: appver-a
        policies:
          - range: 10
            tagValue: v2
      - header: x-user-id
        modulo: 100
        tagHeader: appver-b
        policies:
          - range: 10
            tagValue: v2

Verify the promotion

  1. Send test requests with different user IDs.

curl -H 'x-user-id: 0001' <ingress-gateway-ip>
curl -H 'x-user-id: 0002' <ingress-gateway-ip>
curl -H 'x-user-id: 0003' <ingress-gateway-ip>
curl -H 'x-user-id: 0004' <ingress-gateway-ip>
curl -H 'x-user-id: 0005' <ingress-gateway-ip>

Expected output:

-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
-> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v3, ip: 10.0.250.23)

All users now reach app-c v3, confirming the promotion is complete.

Note

After all traffic is switched to app-c v3, scale the app-c v2 deployment to 0 replicas or delete it, based on your operational requirements.
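For example, the cleanup might look like the following. This is a sketch only; the Deployment name and namespace match those used earlier in this tutorial, and the commands assume the kubeconfig of the data plane cluster.

```shell
# Scale the superseded version to zero but keep the Deployment for fast rollback:
kubectl scale deployment app-c-v2 --replicas=0 -n default

# Or delete it once a rollback window is no longer needed:
# kubectl delete deployment app-c-v2 -n default
```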