Container Compute Service: Canary release of multiple applications by user segment based on multi-tag routing rules with the Hash tagging plug-in

Last Updated: Mar 05, 2025

In distributed application release practice, the end-to-end canary release strategy lets you roll out new services or versions by using loose-mode and strict-mode traffic lanes. In some cases, multiple applications in a distributed system need to be rolled out incrementally, and the canary ratio needs to be controlled by routing rules rather than per request: the routing rules route a specific percentage of the users who make requests to the target version. This topic describes how to implement an incremental canary release of multiple applications by setting routing rules on an Alibaba Cloud Service Mesh (ASM) gateway.

Background information

Traffic lanes split the architecture of a distributed system into multiple versions, as shown in the following figure.

The distributed system in the preceding figure consists of application A, application B, and application C, where:

  • Version 1 consists of app-a(V1), app-b(V1), and app-c(V1).

  • Version 2 consists of app-a(V2), app-b(V1), and app-c(V2).

The following process shows how an ASM traffic lane works: a client sends a request, the Hash tagging plug-in tags the request after the ASM gateway receives it, and the ASM sidecar proxies pass the tags along the call chain. This ensures that a request is always routed to a specific version of each application based on the tagging rules, so that all traffic is directed into a specific traffic lane. O&M engineers can then use canary deployment to release a specific version of an application in a unified manner.

However, in some cases, you may want to implement a canary release for multiple applications in a distributed system simultaneously, with the ratio of the phased rollout determined by the development team. For example:

  • App-a and app-b are in the stable version 1. As business services are upgraded, app-a v2 and app-b v2 need to be released to roll out a new feature. To avoid degrading user experience through significant changes in this feature, the project team wants to route 10% of users to version 2 of app-a and app-b.

  • App-c is in the stable version 2. To fix a bug in version 2, version 3 is released. The project team wants to route 50% of users to version 3 to deliver the fix at the earliest opportunity.

In this case, you need to route requests on the same call chain by using different policies, which cannot be accomplished with tagging rules alone. To meet these requirements, the ASM gateway provides the Hash tagging plug-in, which can add multiple tags to a request simultaneously. You can create an ASMHeaderPropagation resource to pass headers with a specific prefix through the call chain, and use virtual services on the ASM instance to match these tags against different traffic lanes. This ensures a controlled, gradual rollout of new features or updates.
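
Conceptually, each tagging rule is evaluated independently, so one request can carry several tag headers at once, each controlling the canary ratio of a different application. The following Python sketch illustrates this composition. It is an illustration only: the actual hash function and boundary condition used by the Hash tagging plug-in are internal to the plug-in, so the `md5` hash and the `remainder < range` check below are assumptions.

```python
import hashlib

def tag_request(user_id, rules):
    """Apply each tagging rule independently, in the spirit of the Hash
    tagging plug-in: hash the header value, take the modulo, and add the
    tag header when the remainder falls within the configured range."""
    headers = {}
    for rule in rules:
        remainder = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % rule["modulo"]
        if remainder < rule["range"]:  # assumed boundary condition
            headers[rule["tagHeader"]] = rule["tagValue"]
    return headers

# One rule per canary: 10% of users for app-a v2, 50% of users for app-c v3.
rules = [
    {"tagHeader": "appver-a", "modulo": 100, "range": 10, "tagValue": "v2"},
    {"tagHeader": "appver-c", "modulo": 100, "range": 50, "tagValue": "v3"},
]

# Both rules hash the same x-user-id value, so each user always receives the
# same combination of tags, and each ratio applies to its own application.
in_c_canary = sum(1 for i in range(1000) if "appver-c" in tag_request(f"{i:04}", rules))
print(f"{in_c_canary} of 1000 users fall into the app-c canary")
```

Under these assumptions, because both rules share the same hash input and modulo, the 10% of users tagged for app-a is a subset of the 50% tagged for app-c, and each user's segment stays stable across applications.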

Prerequisites

Procedure

Step 1: Deploy the application instances

In this example, the application architecture consists of three applications: app-a, app-b, and app-c. The following call process is used: app-a -> app-b -> app-c. App-a and app-b are in version 1, and app-c is in version 2.

  1. Create an app-init.yaml file with the following content.

    apiVersion: v1
    kind: Service
    metadata:
      name: app-a
      labels:
        app: app-a
        service: app-a
    spec:
      ports:
      - port: 8000
        name: http
      selector:
        app: app-a
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-a-v1
      labels:
        app: app-a
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app-a
          version: v1
          ASM_TRAFFIC_TAG: v1
      template:
        metadata:
          labels:
            app: app-a
            version: v1
            ASM_TRAFFIC_TAG: v1
          annotations:
            instrumentation.opentelemetry.io/inject-java: "true"
            instrumentation.opentelemetry.io/container-names: "default"
        spec:
          containers:
          - name: default
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
            imagePullPolicy: IfNotPresent
            env:
            - name: version
              value: v1
            - name: app
              value: app-a
            - name: upstream_url
              value: "http://app-b:8000/"
            ports:
            - containerPort: 8000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: app-b
      labels:
        app: app-b
        service: app-b
    spec:
      ports:
      - port: 8000
        name: http
      selector:
        app: app-b
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-b-v1
      labels:
        app: app-b
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app-b
          version: v1
          ASM_TRAFFIC_TAG: v1
      template:
        metadata:
          labels:
            app: app-b
            version: v1
            ASM_TRAFFIC_TAG: v1
          annotations:
            instrumentation.opentelemetry.io/inject-java: "true"
            instrumentation.opentelemetry.io/container-names: "default"
        spec:
          containers:
          - name: default
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
            imagePullPolicy: IfNotPresent
            env:
            - name: version
              value: v1
            - name: app
              value: app-b
            - name: upstream_url
              value: "http://app-c:8000/"
            ports:
            - containerPort: 8000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: app-c
      labels:
        app: app-c
        service: app-c
    spec:
      ports:
      - port: 8000
        name: http
      selector:
        app: app-c
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-c-v2
      labels:
        app: app-c
        version: v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app-c
          version: v2
          ASM_TRAFFIC_TAG: v2
      template:
        metadata:
          labels:
            app: app-c
            version: v2
            ASM_TRAFFIC_TAG: v2
          annotations:
            instrumentation.opentelemetry.io/inject-java: "true"
            instrumentation.opentelemetry.io/container-names: "default"
        spec:
          containers:
          - name: default
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
            imagePullPolicy: IfNotPresent
            env:
            - name: version
              value: v2
            - name: app
              value: app-c
            ports:
            - containerPort: 8000
  2. Run the following command to deploy the Deployments and Services for the application instances by using the kubeconfig file of the cluster on the data plane. In this example, the default namespace is used. You can also use another namespace in which automatic sidecar proxy injection is enabled.

    $ kubectl apply -f app-init.yaml -n default
  3. Create an app-init-mesh.yaml file with the following content.

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: app-b
      namespace: default
    spec:
      hosts:
      - app-b.default.svc.cluster.local
      http:
      - name: default
        route:
        - destination:
            host: app-b.default.svc.cluster.local
            port:
              number: 8000
            subset: v1
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: app-c
      namespace: default
    spec:
      hosts:
      - app-c.default.svc.cluster.local
      http:
      - name: default
        route:
        - destination:
            host: app-c.default.svc.cluster.local
            port:
              number: 8000
            subset: v2
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: ingressgateway
      namespace: default
    spec:
      selector:
        istio: ingressgateway
      servers:
        - hosts:
            - '*'
          port:
            name: http
            number: 80
            protocol: HTTP
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: ingressgateway
      namespace: istio-system
    spec:
      gateways:
      - default/ingressgateway
      hosts:
      - '*'
      http:
      - name: default
        route:
        - destination:
            host: app-a.default.svc.cluster.local
            port:
              number: 8000
            subset: v1
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: app-a
      namespace: default
    spec:
      host: app-a.default.svc.cluster.local
      subsets:
        - labels:
            version: v1
          name: v1
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: app-b
      namespace: default
    spec:
      host: app-b.default.svc.cluster.local
      subsets:
        - labels:
            version: v1
          name: v1
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: app-c
      namespace: default
    spec:
      host: app-c.default.svc.cluster.local
      subsets:
        - labels:
            version: v2
          name: v2
  4. Run the following command to configure virtual services and destination rules for the applications and the ASM gateway by using the kubeconfig file of the cluster on the control plane.

    $ kubectl apply -f app-init-mesh.yaml
  5. Run the following command to send a request with the x-user-id: 0001 header to the IP address of the ASM ingress gateway. The request is routed to the desired version of the application based on the matching rules. Replace ${ingress gateway ip} with the IP address of the actual gateway. For more information about how to obtain the IP address of an ingress gateway, see Obtain the IP address of an ingress gateway.

    curl -H 'x-user-id: 0001' ${ingress gateway ip}

    Expected output:

    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)

    The service call chain is app-a v1 -> app-b v1 -> app-c v2. The routing works as expected.

Step 2: Canary release of app-a v2 and app-b v2

You can implement canary release by user segment as follows:

  • Release app-a v2 and app-b v2, modify the destination rules of app-a and app-b, and create subsets for app-a v2 and app-b v2.

  • Modify the virtual services for the ASM gateway and app-b, and add rules that route tagged requests to app-a v2 and app-b v2.

  • Configure the Hash tagging plug-in on the ASM gateway to use the value of the x-user-id header as the input for hash calculation and to perform ratio-based tagging.

  • Configure the ASMHeaderPropagation CRD so that the ASM sidecar proxies pass through all tags that the plug-in adds to requests.

Note

The preceding steps are listed only to help you understand the examples in this topic. The actual order of operations is not strictly as described above and must be determined based on application dependencies.
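
The routing rules added in this step implement a match-or-fallback decision at each hop: an exact match on the tag header selects the canary subset, and every other request takes the default route. The following Python sketch mirrors that decision. `select_subset` is a hypothetical helper used only for illustration; in practice the gateway and sidecar proxies evaluate the virtual service rules.

```python
def select_subset(headers, tag_header, tag_value, canary_subset, default_subset):
    """Mirror a VirtualService http route list: an exact match on the tag
    header selects the canary subset; otherwise the default route applies."""
    if headers.get(tag_header) == tag_value:
        return canary_subset
    return default_subset

# The gateway's route for app-a and app-b's route, evaluated per hop:
print(select_subset({"appver-a": "v2", "appver-b": "v2"}, "appver-a", "v2", "v2", "v1"))  # prints v2
print(select_subset({}, "appver-b", "v2", "v2", "v1"))                                    # prints v1
```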

  1. Create an app-ab-v2.yaml file with the following content.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-a-v2
      labels:
        app: app-a
        version: v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app-a
          version: v2
          ASM_TRAFFIC_TAG: v2
      template:
        metadata:
          labels:
            app: app-a
            version: v2
            ASM_TRAFFIC_TAG: v2
          annotations:
            instrumentation.opentelemetry.io/inject-java: "true"
            instrumentation.opentelemetry.io/container-names: "default"
        spec:
          containers:
          - name: default
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
            imagePullPolicy: IfNotPresent
            env:
            - name: version
              value: v2
            - name: app
              value: app-a
            - name: upstream_url
              value: "http://app-b:8000/"
            ports:
            - containerPort: 8000
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-b-v2
      labels:
        app: app-b
        version: v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app-b
          version: v2
          ASM_TRAFFIC_TAG: v2
      template:
        metadata:
          labels:
            app: app-b
            version: v2
            ASM_TRAFFIC_TAG: v2
          annotations:
            instrumentation.opentelemetry.io/inject-java: "true"
            instrumentation.opentelemetry.io/container-names: "default"
        spec:
          containers:
          - name: default
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
            imagePullPolicy: IfNotPresent
            env:
            - name: version
              value: v2
            - name: app
              value: app-b
            - name: upstream_url
              value: "http://app-c:8000/"
            ports:
            - containerPort: 8000
  2. Run the following command to deploy app-a v2 and app-b v2 by using the kubeconfig file of the cluster on the data plane.

    $ kubectl apply -f app-ab-v2.yaml
  3. Create an app-ab-v2-mesh.yaml file with the following content.

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: app-b
      namespace: default
    spec:
      hosts:
      - app-b.default.svc.cluster.local
      http:
      - name: v2
        match:
        - headers:
            appver-b:
              exact: v2
        route:
        - destination:
            host: app-b.default.svc.cluster.local
            port:
              number: 8000
            subset: v2
      - name: default
        route:
        - destination:
            host: app-b.default.svc.cluster.local
            port:
              number: 8000
            subset: v1
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: app-c
      namespace: default
    spec:
      hosts:
      - app-c.default.svc.cluster.local
      http:
      - name: default
        route:
        - destination:
            host: app-c.default.svc.cluster.local
            port:
              number: 8000
            subset: v2
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: ingressgateway
      namespace: default
    spec:
      gateways:
      - default/ingressgateway
      hosts:
      - '*'
      http:
      - name: v2
        match:
        - headers:
            appver-a:
              exact: v2
        route:
        - destination:
            host: app-a.default.svc.cluster.local
            port:
              number: 8000
            subset: v2
      - name: default
        route:
        - destination:
            host: app-a.default.svc.cluster.local
            port:
              number: 8000
            subset: v1
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: app-a
    spec:
      host: app-a.default.svc.cluster.local
      subsets:
        - labels:
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: app-b
    spec:
      host: app-b.default.svc.cluster.local
      subsets:
        - labels:
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: app-c
    spec:
      host: app-c.default.svc.cluster.local
      subsets:
        - labels:
            version: v2
          name: v2
    
  4. Run the following command to add subsets for app-a v2 and app-b v2 to the destination rules, and add routing rules that match the tags to the virtual services, by using the kubeconfig file of the cluster on the control plane.

    $ kubectl apply -f app-ab-v2-mesh.yaml
  5. Create a header-propagation.yaml file with the following content.

    apiVersion: istio.alibabacloud.com/v1beta1
    kind: ASMHeaderPropagation
    metadata:
      name: tag-propagation
    spec:
      headerPrefixes:
        - appver
  6. Run the following command to enable the sidecar proxy to pass through request headers that are prefixed with "appver".

    $ kubectl apply -f header-propagation.yaml -n default
  7. Create a hash-tagging-plugin.yaml file with the following content.

    apiVersion: extensions.istio.io/v1alpha1
    kind: WasmPlugin
    metadata:
      name: hash-tagging
      namespace: istio-system
    spec:
      imagePullPolicy: IfNotPresent 
      selector:
        matchLabels:
          istio: ingressgateway
      url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun 
      phase: AUTHN
      pluginConfig:
        rules:
          - header: x-user-id
            modulo: 100
            tagHeader: appver-a
            policies:
              - range: 10
                tagValue: v2
          - header: x-user-id
            modulo: 100
            tagHeader: appver-b
            policies:
              - range: 10
                tagValue: v2

    In the preceding configuration, the Hash tagging plug-in applies two tagging rules:

    • Use the value of the x-user-id header as the hash input, and set the modulo to 100. When the remainder falls within the range of 10, the appver-a: v2 header is added to the request.

    • Use the value of the x-user-id header as the hash input, and set the modulo to 100. When the remainder falls within the range of 10, the appver-b: v2 header is added to the request.

  8. Run each of the following commands to initiate a request using 0001, 0002, 0003, 0004, and 0005 as the respective values of the x-user-id header.

    curl -H 'x-user-id: 0001' ${ingress gateway ip}
    curl -H 'x-user-id: 0002' ${ingress gateway ip}
    curl -H 'x-user-id: 0003' ${ingress gateway ip}
    curl -H 'x-user-id: 0004' ${ingress gateway ip}
    curl -H 'x-user-id: 0005' ${ingress gateway ip}

    Expected output:

    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
    -> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v2, ip: 10.0.250.11)

    The result shows that the hash remainder for the request with user ID 0005 falls within the range of 10. Therefore, this user is tagged by the Hash tagging plug-in and matches the v2 routing rules when accessing app-a and app-b.
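
The tag pass-through configured in steps 5 and 6 can be pictured as follows. The sidecar proxy performs this transparently; `carry_over_headers` is a hypothetical helper used only to illustrate the prefix matching of ASMHeaderPropagation.

```python
def carry_over_headers(inbound_headers, prefixes):
    """Select the inbound headers that the sidecar re-attaches to the
    application's outbound call, matching the headerPrefixes setting of
    ASMHeaderPropagation."""
    return {
        name: value
        for name, value in inbound_headers.items()
        if any(name.startswith(prefix) for prefix in prefixes)
    }

inbound = {"appver-a": "v2", "appver-b": "v2", "x-request-id": "abc123"}
print(carry_over_headers(inbound, ["appver"]))
# -> {'appver-a': 'v2', 'appver-b': 'v2'}
```

Without this pass-through, an application that does not forward unknown headers would drop the tags, and the downstream virtual services could no longer match them.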

Step 3: Deploy app-c v3

In the canary environment of app-a and app-b, the development team may want to start the canary release of app-c v3 to fix a bug in app-c v2. To start the canary release of app-c, you first need to deploy app-c v3.

  1. Create an app-c-v3.yaml file with the following content.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-c-v3
      labels:
        app: app-c
        version: v3
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app-c
          version: v3
          ASM_TRAFFIC_TAG: v3
      template:
        metadata:
          labels:
            app: app-c
            version: v3
            ASM_TRAFFIC_TAG: v3
          annotations:
            instrumentation.opentelemetry.io/inject-java: "true"
            instrumentation.opentelemetry.io/container-names: "default"
        spec:
          containers:
          - name: default
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
            imagePullPolicy: IfNotPresent
            env:
            - name: version
              value: v3
            - name: app
              value: app-c
            ports:
            - containerPort: 8000
  2. Run the following command to deploy app-c v3 by using the kubeconfig file of the cluster on the data plane.

    $ kubectl apply -f app-c-v3.yaml
  3. Create an app-c-v3-mesh.yaml file with the following content.

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: app-c
      namespace: default
    spec:
      hosts:
      - app-c.default.svc.cluster.local
      http:
      - name: v3 
        match:
        - headers:
            appver-c:
              exact: v3
        route:
        - destination:
            host: app-c.default.svc.cluster.local
            port:
              number: 8000
            subset: v3
      - name: default
        route:
        - destination:
            host: app-c.default.svc.cluster.local
            port:
              number: 8000
            subset: v2
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: app-c
    spec:
      host: app-c.default.svc.cluster.local
      subsets:
        - labels:
            version: v2
          name: v2
        - labels:
            version: v3
          name: v3
  4. Run the following command to configure the destination rule and the routing rules in the virtual service for app-c v3 by using the kubeconfig file of the cluster on the control plane.

    $ kubectl apply -f app-c-v3-mesh.yaml
  5. Create a wasm-plugin-ab-v2-c-v3.yaml file with the following content.

    apiVersion: extensions.istio.io/v1alpha1
    kind: WasmPlugin
    metadata:
      name: hash-tagging
      namespace: istio-system
    spec:
      imagePullPolicy: IfNotPresent 
      selector:
        matchLabels:
          istio: ingressgateway
      url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun 
      phase: AUTHN
      pluginConfig:
        rules:
          - header: x-user-id
            modulo: 100
            tagHeader: appver-a
            policies:
              - range: 10
                tagValue: v2
          - header: x-user-id
            modulo: 100
            tagHeader: appver-b
            policies:
              - range: 10
                tagValue: v2
          - header: x-user-id
            modulo: 100
            tagHeader: appver-c
            policies:
              - range: 50
                tagValue: v3
  6. The development team believes that the risk of the bug fix is low and wants to release app-c v3 as soon as possible, so the canary ratio is set to 50%. Run the following command to update the configuration of the Hash tagging plug-in and add a tagging rule for the canary release.

    $ kubectl apply -f wasm-plugin-ab-v2-c-v3.yaml
  7. Run each of the following commands to initiate a request using 0001, 0002, 0003, 0004, and 0005 as the respective values of the x-user-id header.

    curl -H 'x-user-id: 0001' ${ingress gateway ip}
    curl -H 'x-user-id: 0002' ${ingress gateway ip}
    curl -H 'x-user-id: 0003' ${ingress gateway ip}
    curl -H 'x-user-id: 0004' ${ingress gateway ip}
    curl -H 'x-user-id: 0005' ${ingress gateway ip}

    Expected output:

    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v2, ip: 10.0.250.11)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
    -> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v3, ip: 10.0.250.23)

    As shown in the output:

    • The call chain for users with IDs 0001 and 0002 is app-a(v1) -> app-b(v1) -> app-c(v2).

    • The call chain for users with IDs 0003 and 0004 is app-a(v1) -> app-b(v1) -> app-c(v3).

    • The call chain for users with ID 0005 is app-a(v2) -> app-b(v2) -> app-c(v3).
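
Taken together, the virtual services of Step 2 and Step 3 resolve the tags on a request into one of the three observed call chains. The following is a minimal sketch of the combined decision, for illustration only; the actual matching is performed by the gateway and the sidecar proxies.

```python
def resolve_chain(tags):
    """Resolve the version served at each hop: a matching tag header selects
    the canary subset; otherwise the default subset applies."""
    return [
        "app-a " + ("v2" if tags.get("appver-a") == "v2" else "v1"),
        "app-b " + ("v2" if tags.get("appver-b") == "v2" else "v1"),
        "app-c " + ("v3" if tags.get("appver-c") == "v3" else "v2"),
    ]

print(resolve_chain({}))                                                      # untagged users
print(resolve_chain({"appver-c": "v3"}))                                      # app-c canary only
print(resolve_chain({"appver-a": "v2", "appver-b": "v2", "appver-c": "v3"}))  # all canaries
```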

Step 4: Complete the canary release of app-c

After a period of verification in the canary environment, the development team decides to route 100% of the traffic to app-c v3 and complete the release. In this case, you no longer need to distinguish the traffic for app-c. Remove the routing rules that match the tags from the virtual service, and route all traffic to app-c v3 by modifying the default routing rule.

  1. Use the kubeconfig file on the control plane to apply the following YAML file to the ASM instance to update the routing rules in the virtual service of app-c.

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: app-c
      namespace: default
    spec:
      hosts:
      - app-c.default.svc.cluster.local
      http:
      - name: default
        route:
        - destination:
            host: app-c.default.svc.cluster.local
            port:
              number: 8000
            subset: v3
  2. After app-c v3 is released, remove its tagging rule to eliminate unnecessary information carried on the call chain. Use the kubeconfig file of the ASM instance to apply the following YAML file to the ASM instance. This updates the configuration of the Hash tagging plug-in and removes the rule for the canary version of app-c.

    apiVersion: extensions.istio.io/v1alpha1
    kind: WasmPlugin
    metadata:
      name: hash-tagging
      namespace: istio-system
    spec:
      imagePullPolicy: IfNotPresent 
      selector:
        matchLabels:
          istio: ingressgateway
      url: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-wasm-hash-tagging:v1.22.6.2-g72656ba-aliyun 
      phase: AUTHN
      pluginConfig:
        rules:
          - header: x-user-id
            modulo: 100
            tagHeader: appver-a
            policies:
              - range: 10
                tagValue: v2
          - header: x-user-id
            modulo: 100
            tagHeader: appver-b
            policies:
              - range: 10
                tagValue: v2
  3. Run each of the following commands to initiate a request using 0001, 0002, 0003, 0004, and 0005 as the respective values of the x-user-id header.

    curl -H 'x-user-id: 0001' ${ingress gateway ip}
    curl -H 'x-user-id: 0002' ${ingress gateway ip}
    curl -H 'x-user-id: 0003' ${ingress gateway ip}
    curl -H 'x-user-id: 0004' ${ingress gateway ip}
    curl -H 'x-user-id: 0005' ${ingress gateway ip}

    Expected output:

    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
    -> app-a(version: v1, ip: 10.0.250.27)-> app-b(version: v1, ip: 10.0.250.6)-> app-c(version: v3, ip: 10.0.250.23)
    -> app-a(version: v2, ip: 10.0.250.14)-> app-b(version: v2, ip: 10.0.250.8)-> app-c(version: v3, ip: 10.0.250.23)

    The result shows that all users are routed to app-c v3.

    Note

    After all traffic is switched to app-c v3, you need to set the number of replicas of app-c v2 to 0 or delete app-c v2 based on your business requirements.