
Alibaba Cloud Service Mesh: Shift TCP traffic between application versions

Last Updated: Mar 11, 2026

Routing all traffic to a new version of a service at once risks downtime if the release has issues. Service Mesh (ASM) supports weighted TCP routing rules that split traffic across multiple versions of a service -- for example, sending 80% to v1 and 20% to v2 -- so you can validate a new release incrementally before a full cutover.

This tutorial uses the Istio TCP traffic shifting pattern with a sample tcp-echo application that has two versions:

  • v1 prefixes each response with one

  • v2 prefixes each response with two

By the end of this tutorial, you will have configured weighted TCP routing and verified that traffic splits correctly between versions.
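
The behavior of the two versions can be sketched in a few lines of Python. This is an illustrative stand-in for the istio/tcp-echo-server image, not its actual source:

```python
def tcp_echo_response(prefix: str, line: str) -> str:
    """Mimic tcp-echo-server: echo each received line back, prefixed."""
    return f"{prefix} {line}"

# v1 is started with args ["9000", "one"], v2 with args ["9000", "two"],
# so the same image produces distinguishable responses per version.
print(tcp_echo_response("one", "Mon Nov 12 23:38:45 UTC 2018"))
# one Mon Nov 12 23:38:45 UTC 2018
```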

How traffic shifting works

ASM uses three Istio resources to control TCP traffic flow:

Resource          Role
Gateway           Exposes a TCP port on the ingress gateway to accept incoming connections
DestinationRule   Defines named subsets (v1 and v2) based on pod labels
VirtualService    Routes traffic arriving at the Gateway to specific subsets with configurable weights

The traffic path:

Client -> Ingress gateway (port 31400) -> VirtualService (weight-based routing) -> DestinationRule (subsets v1/v2) -> tcp-echo pods (port 9000)
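
The path above can be simulated end to end. The following sketch models the weight-based subset choice with `random.choices`; it illustrates the routing semantics only and is not Envoy's actual load-balancing code (the 80/20 weights here anticipate the split configured later in this tutorial):

```python
import random

# DestinationRule: subset name -> pod label selector
subsets = {"v1": {"version": "v1"}, "v2": {"version": "v2"}}

# VirtualService: weighted routes for traffic arriving on port 31400
routes = [("v1", 80), ("v2", 20)]

def route_connection(port: int) -> dict:
    """Pick a subset by weight, then return the label selector it resolves to."""
    if port != 31400:  # the Gateway only exposes this TCP port
        raise ConnectionRefusedError(port)
    names, weights = zip(*routes)
    chosen = random.choices(names, weights=weights)[0]
    return subsets[chosen]

random.seed(0)
picks = [route_connection(31400)["version"] for _ in range(10_000)]
print(picks.count("v1") / len(picks))  # close to 0.8
```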

Prerequisites

  • An ASM instance is created, and an ACK cluster is added to the instance.

  • You can connect to the ACK cluster by using kubectl.

Step 1: Deploy the sample application

Deploy two versions of the tcp-echo application, then create a Kubernetes Service to expose them.

Deploy the tcp-echo Deployments

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of your cluster. In the left-side navigation pane, choose Workloads > Deployments.

  3. At the top of the Deployments page, select a namespace from the Namespace drop-down list, and click Create from YAML.

  4. Select Custom from the Sample Template drop-down list, paste the following YAML into the Template editor, and click Create. This YAML creates two Deployments. Each runs the tcp-echo-server image on port 9000 but with a different prefix argument (one for v1, two for v2). Both Deployments share the app: tcp-echo label so that a single Service can route to either version. After creation, both Deployments appear on the Deployments page.


       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: tcp-echo-v1
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: tcp-echo
             version: v1
         template:
           metadata:
             labels:
               app: tcp-echo
               version: v1
           spec:
             containers:
             - name: tcp-echo
               image: docker.io/istio/tcp-echo-server:1.1
               imagePullPolicy: IfNotPresent
               args: [ "9000", "one" ]
               ports:
               - containerPort: 9000
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: tcp-echo-v2
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: tcp-echo
             version: v2
         template:
           metadata:
             labels:
               app: tcp-echo
               version: v2
           spec:
             containers:
             - name: tcp-echo
               image: docker.io/istio/tcp-echo-server:1.1
               imagePullPolicy: IfNotPresent
               args: [ "9000", "two" ]
               ports:
               - containerPort: 9000

Create the tcp-echo Service

  1. On the cluster details page, choose Network > Services in the left-side navigation pane.

  2. Select the same namespace, and click Create.

  3. In the Create Service dialog box, configure the following parameters and click OK. After creation, the tcp-echo Service appears on the Services page.

    Parameter      Value
    Name           tcp-echo
    Service Type   Select a type based on how you want to expose the service. Valid values: Cluster IP, Node Port, and Server Load Balancer.
    Backend        Set Name to app and Value to tcp-echo. Both the v1 and v2 Deployments carry the app: tcp-echo label, so the Service routes to pods of both versions.
    Port Mapping   Set Name to tcp, set both Service Port and Container Port to 9000, and set Protocol to TCP.

Step 2: Configure routing rules

Create an Istio gateway, destination rule, and virtual service to route all TCP traffic to v1 initially.

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of your ASM instance.

Create an Istio gateway

An Istio gateway defines the external entry point for TCP connections. The following configuration listens on port 31400 and forwards connections into the mesh.

  1. In the left-side navigation pane, choose ASM Gateways > Gateway. Click Create from YAML.

  2. Select default from the Namespace drop-down list, paste the following YAML, and click Create.

       apiVersion: networking.istio.io/v1alpha3
       kind: Gateway
       metadata:
         name: tcp-echo-gateway
       spec:
         selector:
           istio: ingressgateway    # Use the default ingress gateway
         servers:
         - port:
             number: 31400          # External-facing TCP port
             name: tcp
             protocol: TCP
           hosts:
           - "*"                    # Accept connections from any host

Create a destination rule

A destination rule maps pod labels to named subsets that the virtual service references for routing decisions.

  1. In the left-side navigation pane, choose Traffic Management Center > DestinationRule. Click Create from YAML.

  2. Select default from the Namespace drop-down list, paste the following YAML, and click Create.

       apiVersion: networking.istio.io/v1alpha3
       kind: DestinationRule
       metadata:
         name: tcp-echo-destination
       spec:
         host: tcp-echo             # Matches the Kubernetes Service name
         subsets:
         - name: v1
           labels:
             version: v1            # Selects pods with label version=v1
         - name: v2
           labels:
             version: v2            # Selects pods with label version=v2
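
Subset resolution is plain label matching: a pod belongs to a subset when every label in the subset's selector appears on the pod with the same value. A minimal sketch (the pod names are illustrative):

```python
def matches(selector: dict, pod_labels: dict) -> bool:
    """A pod matches when every selector label is present with the same value."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = {
    "tcp-echo-v1-abc": {"app": "tcp-echo", "version": "v1"},
    "tcp-echo-v2-def": {"app": "tcp-echo", "version": "v2"},
}

# The Kubernetes Service selects on app=tcp-echo: both pods match.
print([n for n, labels in pods.items() if matches({"app": "tcp-echo"}, labels)])

# The v1 subset narrows further on version=v1: only the v1 pod matches.
print([n for n, labels in pods.items() if matches({"version": "v1"}, labels)])
```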

Create a virtual service

A virtual service contains the routing logic. The following configuration sends 100% of TCP traffic on port 31400 to the v1 subset. You will update the weights in a later step to shift traffic to v2.

  1. In the left-side navigation pane, choose Traffic Management Center > VirtualService. Click Create from YAML.

  2. Select default from the Namespace drop-down list, paste the following YAML, and click Create.

       apiVersion: networking.istio.io/v1alpha3
       kind: VirtualService
       metadata:
         name: tcp-echo
       spec:
         hosts:
         - "*"
         gateways:
         - tcp-echo-gateway          # Bind to the gateway created above
         tcp:
         - match:
           - port: 31400             # Match traffic arriving on this port
           route:
           - destination:
               host: tcp-echo        # Target the tcp-echo Kubernetes Service
               port:
                 number: 9000
               subset: v1            # Send all traffic to v1

Step 3: Deploy an ingress gateway

Add port 31400 to the ingress gateway so that external TCP traffic reaches the Istio gateway.

  1. On the ASM instance details page, choose ASM Gateways > Ingress Gateway in the left-side navigation pane.

  2. Click Create and configure the following parameters.

    Parameter           Value
    Cluster             Select the ACK cluster where the sample application is deployed.
    CLB Instance Type   Select Internet Access.
    CLB Instance        Select an existing CLB instance or create a new one. Use a dedicated CLB instance per Kubernetes Service to avoid listener conflicts. For more information, see Create an ingress gateway.
    Port Mapping        Set the protocol to TCP and the service port to 31400.
  3. Click Create.

Step 4: Verify initial routing

Confirm that all TCP traffic reaches the v1 version.

  1. Connect to the ACK cluster with kubectl. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  2. Retrieve the ingress gateway IP address and TCP port:

       export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
  3. Connect to the tcp-echo service over TCP:

       telnet $INGRESS_HOST $INGRESS_PORT

     Expected output:

       Trying xxx.xxx.xxx.xxx...
       Connected to xxx.xxx.xxx.xxx.
       Escape character is '^]'.

  4. Type any string and press Enter. If the response starts with one, all traffic is routed to v1.
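
The manual telnet check can also be scripted. The sketch below stands up a local one-connection stand-in for tcp-echo v1 so that it runs anywhere; against a real deployment you would connect to $INGRESS_HOST:$INGRESS_PORT instead of the local socket:

```python
import socket
import threading

# Local stand-in for tcp-echo v1: accept one connection, prefix the line with "one".
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once() -> None:
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(b"one " + data)
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# Client side: what you typed into telnet, done programmatically.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello\n")
reply = cli.recv(1024).decode().strip()
cli.close()
print(reply)  # one hello
```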

Step 5: Shift traffic to v2

Update the virtual service to send 80% of traffic to v1 and 20% to v2.

  1. On the ASM instance details page, choose Traffic Management Center > VirtualService in the left-side navigation pane.

  2. Find the tcp-echo virtual service and click YAML in the Actions column.

  3. In the Edit dialog box, replace the YAML with the following content and click OK. The only change from the original virtual service is the addition of a second route entry with weight fields that control the traffic split.

       apiVersion: networking.istio.io/v1alpha3
       kind: VirtualService
       metadata:
         name: tcp-echo
       spec:
         hosts:
         - "*"
         gateways:
         - tcp-echo-gateway
         tcp:
         - match:
           - port: 31400
           route:
           - destination:
               host: tcp-echo
               port:
                 number: 9000
               subset: v1
             weight: 80              # 80% of traffic to v1
           - destination:
               host: tcp-echo
               port:
                 number: 9000
               subset: v2
             weight: 20              # 20% of traffic to v2
  4. Send 10 requests to verify the traffic split:

       for i in {1..10}; do \
       docker run -e INGRESS_HOST=$INGRESS_HOST -e INGRESS_PORT=$INGRESS_PORT -it --rm busybox sh -c "(date; sleep 1) | nc $INGRESS_HOST $INGRESS_PORT"; \
       done

     Sample output:

       one Mon Nov 12 23:38:45 UTC 2018
       two Mon Nov 12 23:38:47 UTC 2018
       one Mon Nov 12 23:38:50 UTC 2018
       one Mon Nov 12 23:38:52 UTC 2018
       one Mon Nov 12 23:38:55 UTC 2018
       two Mon Nov 12 23:38:57 UTC 2018
       one Mon Nov 12 23:39:00 UTC 2018
       one Mon Nov 12 23:39:02 UTC 2018
       one Mon Nov 12 23:39:05 UTC 2018
       one Mon Nov 12 23:39:07 UTC 2018

     Responses prefixed with one come from v1; those prefixed with two come from v2. With a small sample, the actual ratio may not match 80:20 exactly, but it converges to the configured weights as the number of requests increases.
  5. Gradually increase the v2 weight value as you gain confidence in the new version, until v2 handles 100% of traffic.
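
The sample-size caveat in step 4 is ordinary sampling variance, which a short simulation makes concrete (illustrative only; real-world splits also depend on connection-level load balancing):

```python
import random

def v1_fraction(n: int, v1_weight: int = 80) -> float:
    """Fraction of n simulated connections that land on v1 under an 80/20 split."""
    hits = sum(random.random() < v1_weight / 100 for _ in range(n))
    return hits / n

random.seed(42)
for n in (10, 100, 10_000):
    print(n, v1_fraction(n))
# Small n can stray far from 0.80; large n lands close to it.
```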

Clean up resources

After testing, delete the resources created in this tutorial to avoid unnecessary charges.

  1. Delete the Istio routing resources from the ASM console:

    • On the ASM instance details page, go to Traffic Management Center > VirtualService, find tcp-echo, and click Delete in the Actions column.

    • Go to Traffic Management Center > DestinationRule, find tcp-echo-destination, and delete it.

    • Go to ASM Gateways > Gateway, find tcp-echo-gateway, and delete it.

    • Go to ASM Gateways > Ingress Gateway and delete the ingress gateway created for this tutorial.

  2. Delete the Kubernetes resources from the ACK console:

    • Go to Workloads > Deployments, find tcp-echo-v1 and tcp-echo-v2, and delete them.

    • Go to Network > Services, find tcp-echo, and delete it.