
Alibaba Cloud Service Mesh:Layer 4 TCP routing

Last Updated:Mar 10, 2026

Service Mesh (ASM) routes TCP traffic through VirtualService rules that match on destination subnets, ports, source labels, gateways, or source namespaces. Combined with weighted routing, these match attributes let you build fine-grained Layer 4 traffic policies for canary releases, namespace isolation, and gateway-based routing.

How TCP matching works

A VirtualService tcp block accepts one or more match entries:

  • Within a single match block, all conditions are combined with AND logic.

  • Across match blocks, conditions are evaluated with OR logic -- the first block that succeeds wins.
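To make these semantics concrete, the following illustrative fragment (reusing the sample services from this topic) routes to echo-server-2 either when the source both carries the label app: telnet-client AND targets port 19000, or, independently, when the source runs in the foo namespace:

```yaml
tcp:
  - match:
      # Entry 1: BOTH conditions must hold (AND within one match entry).
      - port: 19000
        sourceLabels:
          app: telnet-client
      # Entry 2: evaluated independently (OR across match entries).
      - sourceNamespace: foo
    route:
      - destination:
          host: echo-server-2.default.svc.cluster.local
```

Traffic that satisfies neither entry falls through to the next tcp rule, or to the service's default routing if no rule matches.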

Match attributes at a glance

Attribute          | What it matches                       | Sidecar mode | Ambient Mesh mode
-------------------|---------------------------------------|--------------|------------------
destinationSubnets | Destination CIDR block                | Supported    | Not supported
port               | Destination port                      | Supported    | Not supported
sourceLabels       | Labels on the source workload         | Supported    | Not supported
gateways           | The gateway that received the request | Supported    | Supported
sourceNamespace    | Namespace of the source workload      | Supported    | Not supported

Prerequisites

Deploy the sample applications

The examples in this topic use two TCP echo servers and several telnet clients. Deploy them before testing any routing rule.

  1. Create the foo namespace. For details, see Manage global namespaces.

  2. On the Global Namespace page of the ASM console, enable sidecar proxy injection or the Ambient Mesh mode for both the default and foo namespaces.

  3. Connect to the data-plane cluster with kubectl, then deploy the echo servers and telnet clients.

    1. Create echoserver.yaml:

      echoserver.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo-server
        labels:
          app: echo-server
          version: prod
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: echo-server
            version: prod
        template:
          metadata:
            labels:
              app: echo-server
              version: prod
          spec:
            serviceAccountName: echo-server
            containers:
            - name: server
              image: istio/tcp-echo-server:1.2
              ports:
              - containerPort: 9000
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: echo-server
        labels:
          app: echo-server
      spec:
        selector:
          app: echo-server
        ports:
        - protocol: TCP
          name: tcp
          port: 19000
          targetPort: 9000
        type: ClusterIP
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: echo-server
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo-server-2
        labels:
          app: echo-server-2
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: echo-server-2
        template:
          metadata:
            labels:
              app: echo-server-2
          spec:
            serviceAccountName: echo-server-2
            containers:
            - name: server
              image: istio/tcp-echo-server:1.2
              command:
              - "/bin/tcp-echo"
              - "9000"
              - "hello-2"
              ports:
              - containerPort: 9000
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: echo-server-2
        labels:
          app: echo-server-2
      spec:
        selector:
          app: echo-server-2
        ports:
        - protocol: TCP
          name: tcp
          port: 19000
          targetPort: 9000
        type: ClusterIP
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: echo-server-2
    2. Create telnet.yaml:

      telnet.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: telnet-client
        labels:
          app: telnet-client
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: telnet-client
        template:
          metadata:
            labels:
              app: telnet-client
          spec:
            serviceAccountName: telnet-client
            containers:
            - name: client
              image: mikesplain/telnet:latest
              command:
              - "sleep"
              - "3600"
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: telnet-client
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: telnet-client-2
        labels:
          app: telnet-client-2
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: telnet-client-2
        template:
          metadata:
            labels:
              app: telnet-client-2
          spec:
            serviceAccountName: telnet-client-2
            containers:
            - name: client
              image: mikesplain/telnet:latest
              command:
              - "sleep"
              - "3600"
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: telnet-client-2
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: telnet-client-foo
        namespace: foo
        labels:
          app: telnet-client-foo
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: telnet-client-foo
        template:
          metadata:
            labels:
              app: telnet-client-foo
          spec:
            serviceAccountName: telnet-client-foo
            containers:
            - name: client
              image: mikesplain/telnet:latest
              command:
              - "sleep"
              - "3600"
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: telnet-client-foo
        namespace: foo
    3. Apply both files:

      kubectl apply -f echoserver.yaml
      kubectl apply -f telnet.yaml

After deployment, echo-server responds with hello and echo-server-2 responds with hello-2. This difference lets you confirm which backend received the request.
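The tcp-echo server simply prefixes each received line with its configured greeting. You can reproduce the behavior locally without a cluster using a rough shell stand-in (this function is an illustration, not the actual istio/tcp-echo binary):

```shell
# Stand-in for istio/tcp-echo: prefix each input line with a greeting,
# the way echo-server ("hello") and echo-server-2 ("hello-2") do.
tcp_echo() {
  prefix="$1"
  while IFS= read -r line; do
    printf '%s %s\n' "$prefix" "$line"
  done
}

printf 'test\n' | tcp_echo hello      # behaves like echo-server
printf 'test\n' | tcp_echo hello-2    # behaves like echo-server-2
```

Because the two backends answer with different prefixes, a single request is enough to tell which one handled the connection.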

Match by destination subnet (destinationSubnets)

Route TCP traffic based on the destination CIDR block. Only supported in Sidecar mode.

The following VirtualService forwards traffic destined for 192.168.0.0/16 to echo-server-2. The endpoint of echo-server.default.svc.cluster.local falls within this CIDR block. Adjust the CIDR to match your environment.

  1. Save the following YAML as destinationSubnets.yaml:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: echo
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - destinationSubnets:
                - "192.168.0.0/16"
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
  2. Connect to the ASM instance with kubectl and apply the VirtualService:

    kubectl apply -f destinationSubnets.yaml
  3. Verify by sending a TCP request from the telnet client:

    kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    Expected response: hello-2 test -- returned by echo-server-2, confirming the CIDR-based rule matched.

Match by destination port (port)

Route TCP traffic based on the destination port. Only supported in Sidecar mode.

The following VirtualService forwards traffic destined for port 19000 to echo-server-2.

  1. Save the following YAML as port.yaml:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: echo
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - port: 19000
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
  2. Apply the VirtualService:

    kubectl apply -f port.yaml
  3. Verify:

    kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    Expected response: hello-2 test -- confirming the port-based rule matched.

Match by source labels (sourceLabels)

Route TCP traffic based on labels of the source workload. Only supported in Sidecar mode.

The following VirtualService matches requests from workloads labeled app: telnet-client and routes them to echo-server-2. The telnet-client pod carries this label, while telnet-client-2 carries app: telnet-client-2, so only traffic from telnet-client matches.

  1. Save the following YAML as sourceLabels.yaml:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: echo
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - sourceLabels:
                app: telnet-client
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
  2. Apply the VirtualService:

    kubectl apply -f sourceLabels.yaml
  3. Test from telnet-client (label matches):

    kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    Expected response: hello-2 test -- traffic routed to echo-server-2.

  4. Test from telnet-client-2 (label does not match):

    kubectl exec -it telnet-client-2-c56db78bd-7**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    Expected response: hello test -- traffic routed to echo-server (default), confirming the label-based rule only applies to matching workloads.

Match by gateway (gateways)

Route TCP traffic based on which ASM gateway received the request. Supported in both Sidecar mode and Ambient Mesh mode.

In this example, two ASM gateways direct traffic to different backends: ingressgateway-1 routes to echo-server, and ingressgateway-2 routes to echo-server-2.

Set up the gateways

  1. Create two ASM gateways named ingressgateway-1 and ingressgateway-2. For details, see Create an ingress gateway.

    Configure the same settings on both gateways. Under Port Mapping, click Add Port, set Protocol to TCP, and set Service Port to 19000.

    Port mapping configuration

  2. Deploy the Istio Gateway resources. For details on using the ASM console, see Manage Istio gateways.

    1. Save the following YAML files. Both listen on port 19000 with TCP protocol and host *.

      ingressgateway-1.yaml

      apiVersion: networking.istio.io/v1beta1
      kind: Gateway
      metadata:
        name: ingressgateway-1
        namespace: istio-system
      spec:
        selector:
          istio: ingressgateway-1
        servers:
          - hosts:
              - '*'
            port:
              name: tcp
              number: 19000
              protocol: TCP

      ingressgateway-2.yaml

      apiVersion: networking.istio.io/v1beta1
      kind: Gateway
      metadata:
        name: ingressgateway-2
        namespace: istio-system
      spec:
        selector:
          istio: ingressgateway-2
        servers:
          - hosts:
              - '*'
            port:
              name: tcp
              number: 19000
              protocol: TCP
    2. Apply the Istio Gateways:

      kubectl apply -f ingressgateway-1.yaml
      kubectl apply -f ingressgateway-2.yaml

Deploy the VirtualService

The following VirtualService defines two routing rules. Traffic from ingressgateway-2 goes to echo-server-2; all other traffic (including from ingressgateway-1) goes to echo-server. For details on using the ASM console, see Manage virtual services.

  1. Save the following YAML as ingressgateway-vs.yaml:

    ingressgateway-vs.yaml

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: ingressgateway
      namespace: istio-system
    spec:
      gateways:
        - ingressgateway-1
        - ingressgateway-2
      hosts:
        - '*'
      tcp:
        - match:
            - gateways:
                - ingressgateway-2
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
        - route:
            - destination:
                host: echo-server.default.svc.cluster.local
  2. Apply the VirtualService:

    kubectl apply -f ingressgateway-vs.yaml
  3. Test through ingressgateway-1. From a host that can reach the gateway, connect to the external IP address of the ingressgateway-1 service (shown as a placeholder below) and send test:

    telnet <ingressgateway-1 external IP> 19000
    test

    Expected response: hello test -- routed to echo-server.

  4. Test through ingressgateway-2. Connect to the external IP address of the ingressgateway-2 service (placeholder below) and send test:

    telnet <ingressgateway-2 external IP> 19000
    test

    Expected response: hello-2 test -- routed to echo-server-2.

Match by source namespace (sourceNamespace)

Route TCP traffic based on the namespace of the source workload. Only supported in Sidecar mode.

The following VirtualService routes traffic from the foo namespace to echo-server-2, while traffic from other namespaces falls through to echo-server. For details on using the ASM console, see Manage virtual services.

  1. Save the following YAML as source-namespace.yaml:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: source-namespace
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - sourceNamespace: foo
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
        - route:
            - destination:
                host: echo-server.default.svc.cluster.local
  2. Apply the VirtualService:

    kubectl apply -f source-namespace.yaml
  3. Test from the foo namespace (matches the rule):

    kubectl -n foo exec -it telnet-client-foo-7c94569bfd-h**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    Expected response: hello-2 test -- traffic routed to echo-server-2.

  4. Test from the default namespace (falls through to default rule):

    kubectl exec -it telnet-client-c56db78bd-7**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    Expected response: hello test -- traffic routed to echo-server.

Weighted routing across subsets

Split TCP traffic across multiple versions of a service by assigning each subset a percentage. Only supported in Sidecar mode.

This example distributes traffic between prod and gray subsets of echo-server at an 80:20 ratio -- a common pattern for canary releases where a small percentage of connections test a new version before a full rollout.

Note: TCP load balancing operates at the connection level. Each new connection is independently assigned to a subset, so observed ratios may not match the configured weights exactly, especially with a small number of connections.
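As a rough illustration of that variance, the following standalone bash sketch (unrelated to the cluster; it assumes bash for $RANDOM) simulates 1000 independent connections, each landing on gray with 20% probability:

```shell
# Simulate connection-level weighted routing: each "connection" is
# independently assigned, so the observed split fluctuates around 80:20.
prod=0; gray=0
for i in $(seq 1 1000); do
  if [ $((RANDOM % 100)) -lt 20 ]; then
    gray=$((gray + 1))
  else
    prod=$((prod + 1))
  fi
done
# With only a handful of connections the same logic can easily produce
# 0:5 or 3:2; the ratio stabilizes only as the connection count grows.
echo "prod=$prod gray=$gray"
```

This is why a short manual test, like the five telnet sessions later in this topic, only approximates the configured weights.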

Deploy the canary version

  1. Save the following YAML as echo-server-backup.yaml to deploy the gray version of echo-server:

    echo-server-backup.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: echo-server-gray
      labels:
        app: echo-server
        version: gray
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echo-server
          version: gray
      template:
        metadata:
          labels:
            app: echo-server
            version: gray
        spec:
          serviceAccountName: echo-server
          containers:
          - name: server
            image: istio/tcp-echo-server:1.2
            command:
            - "/bin/tcp-echo"
            - "9000"
            - "hello-gray"
            ports:
            - containerPort: 9000
            resources:
              limits:
                cpu: "100m"
  2. Apply the Deployment:

    kubectl apply -f echo-server-backup.yaml

Define the destination rule

The destination rule creates two subsets -- prod and gray -- selected by the version label. For details on using the ASM console, see Manage destination rules.

  1. Save the following YAML as echoserver-dr.yaml:

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: echo
      namespace: default
    spec:
      host: echo-server.default.svc.cluster.local
      subsets:
        - labels:
            version: prod
          name: prod
        - labels:
            version: gray
          name: gray
  2. Apply the destination rule:

    kubectl apply -f echoserver-dr.yaml

Configure the weighted VirtualService

The VirtualService sends 80% of traffic to prod and 20% to gray. For details on using the ASM console, see Manage virtual services.

  1. Save the following YAML as echoserver-vs.yaml:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: echo
    spec:
      hosts:
      - echo-server.default.svc.cluster.local
      tcp:
        - route:
          - destination:
              host: echo-server.default.svc.cluster.local
              subset: gray
            weight: 20
          - destination:
              host: echo-server.default.svc.cluster.local
              subset: prod
            weight: 80
  2. Apply the VirtualService:

    kubectl apply -f echoserver-vs.yaml

Verify the traffic split

Send multiple requests and observe the distribution. The prod subset responds with hello; the gray subset responds with hello-gray. To exit a telnet session before starting the next, press Ctrl+] and then enter quit.

$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
test
hello-gray test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
test
hello test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
test
hello test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
test
hello test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
test
hello test

In this sample run, 1 out of 5 connections reached gray and 4 reached prod, approximating the 80:20 split.
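For longer runs, tallying the two reply types by hand gets tedious; a small filter helps. The sketch below inlines the five sample responses above rather than capturing live traffic:

```shell
# Tally responses by prefix: "hello " lines came from prod,
# "hello-gray" lines came from gray.
printf 'hello-gray test\nhello test\nhello test\nhello test\nhello test\n' |
  awk '/^hello-gray/ {gray++} /^hello / {prod++} END {printf "prod=%d gray=%d\n", prod, gray}'
```

In practice you would pipe the collected telnet output into the same awk filter instead of the inlined sample lines.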