Service Mesh (ASM) routes TCP traffic through VirtualService rules that match on destination subnets, ports, source labels, gateways, or source namespaces. Combined with weighted routing, these match attributes let you build fine-grained Layer 4 traffic policies for canary releases, namespace isolation, and gateway-based routing.
## How TCP matching works
A VirtualService `tcp` block accepts one or more `match` entries:

- Within a single `match` block, all conditions are combined with AND logic.
- Across `match` blocks, conditions are evaluated with OR logic -- the first block that matches wins.
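For example, the following sketch (a hypothetical rule, not one of the examples below) routes to echo-server-2 when the destination port is 19000 AND the source carries `app: telnet-client`, OR when the destination falls within 10.0.0.0/8:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: and-or-example   # hypothetical name, for illustration only
  namespace: default
spec:
  hosts:
    - echo-server.default.svc.cluster.local
  tcp:
    - match:
        # AND: both conditions in this entry must hold for it to match.
        - port: 19000
          sourceLabels:
            app: telnet-client
        # OR: this second entry matches independently of the first.
        - destinationSubnets:
            - "10.0.0.0/8"
      route:
        - destination:
            host: echo-server-2.default.svc.cluster.local
```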
## Match attributes at a glance

| Attribute | What it matches | Sidecar mode | Ambient Mesh mode |
|---|---|---|---|
| `destinationSubnets` | Destination CIDR block | Supported | Not supported |
| `port` | Destination port | Supported | Not supported |
| `sourceLabels` | Labels on the source workload | Supported | Not supported |
| `gateways` | The gateway that received the request | Supported | Supported |
| `sourceNamespace` | Namespace of the source workload | Supported | Not supported |
## Prerequisites

### Deploy the sample applications
The examples in this topic use two TCP echo servers and several telnet clients. Deploy them before testing any routing rule.

1. Create the `foo` namespace. For details, see Manage global namespaces.
2. On the Global Namespace page of the ASM console, enable sidecar proxy injection or the Ambient Mesh mode for both the `default` and `foo` namespaces.
3. Connect to the data-plane cluster with kubectl, then deploy the echo servers and telnet clients.
   1. Create `echoserver.yaml` with the echo server manifests.
   2. Create `telnet.yaml` with the telnet client manifests.
   3. Apply both files:

      ```shell
      kubectl apply -f echoserver.yaml
      kubectl apply -f telnet.yaml
      ```

4. After deployment, echo-server responds with `hello` and echo-server-2 responds with `hello-2`. This difference lets you confirm which backend received the request.
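The sample manifests are not reproduced here. A minimal sketch of what `echoserver.yaml` and `telnet.yaml` might contain follows; the image, args, labels, and ports are assumptions (it uses the Istio `tcp-echo-server` sample image, which prefixes every reply with a configurable string), not the original manifests. `echo-server-2` and the remaining telnet clients follow the same pattern with adjusted names, prefixes (`hello-2`), and labels.

```yaml
# Sketch of echoserver.yaml (assumed content; adapt to your environment).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: istio/tcp-echo-server:1.2   # assumed sample image
          args: ["9000", "hello"]            # listen on 9000, prefix replies with "hello"
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  namespace: default
spec:
  selector:
    app: echo-server
  ports:
    - name: tcp             # a "tcp" port name tells Istio to treat this as raw TCP
      port: 19000
      targetPort: 9000
---
# Sketch of telnet.yaml: a client pod used only as a source of TCP traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: telnet-client
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: telnet-client
  template:
    metadata:
      labels:
        app: telnet-client
    spec:
      containers:
        - name: client
          image: busybox:1.36              # assumed image providing a telnet client
          command: ["sleep", "infinity"]
```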
## Match by destination subnet (destinationSubnets)

Route TCP traffic based on the destination CIDR block. Only supported in Sidecar mode.

The following VirtualService forwards traffic destined for 192.168.0.0/16 to echo-server-2. The endpoint of `echo-server.default.svc.cluster.local` falls within this CIDR block. Adjust the CIDR to match your environment.
1. Save the following YAML as `destinationSubnets.yaml`:

   ```yaml
   apiVersion: networking.istio.io/v1beta1
   kind: VirtualService
   metadata:
     name: echo
     namespace: default
   spec:
     hosts:
       - echo-server.default.svc.cluster.local
     tcp:
       - match:
           - destinationSubnets:
               - "192.168.0.0/16"
         route:
           - destination:
               host: echo-server-2.default.svc.cluster.local
   ```

2. Connect to the ASM instance with kubectl and apply the VirtualService:

   ```shell
   kubectl apply -f destinationSubnets.yaml
   ```

3. Verify by sending a TCP request from the telnet client:

   ```shell
   kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
   test
   ```

   Expected response:

   ```
   hello-2 test
   ```

   The reply comes from echo-server-2, confirming the CIDR-based rule matched.
## Match by destination port (port)

Route TCP traffic based on the destination port. Only supported in Sidecar mode.

The following VirtualService forwards traffic destined for port 19000 to echo-server-2.
1. Save the following YAML as `port.yaml`:

   ```yaml
   apiVersion: networking.istio.io/v1beta1
   kind: VirtualService
   metadata:
     name: echo
     namespace: default
   spec:
     hosts:
       - echo-server.default.svc.cluster.local
     tcp:
       - match:
           - port: 19000
         route:
           - destination:
               host: echo-server-2.default.svc.cluster.local
   ```

2. Apply the VirtualService:

   ```shell
   kubectl apply -f port.yaml
   ```

3. Verify:

   ```shell
   kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
   test
   ```

   Expected response:

   ```
   hello-2 test
   ```

   The reply comes from echo-server-2, confirming the port-based rule matched.
## Match by source labels (sourceLabels)

Route TCP traffic based on labels of the source workload. Only supported in Sidecar mode.

The following VirtualService matches requests from workloads labeled `app: telnet-client` and routes them to echo-server-2. The telnet-client pod carries this label, while telnet-client-2 carries `app: telnet-client-2`, so only traffic from telnet-client matches.
1. Save the following YAML as `sourceLabels.yaml`:

   ```yaml
   apiVersion: networking.istio.io/v1beta1
   kind: VirtualService
   metadata:
     name: echo
     namespace: default
   spec:
     hosts:
       - echo-server.default.svc.cluster.local
     tcp:
       - match:
           - sourceLabels:
               app: telnet-client
         route:
           - destination:
               host: echo-server-2.default.svc.cluster.local
   ```

2. Apply the VirtualService:

   ```shell
   kubectl apply -f sourceLabels.yaml
   ```

3. Test from telnet-client (label matches):

   ```shell
   kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
   test
   ```

   Expected response:

   ```
   hello-2 test
   ```

   Traffic is routed to echo-server-2.

4. Test from telnet-client-2 (label does not match):

   ```shell
   kubectl exec -it telnet-client-2-c56db78bd-7**** -c client -- telnet echo-server.default.svc.cluster.local:19000
   test
   ```

   Expected response:

   ```
   hello test
   ```

   Traffic is routed to echo-server (the default backend), confirming the label-based rule only applies to matching workloads.
## Match by gateway (gateways)

Route TCP traffic based on which ASM gateway received the request. Supported in both Sidecar mode and Ambient Mesh mode.

In this example, two ASM gateways direct traffic to different backends: ingressgateway-1 routes to echo-server, and ingressgateway-2 routes to echo-server-2.

### Set up the gateways

1. Create two ASM gateways named `istio-ingressgateway-1` and `istio-ingressgateway-2`. For details, see Create an ingress gateway.

   Configure the same settings on both gateways. Under Port Mapping, click Add Port, set Protocol to TCP, and set Service Port to 19000.
2. Deploy the Istio Gateway resources. For details on using the ASM console, see Manage Istio gateways.

   1. Save the Gateway manifests as `ingressgateway-1.yaml` and `ingressgateway-2.yaml`. Both listen on port 19000 with TCP protocol and host `*`.
   2. Apply the Istio Gateways:

      ```shell
      kubectl apply -f ingressgateway-1.yaml
      kubectl apply -f ingressgateway-2.yaml
      ```
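The Gateway manifests themselves are not reproduced here. A minimal sketch of what `ingressgateway-1.yaml` could look like follows; the namespace and `selector` labels are assumptions that must match how your ASM gateway pods are actually labeled. `ingressgateway-2.yaml` is identical apart from the names.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingressgateway-1
  namespace: istio-system      # assumed namespace
spec:
  selector:
    istio: ingressgateway-1    # assumed label on the istio-ingressgateway-1 pods
  servers:
    - port:
        number: 19000
        name: tcp
        protocol: TCP
      hosts:
        - "*"
```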
### Deploy the VirtualService

The following VirtualService defines two routing rules. Traffic from ingressgateway-2 goes to echo-server-2; all other traffic (including from ingressgateway-1) goes to echo-server. For details on using the ASM console, see Manage virtual services.
1. Save the VirtualService manifest as `ingressgateway-vs.yaml`.
2. Apply the VirtualService:

   ```shell
   kubectl apply -f ingressgateway-vs.yaml
   ```

3. Test through ingressgateway-1 by connecting to its IP address and sending `test`:

   ```shell
   kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet <ingressgateway-1 IP address>:19000
   test
   ```

   Expected response:

   ```
   hello test
   ```

   Traffic is routed to echo-server.

4. Test through ingressgateway-2 by connecting to its IP address and sending `test`:

   ```shell
   kubectl exec -it telnet-client-2-c56db78bd-7**** -c client -- telnet <ingressgateway-2 IP address>:19000
   test
   ```

   Expected response:

   ```
   hello-2 test
   ```

   Traffic is routed to echo-server-2.
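The manifest for `ingressgateway-vs.yaml` is not reproduced here. A sketch of what it might contain, based on the routing behavior described above (the name and port numbers are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: echo-gateway       # assumed name
  namespace: default
spec:
  hosts:
    - "*"
  gateways:
    - ingressgateway-1
    - ingressgateway-2
  tcp:
    # Traffic that arrived through ingressgateway-2 goes to echo-server-2.
    - match:
        - gateways:
            - ingressgateway-2
      route:
        - destination:
            host: echo-server-2.default.svc.cluster.local
            port:
              number: 19000
    # Everything else (including ingressgateway-1) goes to echo-server.
    - route:
        - destination:
            host: echo-server.default.svc.cluster.local
            port:
              number: 19000
```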
## Match by source namespace (sourceNamespace)

Route TCP traffic based on the namespace of the source workload. Only supported in Sidecar mode.

The following VirtualService routes traffic from the `foo` namespace to echo-server-2, while traffic from other namespaces falls through to echo-server. For details on using the ASM console, see Manage virtual services.
1. Save the following YAML as `source-namespace.yaml`:

   ```yaml
   apiVersion: networking.istio.io/v1beta1
   kind: VirtualService
   metadata:
     name: source-namespace
     namespace: default
   spec:
     hosts:
       - echo-server.default.svc.cluster.local
     tcp:
       - match:
           - sourceNamespace: foo
         route:
           - destination:
               host: echo-server-2.default.svc.cluster.local
       - route:
           - destination:
               host: echo-server.default.svc.cluster.local
   ```

2. Apply the VirtualService:

   ```shell
   kubectl apply -f source-namespace.yaml
   ```

3. Test from the `foo` namespace (matches the rule):

   ```shell
   kubectl -n foo exec -it telnet-client-foo-7c94569bfd-h**** -c client -- telnet echo-server.default.svc.cluster.local:19000
   test
   ```

   Expected response:

   ```
   hello-2 test
   ```

   Traffic is routed to echo-server-2.

4. Test from the `default` namespace (falls through to the default rule):

   ```shell
   kubectl exec -it telnet-client-c56db78bd-7**** -c client -- telnet echo-server.default.svc.cluster.local:19000
   test
   ```

   Expected response:

   ```
   hello test
   ```

   Traffic is routed to echo-server.
## Weighted routing across subsets

Split TCP traffic across multiple versions of a service by assigning each subset a percentage. Only supported in Sidecar mode.

This example distributes traffic between the prod and gray subsets of echo-server at an 80:20 ratio -- a common pattern for canary releases where a small percentage of connections test a new version before a full rollout.

Note: TCP load balancing operates at the connection level. Each new connection is independently assigned to a subset, so observed ratios may not match the configured weights exactly, especially with a small number of connections.
### Deploy the canary version

1. Save the canary Deployment as `echo-server-backup.yaml` to create the `gray` subset of echo-server.
2. Apply the Deployment:

   ```shell
   kubectl apply -f echo-server-backup.yaml
   ```
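The canary manifest is not reproduced here. A sketch of what `echo-server-backup.yaml` might contain, mirroring the primary echo server but labeled `version: gray` and replying with the `hello-gray` prefix (the name, image, and port are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server-gray     # assumed name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
      version: gray
  template:
    metadata:
      labels:
        app: echo-server     # same app label, so the echo-server Service also selects these pods
        version: gray        # subset selector used by the destination rule
    spec:
      containers:
        - name: echo-server
          image: istio/tcp-echo-server:1.2   # assumed sample image
          args: ["9000", "hello-gray"]       # replies prefixed with "hello-gray"
```

Note that for the prod subset to resolve, the primary echo-server pods must also carry a `version: prod` label.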
### Define the destination rule

The destination rule creates two subsets -- prod and gray -- selected by the version label. Its `host` must match the destination host used by the VirtualService. For details on using the ASM console, see Manage destination rules.

1. Save the following YAML as `echoserver-dr.yaml`:

   ```yaml
   apiVersion: networking.istio.io/v1beta1
   kind: DestinationRule
   metadata:
     name: echo
     namespace: default
   spec:
     host: echo-server.default.svc.cluster.local
     subsets:
       - name: prod
         labels:
           version: prod
       - name: gray
         labels:
           version: gray
   ```

2. Apply the destination rule:

   ```shell
   kubectl apply -f echoserver-dr.yaml
   ```
### Configure the weighted VirtualService

The VirtualService sends 80% of traffic to prod and 20% to gray. Because this is TCP traffic, the weighted routes belong under `tcp`, not `http`. For details on using the ASM console, see Manage virtual services.

1. Save the following YAML as `echoserver-vs.yaml`:

   ```yaml
   apiVersion: networking.istio.io/v1beta1
   kind: VirtualService
   metadata:
     name: echo
   spec:
     hosts:
       - echo-server.default.svc.cluster.local
     tcp:
       - route:
           - destination:
               host: echo-server.default.svc.cluster.local
               subset: gray
             weight: 20
           - destination:
               host: echo-server.default.svc.cluster.local
               subset: prod
             weight: 80
   ```

2. Apply the VirtualService:

   ```shell
   kubectl apply -f echoserver-vs.yaml
   ```
### Verify the traffic split

Send multiple requests and observe the distribution. The prod subset responds with `hello`; the gray subset responds with `hello-gray`. Press Ctrl+D to exit each telnet session before starting the next.

```shell
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
test
hello-gray test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
test
hello test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
test
hello test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
test
hello test
$ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local:19000
test
hello test
```

In this sample run, 1 out of 5 connections reached gray and 4 reached prod, approximating the 80:20 split.