
Alibaba Cloud Service Mesh: Layer 4 TCP routing

Last Updated: Oct 16, 2023

You can use the Layer 4 routing capabilities of Service Mesh (ASM) to route TCP traffic based on routing rules. This topic describes all TCP matching rules and routing capabilities supported by ASM.

Prerequisites

The cluster is added to the ASM instance.

Deploy sample applications

You can test Layer 4 load balancing by deploying the sample applications and the virtual services provided in this topic. Perform the following steps to deploy the echo-server and telnet-client applications in a cluster on the data plane.

  1. Create the foo namespace. For more information, see Manage global namespaces.

  2. On the Global Namespace page of the ASM console, make sure that sidecar proxy injection or the Ambient Mesh mode is enabled for both the default namespace and the foo namespace.

  3. Use kubectl to connect to the cluster on the data plane based on the information in the kubeconfig file and deploy the echo-server and telnet-client applications.

    1. Create the echoserver.yaml and telnet.yaml files by using the corresponding content shown in the following sample code:

      echoserver.yaml:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo-server
        labels:
          app: echo-server
          version: prod
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: echo-server
            version: prod
        template:
          metadata:
            labels:
              app: echo-server
              version: prod
          spec:
            serviceAccountName: echo-server
            containers:
            - name: server
              image: istio/tcp-echo-server:1.2
              ports:
              - containerPort: 9000
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: echo-server
        labels:
          app: echo-server
      spec:
        selector:
          app: echo-server
        ports:
        - protocol: TCP
          name: tcp
          port: 19000
          targetPort: 9000
        type: ClusterIP 
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: echo-server
      ---
      apiVersion: apps/v1 
      kind: Deployment
      metadata:
        name: echo-server-2 
        labels:
          app: echo-server-2
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: echo-server-2
        template:
          metadata:
            labels:
              app: echo-server-2
          spec:
            serviceAccountName: echo-server-2
            containers:
            - name: server 
              image: istio/tcp-echo-server:1.2 
              command:
              - "/bin/tcp-echo"
              - "9000"
              - "hello-2"
              ports:
              - containerPort: 9000
              resources:
                limits:
                  cpu: "100m"
      
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: echo-server-2
        labels:
          app: echo-server-2 
      spec:
        selector:
          app: echo-server-2
        ports:
        - protocol: TCP
          name: tcp
          port: 19000
          targetPort: 9000
        type: ClusterIP 
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: echo-server-2
      

      telnet.yaml:

      apiVersion: apps/v1 
      kind: Deployment
      metadata:
        name: telnet-client 
        labels:
          app: telnet-client 
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: telnet-client 
        template:
          metadata:
            labels:
              app: telnet-client 
          spec:
            serviceAccountName: telnet-client 
            containers:
            - name: client 
              image: mikesplain/telnet:latest 
              command: 
              - "sleep"
              - "3600"
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: telnet-client 
      ---
      apiVersion: apps/v1 
      kind: Deployment
      metadata:
        name: telnet-client-2 
        labels:
          app: telnet-client-2
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: telnet-client-2 
        template:
          metadata:
            labels:
              app: telnet-client-2 
          spec:
            serviceAccountName: telnet-client-2
            containers:
            - name: client 
              image: mikesplain/telnet:latest 
              command: 
              - "sleep"
              - "3600"
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: telnet-client-2
      ---
      apiVersion: apps/v1 
      kind: Deployment
      metadata:
        name: telnet-client-foo 
        namespace: foo
        labels:
          app: telnet-client-foo
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: telnet-client-foo
        template:
          metadata:
            labels:
              app: telnet-client-foo
          spec:
            serviceAccountName: telnet-client-foo
            containers:
            - name: client 
              image: mikesplain/telnet:latest 
              command: 
              - "sleep"
              - "3600"
              resources:
                limits:
                  cpu: "100m"
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: telnet-client-foo
        namespace: foo
      

    2. Run the following commands to deploy the echo-server and telnet-client applications:

      kubectl apply -f echoserver.yaml
      kubectl apply -f telnet.yaml
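
The tcp-echo servers deployed above prepend a fixed prefix ("hello" for echo-server, "hello-2" for echo-server-2) to whatever the client sends and echo it back. This is how the tests later in this topic tell the two backends apart. The following Python sketch (illustrative only, not ASM or Istio code) reproduces that behavior locally:

```python
import socket
import threading
import time

def run_echo_server(prefix: str, port: int) -> None:
    # Minimal stand-in for the istio/tcp-echo-server image: it prepends
    # a fixed prefix to whatever it receives, echoes it back, and exits.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(prefix.encode() + b" " + data)
    srv.close()

def send_and_receive(port: int, message: str) -> str:
    # Equivalent of running "telnet <host> <port>" and typing the message.
    for _ in range(50):  # retry until the server thread is listening
        try:
            conn = socket.create_connection(("127.0.0.1", port), timeout=1)
            break
        except ConnectionRefusedError:
            time.sleep(0.05)
    else:
        raise ConnectionRefusedError("echo server did not start")
    with conn:
        conn.sendall(message.encode())
        return conn.recv(1024).decode()

t = threading.Thread(target=run_echo_server, args=("hello-2", 19001))
t.start()
print(send_and_receive(19001, "test"))  # hello-2 test
t.join()
```

Sending `test` to echo-server therefore returns `hello test`, while echo-server-2 returns `hello-2 test`.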

Matching rules

destinationSubnets

You can configure the tcp.match[n].destinationSubnets attribute to match the destination CIDR blocks of traffic. If the destination IP address of traffic falls within a specified CIDR block, the traffic is routed by using the routing rule that corresponds to the tcp.match[n].destinationSubnets attribute.

Supported or not

  • Ambient Mesh mode: Not supported

  • Sidecar mode: Supported

Deploy sample resources

In the following YAML file, the destination CIDR block is 192.168.0.0/16. TCP traffic destined for this CIDR block is forwarded to the echo-server-2.default.svc.cluster.local service. You can deploy sample resources by using the CLI or in the ASM console. In the following example, a CLI is used. For more information about how to deploy resources in the ASM console, see Manage virtual services.

  1. Create a destinationSubnets.yaml file that contains the following content.

    In the sample code, the endpoint of the echo-server.default.svc.cluster.local service is assumed to be in the 192.168.0.0/16 CIDR block. Modify the CIDR block based on the actual endpoint of your service.

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: echo
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - destinationSubnets:
                - "192.168.0.0/16"
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
  2. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file, and then run the following command to deploy the virtual service:

    kubectl apply -f destinationSubnets.yaml
  3. Run the following commands to connect to echo-server.default.svc.cluster.local on port 19000 by using the telnet-client application, and then send the content test:

    kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    A message that reads hello-2 test is returned by echo-server-2. This indicates that the traffic matches the destination CIDR block declared in the virtual service. The routing is performed as expected.
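
The destinationSubnets match is a standard CIDR containment check on the destination IP address of each connection. The following Python sketch (illustrative only; the sidecar proxy performs this check internally) shows the semantics:

```python
import ipaddress

def matches_destination_subnet(dest_ip: str, subnet: str) -> bool:
    # True if the destination IP address falls inside the CIDR block,
    # mirroring the tcp.match[n].destinationSubnets containment check.
    return ipaddress.ip_address(dest_ip) in ipaddress.ip_network(subnet)

print(matches_destination_subnet("192.168.12.34", "192.168.0.0/16"))  # True
print(matches_destination_subnet("10.0.0.5", "192.168.0.0/16"))       # False
```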

port

You can configure the tcp.match[n].port attribute to match the destination ports of traffic. If the destination port of a request matches the value specified by this attribute, the request is routed by using the routing rule that corresponds to the tcp.match[n].port attribute.

Supported or not

  • Ambient Mesh mode: Not supported

  • Sidecar mode: Supported

Deploy sample resources

In the following YAML file, the destination port is 19000. TCP traffic destined for this port is forwarded to the echo-server-2.default.svc.cluster.local service. You can deploy sample resources by using the CLI or in the ASM console. In the following example, a CLI is used. For more information about how to deploy sample resources in the ASM console, see Manage virtual services.

  1. Create a port.yaml file that contains the following content:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: echo
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - port: 19000
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
  2. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file, and then run the following command to deploy the virtual service:

    kubectl apply -f port.yaml
  3. Run the following commands to connect to echo-server.default.svc.cluster.local on port 19000 by using the telnet-client application, and then send the content test:

    kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    A message that reads hello-2 test is returned by echo-server-2. This indicates that the traffic matches the port 19000 rule declared in the virtual service. The routing is performed as expected.

sourceLabels

You can configure the tcp.match[n].sourceLabels attribute to match requests sent from a specific workload.

Supported or not

  • Ambient Mesh mode: Not supported

  • Sidecar mode: Supported

Deploy sample resources

The following YAML file matches requests sent from the workload that carries the app: telnet-client label. The telnet-client application carries the app: telnet-client label and the telnet-client-2 application carries the app: telnet-client-2 label. Therefore, TCP traffic sent from the telnet-client application rather than TCP traffic sent from the telnet-client-2 application matches the rule defined in the following virtual service. You can deploy sample resources by using the CLI or in the ASM console. In the following example, a CLI is used. For more information about how to deploy sample resources in the ASM console, see Manage virtual services.

  1. Create a sourceLabels.yaml file that contains the following content:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: echo
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - sourceLabels:
                app: telnet-client
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
  2. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file, and then run the following command to deploy the virtual service:

    kubectl apply -f sourceLabels.yaml
  3. Run the following commands to connect to echo-server.default.svc.cluster.local on port 19000 by using the telnet-client application, and then send the content test:

    kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    A message that reads hello-2 test is returned by echo-server-2. This indicates that the traffic sent from the pod on which the telnet-client application runs matches the routing rule declared in the virtual service.

  4. Run the following commands to connect to echo-server.default.svc.cluster.local on port 19000 by using the telnet-client-2 application, and then send the content test:

    kubectl exec -it telnet-client-2-c56db78bd-7**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    A message that reads hello test is returned by echo-server. This indicates that traffic is not forwarded to echo-server-2 and traffic sent from the pod on which the telnet-client-2 application runs does not match the routing rule declared in the virtual service. The routing is performed as expected.

gateways

You can configure tcp.match[n].gateways to match requests sent from a specific gateway to a service.

Supported or not

  • Ambient Mesh mode: Supported

  • Sidecar mode: Supported

Deploy sample resources

In the following example, you must also deploy two ASM gateways, named ingressgateway-1 and ingressgateway-2, in addition to meeting the preceding prerequisites. After you apply the virtual service provided in the example to these two gateways, traffic sent from ingressgateway-1 is routed to the echo-server application, and traffic sent from ingressgateway-2 is routed to the echo-server-2 application.

  1. Create two ASM gateways and name them ingressgateway-1 and ingressgateway-2. For more information, see Create an ingress gateway.

    Configure the same settings on the ingressgateway-1 and ingressgateway-2 gateways. When you configure Port Mapping, click Add Port, set Protocol to TCP, and set Service Port to 19000.

  2. Deploy Istio Gateways.

    In the following example, a CLI is used. For more information about how to deploy an Istio Gateway in the ASM console, see Manage Istio gateways.

    1. Use the following content to create the ingressgateway-1.yaml and ingressgateway-2.yaml files respectively.

      In the following sample code, a listening port is configured for each of the two gateways. Because TCP traffic is used in this example, hosts is set to *.

      ingressgateway-1.yaml:

      apiVersion: networking.istio.io/v1beta1
      kind: Gateway
      metadata:
        name: ingressgateway-1
        namespace: istio-system
      spec:
        selector:
          istio: ingressgateway-1
        servers:
          - hosts:
              - '*'
            port:
              name: tcp
              number: 19000
              protocol: TCP

      ingressgateway-2.yaml:

      apiVersion: networking.istio.io/v1beta1
      kind: Gateway
      metadata:
        name: ingressgateway-2
        namespace: istio-system
      spec:
        selector:
          istio: ingressgateway-2
        servers:
          - hosts:
              - '*'
            port:
              name: tcp
              number: 19000
              protocol: TCP
    2. Run the following commands to deploy Istio Gateways:

      kubectl apply -f ingressgateway-1.yaml
      kubectl apply -f ingressgateway-2.yaml
  3. Deploy a virtual service for the two gateways.

    The following virtual service defines two routing rules. The first rule matches the source gateway: if traffic is sent from ingressgateway-2, the traffic is routed to echo-server-2.default.svc.cluster.local. The second rule is the default route, which routes all other traffic to echo-server.default.svc.cluster.local. You can deploy sample resources by using the CLI or in the ASM console. In the following example, a CLI is used. For more information about how to deploy sample resources in the ASM console, see Manage virtual services.

    1. Create an ingressgateway-vs.yaml file that contains the following content:

      ingressgateway-vs.yaml:

      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: ingressgateway
        namespace: istio-system
      spec:
        gateways:
          - ingressgateway-1
          - ingressgateway-2
        hosts:
          - '*'
        tcp:
          - match:
              - gateways:
                  - ingressgateway-2
            route:
              - destination:
                  host: echo-server-2.default.svc.cluster.local
          - route:
              - destination:
                  host: echo-server.default.svc.cluster.local
      
  4. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file, and then run the following command to deploy the virtual service:

    kubectl apply -f ingressgateway-vs.yaml
  5. Run the following commands to connect to the IP address of the ingressgateway-1 gateway on port 19000 and send the content test. Replace <IP address of ingressgateway-1> with the actual IP address of the gateway.

    kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet <IP address of ingressgateway-1> 19000
    test

    A message that reads hello test is returned. As defined by the virtual service, traffic sent from ingressgateway-1 should be routed to echo-server.default.svc.cluster.local. Traffic is routed as expected.

  6. Run the following commands to connect to the IP address of the ingressgateway-2 gateway on port 19000 and send the content test. Replace <IP address of ingressgateway-2> with the actual IP address of the gateway.

    kubectl exec -it telnet-client-2-c56db78bd-7**** -c client -- telnet <IP address of ingressgateway-2> 19000
    test

    A message that reads hello-2 test is returned. As defined by the virtual service, traffic sent from ingressgateway-2 should be routed to echo-server-2.default.svc.cluster.local. Traffic is routed as expected.

sourceNamespace

You can configure the sourceNamespace field to match the source namespace of traffic.

Supported or not

  • Ambient Mesh mode: Not supported

  • Sidecar mode: Supported

Deploy sample resources

In the following YAML file, traffic that is sent from the foo namespace and destined for echo-server.default.svc.cluster.local is routed to echo-server-2.default.svc.cluster.local. You can deploy sample resources by using the CLI or in the ASM console. In the following example, a CLI is used. For more information about how to deploy resources in the ASM console, see Manage virtual services.

  1. Create a source-namespace.yaml file that contains the following content:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: source-namespace
      namespace: default
    spec:
      hosts:
        - echo-server.default.svc.cluster.local
      tcp:
        - match:
            - sourceNamespace: foo
          route:
            - destination:
                host: echo-server-2.default.svc.cluster.local
        - route:
            - destination:
                host: echo-server.default.svc.cluster.local
    
  2. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file, and then run the following command to deploy the virtual service:

    kubectl apply -f source-namespace.yaml
  3. Run the following commands to send the content test to initiate a test by using the telnet-client application in the foo namespace:

    kubectl -n foo exec -it telnet-client-foo-7c94569bfd-h**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    A message that reads hello2 test is returned by echo-server-2. This indicates that traffic sent from the pod on which the telnet-client application in the foo namespace runs matches the routing rule declared in the virtual service.

  4. Run the following commands to send the content test to initiate a test by using the telnet-client application in the default namespace:

    kubectl exec -it telnet-client-c56db78bd-7**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test

    A message that reads hello test is returned by echo-server and traffic is not forwarded to echo-server-2. This indicates that traffic sent from the default/telnet-client pod is routed to echo-server.default.svc.cluster.local according to the default matching rule defined in the virtual service.

Routing

weight

You can configure the weight field to route traffic among multiple subsets based on percentages of traffic.

Supported or not

  • Ambient Mesh mode: Not supported

  • Sidecar mode: Supported

Deploy sample resources

  1. Create a Deployment.

    1. Create an echo-server-backup.yaml file that contains the following content.

      The following YAML file is used to deploy the gray subset of the echo-server application.

      echo-server-backup.yaml:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo-server-gray
        labels:
          app: echo-server
          version: gray
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: echo-server
            version: gray
        template:
          metadata:
            labels:
              app: echo-server
              version: gray
          spec:
            serviceAccountName: echo-server
            containers:
            - name: server
              image: istio/tcp-echo-server:1.2
              command:
              - "/bin/tcp-echo"
              - "9000"
              - "hello-gray"
              ports:
              - containerPort: 9000
              resources:
                limits:
                  cpu: "100m"
    2. Use kubectl to connect to the cluster on the data plane based on the information in the kubeconfig file, and then run the following command to create a Deployment:

      kubectl apply -f echo-server-backup.yaml
  2. Deploy a destination rule.

    In the following example, a CLI is used. For more information about how to deploy a destination rule in the ASM console, see Manage destination rules.

    1. Create an echoserver-dr.yaml file that contains the following content.

      The following YAML file defines a destination rule that creates two subsets for the echo-server application: prod and gray.

      apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        name: echo
        namespace: default
      spec:
        host: echo-server.default.svc.cluster.local
        subsets:
          - labels:
              version: prod
            name: prod
          - labels:
              version: gray
            name: gray
    2. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file, and then run the following command to deploy the destination rule:

      kubectl apply -f echoserver-dr.yaml
  3. Deploy a virtual service.

    In the following example, a CLI is used. For more information about how to deploy a virtual service in the ASM console, see Manage virtual services.

    1. Create an echoserver-vs.yaml file that contains the following content.

      The following YAML file defines a virtual service that configures a traffic weight of 80:20 for the prod and gray subsets. This means that 80% of traffic is routed to the prod subset, while 20% of traffic is routed to the gray subset.

      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: echo
      spec:
        hosts:
        - echo-server.default.svc.cluster.local
        tcp:
          - route:
            - destination:
                host: echo-server.default.svc.cluster.local
                subset: gray
              weight: 20
            - destination:
                host: echo-server.default.svc.cluster.local
                subset: prod
              weight: 80
    2. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file, and then run the following command to deploy the virtual service:

      kubectl apply -f echoserver-vs.yaml
  4. Run the following commands to send the content test by using the telnet-client application. After you receive a reply, press the Ctrl+D shortcut to exit. Repeat the test five times.

    $ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test
    hello-gray test
    $ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test
    hello test
    $ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test
    hello test
    $ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test
    hello test
    $ kubectl exec -it telnet-client-5786fc744f-9**** -c client -- telnet echo-server.default.svc.cluster.local 19000
    test
    hello test

    Load balancing of TCP traffic operates at the connection level. In the preceding sample, the five connections are distributed between the prod and gray subsets based on the traffic percentages specified in the virtual service. Note that traffic is not routed strictly based on the configured percentages: each connection is assigned independently, so the actual split approaches 80:20 only over a large number of connections.
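
The connection-level weighting can be illustrated with a short simulation (plain Python, not ASM code): each new connection independently picks a subset with probability proportional to its weight, so a small sample such as five connections can deviate from 80:20, while a large sample converges to it.

```python
import random

def pick_subset(weights: dict, rng: random.Random) -> str:
    # Each new TCP connection is independently assigned to a subset with
    # probability proportional to its weight, as in the 80:20 rule above.
    subsets = list(weights)
    return rng.choices(subsets, weights=[weights[s] for s in subsets], k=1)[0]

rng = random.Random(42)  # fixed seed so the run is reproducible
counts = {"prod": 0, "gray": 0}
for _ in range(10_000):
    counts[pick_subset({"prod": 80, "gray": 20}, rng)] += 1

# With 10,000 connections the split is close to 80:20; any run of only
# five connections, as in the test above, can deviate noticeably.
print(counts)
```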