
Alibaba Cloud Service Mesh: Configure redirection of HTTP or TCP health check requests for applications in an ASM instance

Last Updated: Mar 11, 2024

After you add an application to a Service Mesh (ASM) instance, a sidecar proxy is injected into the pod of the application and intercepts all requests that are sent to the application. As a result, HTTP or TCP health checks that you enable for the application may not work as expected. For example, health checks may always fail. This topic describes how to configure redirection of HTTP or TCP health check requests for applications in an ASM instance.

Background information

TCP or HTTP health checks for applications in an ASM instance may experience the following issues. To resolve these issues, you need to configure redirection of health check requests for applications in an ASM instance by using annotations.

HTTP health check requests

The kubelet service sends health check requests to the pods in a Kubernetes cluster. If you enable mutual Transport Layer Security (mTLS) for an ASM instance, applications in the instance must use TLS to communicate with each other. Because the kubelet service is not part of the ASM instance, it cannot present a TLS certificate that is issued by ASM. As a result, applications reject all HTTP health check requests, and the health checks always fail.

Note

If you do not enable mTLS for an ASM instance, HTTP health checks on the pods of applications succeed. In this case, you can enable HTTP health checks without configuring redirection of health check requests.

TCP health check requests

Sidecar proxies listen on all ports of the pods in an ASM instance to intercept requests. If you enable TCP health checks for an application, the kubelet service checks whether the specified port of the pod is listened on. If so, the health check succeeds.

Because the injected sidecar proxy listens on all ports, this check succeeds as long as the sidecar proxy works as expected, regardless of whether the application itself is healthy. For example, if you configure an invalid port for the application, the health checks should fail and the pod should not become ready. However, the health checks always succeed.

By default, health check requests for applications in an ASM instance are displayed as calls in the mesh topology. If health checks do not work as expected, these requests can also skew traffic statistics. Configuring redirection of health check requests ensures that the traffic statistics remain accurate.

Configure redirection of HTTP health check requests

In this example, an NGINX application is used. After you enable mTLS for the ASM instance in which the NGINX application resides, HTTP health checks for the NGINX application always fail. In this case, you can configure redirection of health check requests for the NGINX application. Then, check the events of the pod of the NGINX application. If no events that indicate a failed health check exist and the pod is ready, redirection of health check requests takes effect for the application.

Step 1: Enable the global mTLS mode for an ASM instance

  1. Log on to the ASM console.

  2. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  3. On the Mesh Management page, find the ASM instance that you want to configure. Click the name of the ASM instance or click Manage in the Actions column.

  4. On the details page of the ASM instance, choose Mesh Security Center > PeerAuthentication in the left-side navigation pane.

  5. In the upper part of the PeerAuthentication page, select a namespace from the Namespace drop-down list and click Configure Global mTLS Mode.

  6. On the Configure Global mTLS Mode page, select STRICT - Strictly Enforce mTLS for the mTLS Mode (Namespace-wide) parameter and click Create.
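For reference, the console steps above configure a namespace-wide PeerAuthentication resource similar to the following sketch. The namespace `default` is an example; use the namespace that you selected in the console.

```yaml
# Example PeerAuthentication resource; the namespace is illustrative.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT   # reject plaintext traffic, including kubelet HTTP probes
```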

Step 2: Deploy an NGINX application

  1. Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  2. Deploy an NGINX application.

    1. Create an http-liveness.yaml file that contains the following content:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        selector:
          matchLabels:
            app: nginx
        replicas: 1
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 80
              readinessProbe:
                httpGet:
                  path: /index.html
                  port: 80
                  httpHeaders:
                  - name: X-Custom-Header
                    value: hello
                initialDelaySeconds: 5
                periodSeconds: 3

      The httpGet field in the readinessProbe field enables HTTP health checks for the NGINX application.

    2. Run the following command to deploy the NGINX application:

      kubectl apply -f http-liveness.yaml
  3. View the health check result of the NGINX application.

    1. Run the following command to view the name of the pod that runs the NGINX application:

      kubectl get pod | grep nginx
    2. Run the following command to view the events of the pod:

      kubectl describe pod <Pod name>

      Expected output:

      Warning  Unhealthy  45s               kubelet            Readiness probe failed: Get "http://172.23.64.22:80/index.html": read tcp 172.23.64.1:54130->172.23.64.22:80: read: connection reset by peer

      The output indicates that the HTTP health check for the pod fails. In this case, the pod is not ready.

Step 3: Configure redirection of health check requests for the NGINX application

  1. Run the following command to open the http-liveness.yaml file:

    vim http-liveness.yaml

    Add the following content to the template field:

    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "true"

    The following code shows an example of the http-liveness.yaml file after you add an annotation:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
        spec:
          containers:
          - name: nginx
            image: nginx
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 80
            readinessProbe:
              httpGet:
                path: /index.html
                port: 80
                httpHeaders:
                - name: X-Custom-Header
                  value: hello
              initialDelaySeconds: 5
              periodSeconds: 3
  2. Run the following command to deploy the NGINX application:

    kubectl apply -f http-liveness.yaml

Step 4: Verify that the health check result meets your expectations

  1. View the health check result of the pod.

    1. Run the following command to view the name of the pod that runs the NGINX application:

      kubectl get pod | grep nginx
    2. Run the following command to view the events of the pod:

      kubectl describe pod <Pod name>

      In the command output, no events that indicate a failed health check for the pod exist. The pod is ready. This indicates that redirection of HTTP health check requests takes effect for the application.

  2. Run the following command to view the YAML file of the pod after you configure redirection of health check requests:

    kubectl get pod nginx-deployment-676f85f66b-cbzsx -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    metadata:
      ...
      name: nginx-deployment-676f85f66b-cbzsx
      namespace: default
      ...
    spec:
      containers:
        - args:
            - proxy
            - sidecar
            - '--domain'
            - $(POD_NAMESPACE).svc.cluster.local
            - '--proxyLogLevel=warning'
            - '--proxyComponentLogLevel=misc:error'
            - '--log_output_level=default:info'
            - '--concurrency'
            - '2'
          env:
            ...
            - name: ISTIO_KUBE_APP_PROBERS
              value: >-
                {"/app-health/nginx/readyz":{"httpGet":{"path":"/index.html","port":80,"scheme":"HTTP","httpHeaders":[{"name":"X-Custom-Header","value":"hello"}]},"timeoutSeconds":1}}
          ...
        - image: nginx
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
            - containerPort: 80
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              httpHeaders:
                - name: X-Custom-Header
                  value: hello
              path: /app-health/nginx/readyz
              port: 15020
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 1

    After you configure redirection of health check requests, the health check port is changed from port 80 to port 15020, and the health check path is changed from /index.html to /app-health/nginx/readyz. In addition, an environment variable named ISTIO_KUBE_APP_PROBERS is added to the sidecar container of the pod. The value of this environment variable is the original health check configuration serialized as JSON.

    For applications deployed in ASM, port 15020 is exclusively used for the observability of ASM. Requests that are sent to port 15020 are not intercepted by sidecar proxies. Therefore, health check requests are not affected by the requirements of the mTLS mode. After you configure redirection of health check requests, the pilot-agent service that runs in the sidecar container listens on port 15020. The pilot-agent service receives health check requests from the kubelet service and redirects these requests to the application container based on the value of the ISTIO_KUBE_APP_PROBERS environment variable. This ensures that HTTP health checks work as expected.
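The probe rewrite described above can be sketched in Python. This is an illustrative model of the sidecar injector's behavior, not Istio's actual code; the function name `rewrite_http_probe` and the fixed `timeoutSeconds` of 1 are assumptions for the sketch.

```python
import json

STATUS_PORT = 15020  # pilot-agent status port that is not intercepted by the sidecar

def rewrite_http_probe(container_name, probe):
    """Redirect an httpGet probe to pilot-agent and record the original probe
    under a per-container path so that pilot-agent can replay it locally."""
    app_path = f"/app-health/{container_name}/readyz"
    # ISTIO_KUBE_APP_PROBERS maps the rewritten path to the original probe config.
    probers = {app_path: {"httpGet": probe["httpGet"], "timeoutSeconds": 1}}
    rewritten = dict(probe)
    rewritten["httpGet"] = {
        "path": app_path,
        "port": STATUS_PORT,
        "scheme": "HTTP",
        "httpHeaders": probe["httpGet"].get("httpHeaders", []),
    }
    return rewritten, json.dumps(probers)

# The readiness probe from the http-liveness.yaml example above:
original = {
    "httpGet": {"path": "/index.html", "port": 80, "scheme": "HTTP",
                "httpHeaders": [{"name": "X-Custom-Header", "value": "hello"}]},
    "initialDelaySeconds": 5,
    "periodSeconds": 3,
}
rewritten, probers_env = rewrite_http_probe("nginx", original)
print(rewritten["httpGet"]["port"])   # 15020
print(rewritten["httpGet"]["path"])   # /app-health/nginx/readyz
```

The rewritten probe matches the readinessProbe shown in the pod YAML above, and `probers_env` matches the shape of the ISTIO_KUBE_APP_PROBERS value.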

Configure redirection of TCP health check requests

In this example, an NGINX application is used, and an invalid port is configured for the NGINX application. If you enable TCP health checks for the NGINX application, TCP health checks that should fail are always successful for the application. After you configure redirection of health check requests, TCP health checks for the pod fail. This indicates that redirection of TCP health check requests takes effect for the NGINX application.

Step 1: Deploy an NGINX application

  1. Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  2. Deploy an NGINX application.

    1. Create a tcp-liveness.yaml file that contains the following content.

      Port 2940 is configured as the health check port for the NGINX application. However, the NGINX application does not listen on port 2940. In normal cases, health checks fail for the NGINX application, and the pod is not ready.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        selector:
          matchLabels:
            app: nginx
        replicas: 1
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 80
              readinessProbe:
                tcpSocket:
                  port: 2940
                initialDelaySeconds: 5
                periodSeconds: 3

      The tcpSocket field in the readinessProbe field enables TCP health checks for the NGINX application.

    2. Run the following command to deploy the NGINX application:

      kubectl apply -f tcp-liveness.yaml
  3. View the health check result of the NGINX application.

    1. Run the following command to view the name of the pod that runs the NGINX application:

      kubectl get pod | grep nginx
    2. Run the following command to view the events of the pod:

      kubectl describe pod <Pod name>

      In the command output, no events that indicate a failed health check exist, and the pod is ready. This result is unexpected because the NGINX application does not listen on port 2940.

Step 2: Configure redirection of health check requests for the NGINX application

  1. Run the following command to edit the tcp-liveness.yaml file:

    vim tcp-liveness.yaml

    Add the following content to the template field in the tcp-liveness.yaml file:

    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "true"

    The following code shows an example of the tcp-liveness.yaml file after you add an annotation:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
        spec:
          containers:
          - name: nginx
            image: nginx
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 80
            readinessProbe:
              tcpSocket:
                port: 2940
              initialDelaySeconds: 5
              periodSeconds: 3
  2. Run the following command to deploy the NGINX application:

    kubectl apply -f tcp-liveness.yaml

Step 3: Verify that the health check result meets your expectations

  1. View the health check result of the NGINX application.

    1. Run the following command to view the name of the pod that runs the NGINX application:

      kubectl get pod | grep nginx
    2. Run the following command to view the events of the pod:

      kubectl describe pod <Pod name>

      Expected output:

      Warning  Unhealthy  45s               kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500

      The output indicates that the health check for the pod now fails, as expected, because the NGINX application does not listen on port 2940.

  2. Run the following command to view the YAML file of the pod after you configure redirection of health check requests:

    kubectl get pod nginx-deployment-746458cdc9-m9t9q -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    metadata:
      ...
      name: nginx-deployment-746458cdc9-m9t9q
      namespace: default
      ...
    spec:
      containers:
        - args:
            - proxy
            - sidecar
            - '--domain'
            - $(POD_NAMESPACE).svc.cluster.local
            - '--proxyLogLevel=warning'
            - '--proxyComponentLogLevel=misc:error'
            - '--log_output_level=default:info'
            - '--concurrency'
            - '2'
          env:
            ...
            - name: ISTIO_KUBE_APP_PROBERS
              value: >-
                {"/app-health/nginx/readyz":{"tcpSocket":{"port":2940},"timeoutSeconds":1}}
          ...
        - image: nginx
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
            - containerPort: 80
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /app-health/nginx/readyz
              port: 15020
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 1
          ...

    After you configure redirection of health check requests, the original TCP health checks are converted to HTTP health checks: the health check port is changed from port 2940 to port 15020, and the path /app-health/nginx/readyz is automatically added. In addition, an environment variable named ISTIO_KUBE_APP_PROBERS is added to the sidecar container of the pod. The value of this environment variable is the original TCP health check configuration serialized as JSON.

    As in the redirection configuration for HTTP health check requests, port 15020 receives the converted health check requests. The pilot-agent service that runs in the sidecar container listens on port 15020, receives health check requests from the kubelet service, and checks the TCP health check port that is configured for the application container based on the value of the ISTIO_KUBE_APP_PROBERS environment variable. If the TCP check fails, the pilot-agent service returns a 500 status code, which indicates a failed health check.
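The failure path above can be sketched as follows. This is a minimal illustrative model, not pilot-agent's actual implementation; `tcp_probe_status` is a hypothetical helper that maps a TCP connect attempt to the HTTP status code the rewritten probe would receive.

```python
import socket

def tcp_probe_status(host, port, timeout=1.0):
    """Map a TCP connect attempt to the HTTP status code that a
    pilot-agent-like handler would return for a rewritten TCP probe:
    200 if the port accepts connections, 500 otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 200
    except OSError:  # connection refused, timed out, or unreachable
        return 500
```

A port that nothing listens on, such as the misconfigured port 2940 in this example, causes the connect attempt to be refused, so the handler reports 500 and the kubelet service records a failed readiness probe.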