Background information

When a web container is set up in Container Service and routing is used to forward requests to this container, the request path is as follows: client > DNS resolution > Server Load Balancer VIP > an acsrouting container in the cluster > the web container. This is shown in the following figure.



If a problem occurs at any stage in this process, user requests may not be correctly routed to the web container. Troubleshoot access link issues as follows, starting from the health check of the web containers, where issues are most commonly found.
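
Before going through the individual checks, you can run a quick end-to-end test from a client machine, as in the following example. The domain www.example-domain.com is only a placeholder; replace it with the domain actually bound to the routing service. A returned HTTP status code below 400 suggests the whole path works; a connection failure or error status tells you roughly where to start.

    $ curl -v http://www.example-domain.com/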

Procedure

  1. Check whether or not the container is running.

    Log on to the Container Service console. Click Applications in the left-side navigation pane. Select the cluster from the Cluster list. Click the application name (wordpress-test in this example).
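
    If you have SSH access to a cluster node, you can also check from the command line that the containers of the service are running. This is an optional check that assumes the Docker CLI is available on the node; container names in Container Service typically contain the service name (web in this example), but your naming may differ.

     # List running containers whose names contain "web" and show their status and ports
     docker ps --filter "name=web" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"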



  2. Click the name of the service (web in this example) that provides the web container.


  3. Check the health check status of the container that provides the web service.

    Under the Containers tab, check whether or not all of the containers show Normal for Health Check, as shown in the following figure. If not, click the Logs tab to check the error messages and click the Events tab to check whether any deployment exception occurred. If a health check is configured for the application, you must confirm that the health check page returns status code 200 for the health check status to be normal.
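
    To confirm the returned status code yourself, you can request the health check page directly from a machine in the cluster. The path /health below is only an illustrative placeholder; use the health check URL configured for your application and replace 172.19.0.7 with the container IP shown on the Containers tab. A 200 response means the health check passes.

     curl -i http://172.19.0.7/health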



  4. Check whether or not the web container page responds normally.

    If the health check status of the container is normal, bypass the routing service and check whether the web container is accessible directly. As shown in the preceding figure, you can view the container IP of the web container. Log on to the routing container on a machine in the cluster and use the container IP to request the web container page. If the returned HTTP status code is less than 400, the web container page is normal. In the following example, docker exec -it f171110f2fe2 sh is used, where f171110f2fe2 is the container ID of the acsrouting_routing_1 container, and 172.19.0.7 in curl -v 172.19.0.7 is the container IP address of the web service. The request returns status code 302, indicating that the web container can be accessed normally.

     root@c68a460635b8c405e83c052b7c2057c7b-node2:~# docker ps
     CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
     b403ea045fa1 registry.aliyuncs.com/acs-sample/wordpress:4.5 "/entrypoint.sh apach" 13 seconds ago Up 11 seconds 0.0.0.0:32768->80/tcp w_web_2
     025f7967cec3 registry.aliyuncs.com/acs-sample/mysql:5.7 "/entrypoint.sh mysql" About a minute ago Up About a minute 3306/tcp w_db_1
     2f247b8a76e5 registry.aliyuncs.com/acs/ilogtail:0.9.9 "/bin/sh -c 'sh /usr/" 31 minutes ago Up 31 minutes acslogging_logtail_1
     42b75bee6cd8 registry.aliyuncs.com/acs/monitoring-agent:latest "acs-mon-run.sh --hel" 31 minutes ago Up 31 minutes acsmonitoring_acs-monitoring-agent_2
     0a9afa527f03 registry.aliyuncs.com/acs/volume-driver:0.7-252cb09 "acs-agent volume_exe" 31 minutes ago Up 31 minutes acsvolumedriver_volumedriver_2
     3c1440fd114c registry.aliyuncs.com/acs/logspout:0.1-41e0e21 "/bin/logspout" 32 minutes ago Up 32 minutes acslogging_logspout_1
     f171110f2fe2 registry.aliyuncs.com/acs/routing:0.7-staging "/opt/run.sh" 32 minutes ago Up 32 minutes 127.0.0.1:1936->1936/tcp, 0.0.0.0:9080->80/tcp acsrouting_routing_1
     0bdeb8464c14 registry.aliyuncs.com/acs/agent:0.7-bfe8bdf "acs-agent join --nod" 33 minutes ago Up 33 minutes acs-agent
     ba32a0e9e7fe registry.aliyuncs.com/acs/tunnel-agent:0.21 "/acs/agent -config=c" 33 minutes ago Up 33 minutes tunnel-agent
     root@c68a460635b8c405e83c052b7c2057c7b-node2:~# docker exec -it f171110f2fe2 sh
     / # curl -v 172.19.0.7
     * Rebuilt URL to: 172.19.0.7/
     * Trying 172.19.0.7...
     * Connected to 172.19.0.7 (172.19.0.7) port 80 (#0)
     > GET / HTTP/1.1
     > Host: 172.19.0.7
     > User-Agent: curl/7.47.0
     > Accept: */*
     >
     < HTTP/1.1 302 Found
     < Date: Mon, 09 May 2016 03:19:47 GMT
     < Server: Apache/2.4.10 (Debian) PHP/5.6.21
     < X-Powered-By: PHP/5.6.21
     < Expires: Wed, 11 Jan 1984 05:00:00 GMT
     < Cache-Control: no-cache, must-revalidate, max-age=0
     < Pragma: no-cache
     < Location: http://172.19.0.7/wp-admin/install.php
     < Content-Length: 0
     < Content-Type: text/html; charset=UTF-8
     <
     * Connection #0 to host 172.19.0.7 left intact
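
    If you only need the status code rather than the full response, curl can print it directly. Run the following inside the routing container as above; any value below 400 indicates that the web container page responds normally.

     / # curl -s -o /dev/null -w "%{http_code}\n" 172.19.0.7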
       
  5. Verify the validity of acsrouting.

    First, upgrade routing to the latest version. Then, log on to each machine in the cluster (requests may be routed to any machine, regardless of which machine the application container is deployed on) and request the routing health check page.

    root@c68a460635b8c405e83c052b7c2057c7b-node2:~# curl -Ss -u admin:admin 'http://127.0.0.1:1936/haproxy?stats' &> test.html

    Copy the page test.html to a machine with a browser and open the local file test.html in the browser. Check the corresponding web service and container backends. The first part is the stats information, which provides routing statistics. The second part is the frontend statistics. The third part, the backend information, is the essential part to view. Here, w_web_80_servers indicates the port 80 backend servers of the web service under the application w. In total, three backend servers exist, that is, three containers in the backend provide the web service. Green indicates that the routing container can connect to the three containers and the system works properly; any other color indicates an exception.
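
    If no browser is at hand, the same stats endpoint can usually be queried in CSV form and filtered on the command line. This assumes the routing container exposes the standard HAProxy stats interface on port 1936, as in the example above; the status column should read UP for every backend server of w_web_80_servers.

     curl -Ss -u admin:admin 'http://127.0.0.1:1936/haproxy?stats;csv' | grep w_web_80_servers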



  6. Check whether or not the Server Load Balancer VIP is forwarding data correctly and the health check status is normal.
    1. Find the Server Load Balancer VIP of the cluster. Click Clusters in the left-side navigation pane in the Container Service console.


    2. Click Manage at the right of the cluster (test in this example). Click Load Balancer Settings in the left-side navigation pane. View and copy the Server Load Balancer ID. Click Products > Server Load Balancer to go to the Server Load Balancer console. Click Manage at the right of the Server Load Balancer instance to enter the instance details page.


    3. View the IP address of the Server Load Balancer instance.


    4. View the health status of the Server Load Balancer port. Click Listeners in the left-side navigation pane. The Running status indicates the port works properly.


    5. Check the status of the backend servers mounted to the Server Load Balancer. Click Servers > Backend Servers in the left-side navigation pane. Make sure the Health Check status is Normal.
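
    You can also test the Server Load Balancer forwarding directly, bypassing DNS, by sending a request to the Server Load Balancer IP address with the domain name in the Host header. The IP address 192.0.2.10 and the domain below are placeholders; use the instance IP found above and your own domain. A status code below 400 indicates that the Server Load Balancer forwards requests to the routing containers correctly.

     # 192.0.2.10 stands in for the Server Load Balancer IP address found above
     curl -v -H 'Host: www.example-domain.com' http://192.0.2.10/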


  7. Check whether or not the domain name is correctly resolved to the Server Load Balancer VIP. For example, use the ping or dig command to view the resolution result. The domain name must resolve to the Server Load Balancer VIP address found in the previous step.
    $ ping www.example-domain.com
    $ dig www.example-domain.com
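
    For a more concise check, dig +short prints only the resolution result; the returned A record should be the Server Load Balancer IP address found in the previous step.

    $ dig +short www.example-domain.com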