
How to troubleshoot access link issues?

Last Updated: Feb 05, 2018

When a web container is set up in Container Service and routing is used to forward requests to this container, a request travels the following link: client > DNS resolution > Server Load Balancer VIP > an acsrouting container in the cluster > the web container. This is shown in the following figure.


If a problem occurs at any stage of this link, user requests might not be correctly routed to the web container. Troubleshoot access link issues as follows, starting from the health checks of the web containers, where issues are most often found.

  1. Check whether or not the container is running.

    1. Log on to the Container Service console.

    2. Click Applications in the left-side navigation pane.

    3. Select the cluster from the Cluster list.

    4. Click the application name (wordpress-test in this example).


  2. Click the name of the service (web in this example) that provides the web container.


  3. Check the health check status of the container that provides the web service.

    Under the Containers tab, check whether all of the containers show Normal for Health Check. If not, click the Logs tab to check the error messages and click the Events tab to check whether any deployment exceptions have occurred. If a health check is set for the application, the health check page must return the status code 200 for the health check status to be normal.
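    You can verify the returned status code from the command line. The sketch below assumes a hypothetical check path and container IP (substitute the path and address configured in your health check); only a code of exactly 200 counts as healthy.

```shell
# http_check prints "pass" only for status code 200, the single code the
# Container Service health check treats as healthy.
http_check() {
  if [ "$1" -eq 200 ]; then echo "pass"; else echo "fail"; fi
}

# Live usage (the check path and container IP are placeholders for your setup):
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://<container-ip>/health)
#   http_check "$code"
http_check 200
```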


  4. Check whether or not the web container page responds normally.

    If the health check status of the container is normal, bypass the routing service and check whether the web container is directly accessible. As shown in the preceding figure, you can view the container IP of a web container. Log on to the routing container of a machine in the cluster and use the container IP to request the web container page. If the returned HTTP status code is less than 400, the web container page is normal. In the following example, docker exec -it f171110f2fe2 sh is used, where f171110f2fe2 is the container ID of the container acsrouting_routing_1, and the address passed to curl -v is the container IP address of a web service. The request returns the status code 302, indicating that the web container can be accessed normally.

    root@c68a460635b8c405e83c052b7c2057c7b-node2:~# docker ps
    b403ea045fa1 registry.aliyuncs.com/acs-sample/wordpress:4.5 "/entrypoint.sh apach" 13 seconds ago Up 11 seconds >80/tcp w_web_2
    025f7967cec3 registry.aliyuncs.com/acs-sample/mysql:5.7 "/entrypoint.sh mysql" About a minute ago Up About a minute 3306/tcp w_db_1
    2f247b8a76e5 registry.aliyuncs.com/acs/ilogtail:0.9.9 "/bin/sh -c 'sh /usr/" 31 minutes ago Up 31 minutes acslogging_logtail_1
    42b75bee6cd8 registry.aliyuncs.com/acs/monitoring-agent:latest "acs-mon-run.sh --hel" 31 minutes ago Up 31 minutes acsmonitoring_acs-monitoring-agent_2
    0a9afa527f03 registry.aliyuncs.com/acs/volume-driver:0.7-252cb09 "acs-agent volume_exe" 31 minutes ago Up 31 minutes acsvolumedriver_volumedriver_2
    3c1440fd114c registry.aliyuncs.com/acs/logspout:0.1-41e0e21 "/bin/logspout" 32 minutes ago Up 32 minutes acslogging_logspout_1
    f171110f2fe2 registry.aliyuncs.com/acs/routing:0.7-staging "/opt/run.sh" 32 minutes ago Up 32 minutes >1936/tcp, >80/tcp acsrouting_routing_1
    0bdeb8464c14 registry.aliyuncs.com/acs/agent:0.7-bfe8bdf "acs-agent join --nod" 33 minutes ago Up 33 minutes acs-agent
    ba32a0e9e7fe registry.aliyuncs.com/acs/tunnel-agent:0.21 "/acs/agent -config=c" 33 minutes ago Up 33 minutes tunnel-agent
    root@c68a460635b8c405e83c052b7c2057c7b-node2:~# docker exec -it f171110f2fe2 sh
    / # curl -v
    * Rebuilt URL to:
    * Trying
    * Connected to ( port 80 (#0)
    > GET / HTTP/1.1
    > Host:
    > User-Agent: curl/7.47.0
    > Accept: */*
    >
    < HTTP/1.1 302 Found
    < Date: Mon, 09 May 2016 03:19:47 GMT
    < Server: Apache/2.4.10 (Debian) PHP/5.6.21
    < X-Powered-By: PHP/5.6.21
    < Expires: Wed, 11 Jan 1984 05:00:00 GMT
    < Cache-Control: no-cache, must-revalidate, max-age=0
    < Pragma: no-cache
    < Location:
    < Content-Length: 0
    < Content-Type: text/html; charset=UTF-8
    <
    * Connection #0 to host left intact
    / #
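    The check above can be scripted so that it does not depend on reading verbose curl output. This is a minimal sketch: the container ID and IP in the comments are placeholders to substitute with your own values, and the status-code rule mirrors this step (any code below 400 is considered normal).

```shell
# classify_http_status applies the rule from this step: any HTTP status code
# below 400 (for example 200 or 302) means the web container responds normally.
classify_http_status() {
  if [ "$1" -lt 400 ]; then echo "normal"; else echo "error"; fi
}

# Live usage (container ID and IP are placeholders; substitute your own):
#   status=$(docker exec f171110f2fe2 \
#     curl -s -o /dev/null -w '%{http_code}' http://<container-ip>/)
#   classify_http_status "$status"
classify_http_status 302
```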
  5. Verify the validity of acsrouting.

    First, upgrade the routing service to the latest version. Then log on to each machine in the cluster (any machine might receive requests, no matter which machine the application container is deployed on) and request the routing health check page.

    root@c68a460635b8c405e83c052b7c2057c7b-node2:~# curl -Ss -u admin:admin '' &> test.html

    Copy the page test.html to a machine with a browser and open the local file test.html in the browser. Check the corresponding web service and container backends. The first part is the stats information, providing routing statistics. The second part is the frontend statistics. The third part, which provides the backend information, is the essential part to view. Here, w_web_80_servers indicates the information for the port 80 backend servers of the service web under the application w. In total, three backend servers exist, that is, three containers provide the web service. Green indicates that the routing container can connect to the three containers and the system works properly. Any other color indicates an exception.
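    The same backend status can be read without a browser. This sketch assumes the routing container is HAProxy-based and also serves its statistics in HAProxy's CSV form on the stats endpoint with the admin:admin credentials used above; if your routing version differs, only the browser method described in this step is guaranteed.

```shell
# parse_backend_status reads HAProxy-style CSV statistics on stdin and prints
# "<backend> <server> <status>" for each backend server row. Backend names for
# web services end in "_servers" (for example w_web_80_servers); the status
# column is field 18 in HAProxy's CSV layout.
parse_backend_status() {
  awk -F, '$1 ~ /_servers$/ && $2 != "BACKEND" { print $1, $2, $18 }'
}

# Live usage (assumed endpoint and credentials, mirroring the curl above);
# every server should report UP - anything else marks an unhealthy backend:
#   curl -Ss -u admin:admin 'http://127.0.0.1:1936/;csv' | parse_backend_status
```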


  6. Check whether or not the Server Load Balancer VIP is forwarding data correctly and the health check status is normal.

    1. Find the Server Load Balancer VIP of the cluster. Click Clusters in the left-side navigation pane in the Container Service console.

    2. Click Manage at the right of the cluster (test in this example).


    3. Click Load Balancer Settings in the left-side navigation pane. View and copy the Server Load Balancer ID.


    4. Click Products > Server Load Balancer to go to the Server Load Balancer console.

    5. Click Manage at the right of the Server Load Balancer instance to enter the instance details page.

    6. View the IP address of the Server Load Balancer instance.


    7. View the health status of the Server Load Balancer port. Click Listeners in the left-side navigation pane. The Running status indicates the port works properly.


    8. Check the status of the backend servers mounted to Server Load Balancer. Click Servers > Backend Servers in the left-side navigation pane. Make sure the Health Check status is Normal.


  7. Check whether or not the domain name is correctly resolved to the Server Load Balancer VIP. For example, use the ping or dig command to view the resolution result. The domain name must be resolved and directed to the Server Load Balancer VIP address found in the previous step.

    $ ping www.example-domain.com
    $ dig www.example-domain.com
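    The comparison in this step can be sketched as a small helper. The domain and VIP below are placeholders; substitute your own domain and the Server Load Balancer VIP found in the previous step.

```shell
# verify_dns compares the address a domain resolves to against the expected
# Server Load Balancer VIP (all values here are placeholders).
verify_dns() {
  domain=$1; expected_vip=$2; resolved=$3
  if [ "$resolved" = "$expected_vip" ]; then
    echo "DNS OK: $domain -> $resolved"
  else
    echo "DNS mismatch: $domain -> $resolved (expected $expected_vip)"
  fi
}

# Live usage, resolving with dig:
#   verify_dns www.example-domain.com 203.0.113.10 \
#     "$(dig +short www.example-domain.com | tail -n1)"
verify_dns www.example-domain.com 203.0.113.10 203.0.113.10
```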