Charlene
Assistant Engineer

TCP load balancing configurations of Alibaba Cloud container service

Posted time: Dec 16, 2016 9:37 AM
When using Alibaba Cloud Container Service, you may run into the following problem in TCP load balancing scenarios: if the client container and the server container of an application are deployed on the same node (ECS instance), the client cannot reach its own server through the SLB, because of an SLB limitation (a backend server cannot access services through the SLB instance it sits behind). This article takes the common TCP-based Redis as an example and solves the problem step by step, while also helping you understand the concepts of Container Service.
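To see whether a client can actually reach the Redis server through the SLB address, a small TCP connectivity probe helps. This is a hypothetical diagnostic helper, not part of Container Service; the host and port in the commented example are the sample values used later in this article:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the SLB address the Redis service is published on.
# Run from a client on the same node as the server, this is expected to fail.
# can_connect("120.25.131.64", 6379)
```

Running the probe from containers on different nodes quickly shows which placements are affected by the limitation.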
Solution 1: Use container scheduling to avoid deploying the client and server containers on the same node.
Example application template (using the lb label and the swarm filter feature)
redis-master:
    ports:
      - 6379:6379/tcp
    image: 'redis:alpine'
    labels:
        aliyun.lb.port_6379: tcp://proxy_test:6379
redis-client:
    image: 'redis:alpine'
    links:
      - redis-master
    environment:
      # Keep the client off any node that already runs a container
      # carrying the lb label of the Redis server.
      - 'affinity:aliyun.lb.port_6379!=tcp://proxy_test:6379'
    command: redis-cli -h 120.25.131.64
    stdin_open: true
    tty: true


Notes:
• If the scheduling does not take effect, go to the Service List, select the service to be rescheduled, and choose Re-schedule > Force Re-scheduling.
• Force Re-scheduling discards the existing container volumes, so back up any files you need before migrating.
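The affinity constraint in the template above works by comparing labels: the scheduler skips any node that already runs a container whose label matches the `!=` expression. A simplified model of that check (an illustrative sketch, not Swarm's actual implementation):

```python
def violates_affinity(existing_labels: dict, constraint: str) -> bool:
    """Return True if a node whose containers carry existing_labels
    would violate a swarm-style '!=' affinity constraint."""
    # e.g. 'affinity:aliyun.lb.port_6379!=tcp://proxy_test:6379'
    body = constraint.split("affinity:", 1)[1]
    key, value = body.split("!=", 1)
    return existing_labels.get(key) == value

# A node already running redis-master (which carries the lb label) is rejected:
violates_affinity({"aliyun.lb.port_6379": "tcp://proxy_test:6379"},
                  "affinity:aliyun.lb.port_6379!=tcp://proxy_test:6379")
# → True (the client must not land on this node)
```

An empty node carries no matching label, so the client is free to schedule there.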
Solution 2: Inside the container cluster, have the client access the server through a link; outside the cluster, access the server through the SLB.
Example application template (using the lb label)
redis-master:
    ports:
      - 6379:6379/tcp
    image: 'redis:alpine'
    labels:
        aliyun.lb.port_6379: tcp://proxy_test:6379
redis-client:
    image: 'redis:alpine'
    links:
      - redis-master
    command: redis-cli -h redis-master
    stdin_open: true
    tty: true
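The point of this template is that the address a client should use depends on where it runs: inside the cluster, the Docker link makes the service name resolvable, while external clients must go through the SLB. A tiny selector sketch (the SLB IP below is the example address from Solution 1 and is an assumption here):

```python
def redis_endpoint(inside_cluster: bool) -> str:
    # Inside the cluster, the link makes the service name 'redis-master'
    # resolvable; outside, clients must use the SLB address bound by the lb label.
    return "redis-master" if inside_cluster else "120.25.131.64"

redis_endpoint(True)   # → 'redis-master'
redis_endpoint(False)  # → '120.25.131.64'
```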


Solution 3: Inside the container cluster, have the client access the server through the custom routing service (based on HAProxy) as a proxy; outside the cluster, access the server through the SLB.
Example application template (using the lb label and the custom routing image)
lb:
    image: registry.aliyuncs.com/acs/proxy:0.5
    ports:
      - '6379:6379/tcp'
    restart: always
    labels:
        # This add-on gives the proxy image subscription/registration-center
        # capability so that it can dynamically load service routes.
        aliyun.custom_addon: "proxy"
        # Deploy one container of this image on every node (VM).
        aliyun.global: "true"
        # Bind the SLB at the front end, using the lb label.
        aliyun.lb.port_6379: tcp://proxy_test:6379
        # Start the custom routing only after the master and slave services
        # have started; declares a dependency on both.
        aliyun.depends: redis-master,redis-slave
    environment:
        # The scope of backend containers whose routes are loaded. "*" means
        # the entire cluster; the default is the services in this application.
        ADDITIONAL_SERVICES: "*"
        EXTRA_DEFAULT_SETTINGS: "log rsyslog local0,log global,option httplog"
        # Run HAProxy in TCP mode.
        MODE: "tcp"
    links:
        - rsyslog:rsyslog
rsyslog:
    image: registry.cn-hangzhou.aliyuncs.com/linhuatest/rsyslog:latest
redis-master:
    ports:
      - 6379/tcp
    image: 'redis:alpine'
    labels:
        # Tell the custom routing to expose port 6379.
        aliyun.proxy.TCP_PORTS: "6379"
        # Add this service's routes to the custom routing service.
        aliyun.proxy.required: "true"
redis-slave:
    ports:
      - 6379/tcp
    image: 'redis:alpine'
    links:
      - redis-master
    labels:
        # Tell the custom routing to expose port 6379.
        aliyun.proxy.TCP_PORTS: "6379"
        # Add this service's routes to the custom routing service.
        aliyun.proxy.required: "true"
        # Start the slave only after the master has started; declares a
        # dependency on the master.
        aliyun.depends: redis-master
    command: redis-server --slaveof redis-master 6379
redis-client:
    image: 'redis:alpine'
    links:
      - lb:www.example.com
    labels:
        aliyun.depends: lb
    command: redis-cli -h www.example.com
    stdin_open: true
    tty: true


By means of the custom routing image, this solution implements a Redis master/slave architecture with load balancing, providing a degree of high availability.
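The TCP load balancing done by the HAProxy-based routing container can be pictured as rotating incoming connections across the Redis backends. A minimal round-robin sketch (illustrative only; HAProxy's actual balancing algorithm is configurable and health-check aware):

```python
import itertools

# The two backends the custom routing exposes on port 6379.
backends = ["redis-master:6379", "redis-slave:6379"]
rr = itertools.cycle(backends)

# Four successive connections are spread alternately across the backends.
picks = [next(rr) for _ in range(4)]
# → ['redis-master:6379', 'redis-slave:6379',
#    'redis-master:6379', 'redis-slave:6379']
```

In the real deployment, HAProxy also removes a backend from rotation when its health check fails, which is what gives the setup its high-availability property.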