
Routing and Server Load Balancer between services in a cluster

Last Updated: Feb 13, 2018

Swarm mode clusters have a built-in Server Load Balancer. By default, Docker creates an ingress network between the nodes, and services on the container network can be exposed to the host network through the ingress load balancer so that they can be discovered quickly by external requests. Access by service name or alias is also supported.
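As a minimal sketch of publishing a service through the ingress load balancer and reaching it by service name, the following commands assume placeholder names (appnet, web, api) and that both services are attached to the same overlay network:

  # Create an overlay network for service-to-service communication (the name is illustrative).
  docker network create --driver overlay appnet

  # Publish port 80 of the nginx containers on port 8888 of every node (routing mesh).
  docker service create --name web --network appnet --replicas 2 --publish 8888:80 nginx:latest

  # A second service on the same overlay network can reach "web" by its service name.
  docker service create --name api --network appnet alpine:latest sleep 86400
  # Run this on a node where an api task is scheduled; swarm DNS resolves "web" to the service VIP.
  docker exec $(docker ps -q -f name=api) wget -qO- http://web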

Swarm mode clusters support health checks through the healthcheck mechanism. When the service container on the node acting as the access point fails, the ingress load balancer automatically routes requests to an active container replica on another node. This provides automatic load balancing for the service and, ultimately, high availability for the cluster. For more information, see Routing mesh.
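As a rough sketch, health check parameters can be set when the service is created so that unhealthy replicas are taken out of the load-balancing rotation. The flags below are standard docker service create options; the check command itself is only an illustration and assumes curl is available inside the image:

  # Replicas that fail the health check stop receiving traffic from the ingress load balancer.
  docker service create --name web --replicas 2 --publish 8888:80 \
    --health-cmd "curl -f http://localhost/ || exit 1" \
    --health-interval 10s \
    --health-retries 3 \
    nginx:latest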

Implementation principle

  • By default, a swarm mode cluster creates an ingress network, which handles communication between nodes and the containers inside them along the access path of user request > node IP:port > ingress > container. Ingress is implemented with IPVS and provides both load balancing and name resolution (see the inspection commands after this list).

  • The ingress load balancer sets rules in the node's iptables, in the iptables and IPVS tables of the ingress_sbox namespace, and in the iptables of the service containers, so that container network services can be reached from the host network through the host IP address and the published ports.
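To look at this network on a node, you can inspect the built-in ingress network with standard Docker commands; the output is not reproduced here and will vary by cluster.

  # List the networks on the node; "ingress" is the built-in swarm overlay used by the routing mesh.
  docker network ls

  # Show the subnet, gateway, and the endpoints attached to the ingress network on this node.
  docker network inspect ingress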

You can use the Alibaba Cloud Server Load Balancer service to configure an external load balancer for a swarm mode cluster, which routes external requests to the target cluster and provides service discovery and load balancing on the Internet.

Routing mesh

The preceding figure illustrates the routing mesh. The data flow from the published port on the node to the target port in the container is as follows.

  1. The host network accesses the service by using a worker node's IP address and the service's published port. In this example, --publish 8888:80 is specified, so the service can be accessed at workerIP:8888.

  2. The NAT table in the worker node's iptables defines the rules: the destination IP address of packets matching the published host port (8888) is translated to the IP address inside ingress_sbox.

  3. In the node's ingress_sbox namespace, the mangle table marks (fwmark) packets whose destination port is 8888. IPVS then load balances packets carrying the fwmark across the actual container IP addresses; the round-robin algorithm is used by default and forwarding is in VS/NAT mode (see the inspection commands after this list).

  4. Containers in the service translate the destination port of packets arriving on port 8888 to 80, so that the actual service is reached.
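To inspect these rules on a worker node, you can enter the ingress_sbox network namespace and list its iptables and IPVS tables. This is a rough sketch: it assumes root access, that ipvsadm is installed on the node, and that the namespace file is exposed at /var/run/docker/netns/ingress_sbox (the usual location, but it may differ).

  # List the mangle-table rules that mark (fwmark) packets destined for the published port.
  nsenter --net=/var/run/docker/netns/ingress_sbox iptables -t mangle -L -n

  # List the IPVS virtual servers and the real (container) backends behind each fwmark.
  nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -ln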

Advantages

  • The timeliness problem of traditional DNS round robin is eliminated, UDP and other protocols are supported, and performance is better than DNS-based load balancing.

  • Any node in the cluster can act as the access point, and the service is reachable through the published port on any host.

Orchestration example

Two methods are available for you to set ports:

  • You can manually specify an unused port (-p <PUBLISHED-PORT>:<TARGET-PORT> or --publish <PUBLISHED-PORT>:<TARGET-PORT>).
  • Or you can leave it to Docker Swarm to assign a port in the range 30000–32767, as shown in the sketch after this list.
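The following commands sketch both methods; the service names and image are placeholders:

  # Method 1: publish container port 80 on the fixed port 8888 of every node.
  docker service create --name web --publish published=8888,target=80 nginx:latest

  # Method 2: publish container port 80 and let Docker Swarm assign a port in the 30000-32767 range.
  docker service create --name web2 --publish target=80 nginx:latest

  # The PORTS column shows the published port that was assigned to web2.
  docker service ls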

Note: Make sure that port 7946 (TCP/UDP) is reserved for container network discovery and that port 4789 (UDP) is reserved so that the ingress network can function normally.
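If a host firewall runs on the nodes, these ports must also be allowed between them. The following is a minimal sketch assuming firewalld; adjust it for your firewall or for the security group rules of the cluster:

  # Allow container network discovery (7946 TCP/UDP) and ingress overlay traffic (VXLAN, 4789 UDP).
  firewall-cmd --permanent --add-port=7946/tcp
  firewall-cmd --permanent --add-port=7946/udp
  firewall-cmd --permanent --add-port=4789/udp
  firewall-cmd --reload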

The orchestration example is as follows:

  version: "3"
  services:
    ngx:
      image: nginx:latest
      ports:
        - 8888:80

In this example, port 80 of the Nginx container is exposed on port 8888 of every node in the cluster. External requests to port 8888 on any node are automatically routed to an Nginx container running in the cluster.
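As a usage sketch, assuming the example above is saved as docker-compose.yml (the stack name ngx is arbitrary):

  # Deploy the compose file as a swarm stack.
  docker stack deploy -c docker-compose.yml ngx

  # Any node IP address works as the access point; 8888 is the published port defined above.
  curl http://<node-ip>:8888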
