Nginx reverse proxy and load balancing

1.1 What is a cluster
Simply put, a cluster is a group of mutually independent computers connected by a high-speed network to form a larger computing service system. Each cluster node (that is, each computer in the cluster) runs its own services as an independent server. These servers communicate with each other, cooperate to provide applications, system resources, and data to users, and are managed as a single system. When a client sends a request to the cluster, the cluster appears to the user as a single independent server, while in reality the request is served by a group of servers.

When you open the home page of Google or Baidu, it looks so simple that you might think you could build a similar page in a few minutes. In fact, behind that page is the collaborative work of thousands of clustered servers. Maintaining, managing, and coordinating that many servers may well become your future job responsibility, reader.

If you had to describe a cluster in one sentence: a group of servers cooperating to do the same job. These machines may require an entire technical team to architect, design, and coordinate. They can sit in a single machine room, or be distributed across machine rooms in different regions of the country and the world.

1.2 Why there is a cluster
High performance, cost effectiveness, scalability, high availability

Transparency, manageability, programmability

1.2.1 Cluster types
Load balancing cluster (LB): solves the request scheduling problem

High availability cluster (HA): solves the single point of failure problem (e.g., Keepalived)

High performance computing cluster (HPC), grid computing cluster (GC)

1.2.2 Hardware equipment
F5 and A10 hardware load balancing devices

1.2.3 Software
Nginx (Layer 7; Layer 4 supported since version 1.9 via the stream module), LVS (Layer 4), HAProxy (Layer 4 and Layer 7)

1.2.4 Concept description of load balancing
Schedules and manages user access requests

Spreads the pressure of user access requests across multiple backend servers

1.2.5 Reverse proxy
Receives user requests and accesses the backend servers on the user's behalf

The difference between a reverse proxy and plain packet forwarding: a reverse proxy terminates the client connection and initiates a new request to the backend, while packet forwarding (as in LVS) passes the client's packets through to the backend

1.2.6 Ways of stress testing
ab (the Apache HTTP benchmarking command)

Installed via yum install httpd-tools
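As a quick illustration, assuming httpd-tools is installed, an ab run against a test URL might look like the following (the address and request counts are placeholders):

```shell
# Install the Apache benchmarking tool (CentOS/RHEL)
yum install -y httpd-tools

# Send 1000 requests total, 100 concurrently, to the site under test
# (http://10.0.0.5/ is a placeholder; ab requires the trailing slash or a path)
ab -n 1000 -c 100 http://10.0.0.5/
```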

1.3 nginx reverse proxy practice
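As a minimal sketch of the practice this section covers, a reverse proxy to a single backend can be configured as below (the domain name and backend address are placeholders):

```nginx
server {
    listen       80;
    server_name  www.example.com;

    location / {
        # Forward all requests to the backend web server
        proxy_pass http://10.0.0.8:80;
        # Pass the original Host header so backend virtual hosts work
        proxy_set_header Host $host;
        # Pass the real client IP to the backend
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```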

1.4.1 Module scheduling algorithm
①. Round-robin scheduling algorithm (rr), the default

②. Weighted round-robin scheduling algorithm (wrr)

③. Static scheduling algorithm (ip_hash)

④. Least connections (least_conn)

1.4.2 Two modules related to nginx reverse proxy
The upstream module is like a pool: the backend server nodes are placed in the pool

The proxy module (proxy_pass) takes nodes from the pool and forwards user requests to them
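Putting the two modules together, a minimal sketch (the pool name and server addresses are placeholders):

```nginx
# The upstream block is the "pool" of backend nodes
upstream web_pool {
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}

server {
    listen       80;
    server_name  www.example.com;

    location / {
        # The proxy module pulls nodes from the pool
        proxy_pass http://web_pool;
        proxy_set_header Host $host;
    }
}
```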

1.4.3 Introduction to the core parameters of the upstream module
weight: the weight of the node; nodes with a higher weight receive more requests

max_fails: the number of failed attempts allowed before the node is considered down

fail_timeout: how long the node is marked unavailable after max_fails failures are reached

backup: marks the node as a backup, used only when the primary nodes are unavailable
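These parameters combined in one upstream block (the addresses and values are illustrative):

```nginx
upstream web_pool {
    # Receives roughly twice as many requests as the node below
    server 10.0.0.7:80 weight=2 max_fails=3 fail_timeout=10s;
    server 10.0.0.8:80 weight=1 max_fails=3 fail_timeout=10s;
    # Only used when both nodes above are down
    server 10.0.0.9:80 backup;
}
```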

1.4.10 least_conn parameter
Sends the request to whichever node is least busy

The least_conn algorithm distributes requests according to the number of active connections on each backend node; the node with the fewest connections receives the request.
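It is enabled with a single directive inside the upstream block (the pool name and addresses are placeholders):

```nginx
upstream web_pool {
    # Route each new request to the node with the fewest active connections
    least_conn;
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}
```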

1.4.11 The fair parameter
Sends the request to whichever node responds fastest

This algorithm distributes requests according to the response time of the backend node servers, giving priority to nodes with short response times. It is a smarter scheduling algorithm that can balance load intelligently based on page size and loading time. Nginx itself does not support the fair scheduling algorithm; to use it, you must download and compile in the third-party upstream_fair module.

Examples are as follows:

upstream name {
    server 192.168.1.1;
    server 192.168.1.2;
    fair;
}
In addition to the algorithms above, there are third-party scheduling algorithms such as url_hash and consistent hashing.

1.4.12 Scheduling algorithm
Round-robin scheduling algorithm (rr): the default, distributes requests evenly

Weighted round-robin scheduling algorithm (wrr)

Static scheduling algorithm (ip_hash)

Least connections (least_conn)

1.4.14 Reverse Proxy Troubleshooting Ideas
01. First, test access to the backend nodes directly from lb01

02. Then, test access to the local (proxy) address on lb01

03. Finally, test from the browser

If the browser test fails, check the browser cache and DNS resolution

1.4.15 proxy_next_upstream parameter
When Nginx receives one of the status codes defined by the proxy_next_upstream parameter from a backend server (such as 500, 502, 503, or 504), it retries the request on a normally working backend server instead of returning the error to the user. This parameter improves the user's access experience.
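A sketch of how it is set inside the location block (the pool name is a placeholder):

```nginx
location / {
    proxy_pass http://web_pool;
    # If a backend returns one of these errors or times out, retry the
    # request on the next server in the pool instead of failing the user
    proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
}
```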
