Nginx in one article

1. Performance monster - Nginx concept explained in simple language

Nginx is the mainstream choice in today's load-balancing landscape, and almost every sizable project uses it. Nginx is a lightweight, high-performance HTTP reverse proxy server, and it is also a general-purpose proxy server that supports most protocols, such as TCP, UDP, SMTP, and HTTPS.

Like the Redis discussed earlier, Nginx is built on an I/O multiplexing model, so it shares Redis's characteristics of low resource consumption and high concurrency. In theory, a single Nginx node can sustain 50,000 concurrent connections, and in a real production environment this figure is indeed achievable with adequate hardware plus some simple tuning.
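That concurrency ceiling is governed mainly by the worker settings in nginx.conf; a minimal sketch of the relevant directives (the values shown are illustrative, not from this article's setup):

```nginx
# Rough upper bound on concurrent clients ≈ worker_processes × worker_connections
worker_processes auto;          # spawn one worker process per CPU core

events {
    worker_connections 10240;   # max connections each worker may hold open
    use epoll;                  # the multiplexing mechanism on Linux
}
```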
Let's first compare how a client request is processed before and after Nginx is introduced:

Originally, the client requested the target server directly, and that server handled the request itself. After Nginx is added, every request first passes through Nginx, which distributes it to a specific server for processing; the result is returned to Nginx, and Nginx finally sends the response back to the client.
After covering the basic concepts of Nginx, let's quickly set up an environment and then explore some of Nginx's advanced features, such as dynamic/static separation, resource compression, cache configuration, IP blacklists, and high-availability guarantees.
2. Nginx environment setup
❶ First create the Nginx directory and enter it:
[root@localhost]# mkdir /soft && mkdir /soft/nginx/
[root@localhost]# cd /soft/nginx/

❷ Download the Nginx source package. You can upload it to an offline environment via an FTP tool, or fetch it online with the wget command:
[root@localhost]# wget

If the wget command is missing, install it via yum:
[root@localhost]# yum -y install wget

❸ Extract the Nginx archive:
[root@localhost]# tar -xvzf nginx-1.21.6.tar.gz

❹ Download and install the dependent libraries and packages required by Nginx:
[root@localhost]# yum install --downloadonly --downloaddir=/soft/nginx/ gcc-c++
[root@localhost]# yum install --downloadonly --downloaddir=/soft/nginx/ pcre pcre-devel
[root@localhost]# yum install --downloadonly --downloaddir=/soft/nginx/ zlib zlib-devel
[root@localhost]# yum install --downloadonly --downloaddir=/soft/nginx/ openssl openssl-devel

Alternatively, install everything directly with a single yum command (the offline-download method above is recommended):
[root@localhost]# yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel

After the commands finish, run ls in the directory and you will see the many downloaded dependency packages:

Then install the dependency packages one by one with the rpm command, or install them all at once with the following command:
[root@localhost]# rpm -ivh --nodeps *.rpm

❺ Enter the extracted nginx directory, then run the Nginx configure script to prepare the environment for the subsequent installation. By default the install prefix is /usr/local/nginx/, but the directory can be customized:
[root@localhost]# cd nginx-1.21.6
[root@localhost]# ./configure --prefix=/soft/nginx/

❻ Compile and install Nginx:
[root@localhost]# make && make install

❼ Finally, return to the /soft/nginx/ directory and run ls to see the files generated by the nginx installation.
❽ Modify the nginx.conf configuration file in the conf directory generated after installation:
[root@localhost]# vi conf/nginx.conf
Modify the port number: listen 80;
Modify the server name: set server_name to the local IP of your current machine (use a domain name in production);
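The two changes above land in the server block of nginx.conf; a minimal sketch (the IP shown is a placeholder for your own machine's address):

```nginx
server {
    listen       80;               # the port Nginx listens on
    server_name  192.168.0.100;    # placeholder: your machine's IP (or a domain in production)
}
```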

❾ Start Nginx with the specified configuration file:
[root@localhost]# sbin/nginx -c conf/nginx.conf
[root@localhost]# ps aux | grep nginx

Other Nginx operation commands:
sbin/nginx -t -c conf/nginx.conf # Check whether the configuration file is normal
sbin/nginx -s reload -c conf/nginx.conf # Smooth restart after modifying the configuration
sbin/nginx -s quit # Shut down Nginx gracefully, it will exit after completing the current task
sbin/nginx -s stop # Forcibly terminate Nginx, regardless of whether there are tasks currently executing

❿ Open port 80 and update the firewall:
[root@localhost]# firewall-cmd --zone=public --add-port=80/tcp --permanent
[root@localhost]# firewall-cmd --reload
[root@localhost]# firewall-cmd --zone=public --list-ports

⓫ In a browser on Windows/Mac, enter the IP address you just configured to access Nginx:

If you see the Nginx welcome page shown above, the Nginx installation is complete.
3. Nginx reverse proxy - load balancing
First, quickly build a web project with SpringBoot + Freemarker: springboot-web-nginx. In this project, create an IndexNginxController class with the following logic:
@Controller
public class IndexNginxController {
    // Injected from the configuration file (server.port)
    @Value("${server.port}")
    private String port;

    @RequestMapping("/")
    public ModelAndView index(){
        ModelAndView model = new ModelAndView("index");
        model.addObject("port", port);
        return model;
    }
}
In this Controller there is a member variable, port, whose value is taken from server.port in the configuration file. When a request hits the / resource, it renders the front-end index page and passes the port value along with it.
The front-end index.ftl file is as follows:

<html>
  <head>
    <title>Nginx demo page</title>
  </head>
  <body>
    Welcome to the Panda Club, my name is Bamboo ${port}!
  </body>
</html>

As you can see, the logic is simple: the page just renders the port value carried in the response.
OK, with the groundwork in place, make a few simple changes to nginx.conf:
upstream nginx_boot {
    # Health check: up to two failed attempts within 30s marks a machine as down.
    # Requests are distributed at a weight ratio of 1:2.
    # Replace WEB_SERVER_IP with the IP of the machine running your WEB service.
    server WEB_SERVER_IP:8080 weight=100 max_fails=2 fail_timeout=30s;
    server WEB_SERVER_IP:8090 weight=200 max_fails=2 fail_timeout=30s;
}
server {
    location / {
        root html;
        # Add index.ftl to the list of default index pages.
        index index.html index.htm index.jsp index.ftl;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Hand the request over to the upstream named nginx_boot.
        proxy_pass http://nginx_boot;
    }
}

At this point all the groundwork is done. Start Nginx, then start the two web services: before starting the first, set its port to 8080 in the configuration file; before starting the second, set its port to 8090.

Finally, take a look at the effect:

Because distribution weights are configured and the weight ratio of 8080 to 8090 is 1:2, requests are spread across the machines according to that ratio: 8080 once, 8090 twice, 8080 once, and so on.
Nginx request distribution principle
The address in the client's request is eventually resolved to the server's IP, and the request is then made to that target IP. The process is as follows:

Since Nginx listens on port 80 of that machine, the request ultimately reaches the Nginx process;
Nginx first matches against the configured location rules; for the client's request path /, it matches the location / {} rule;
Then, per the proxy_pass configured in that location, it finds the upstream named nginx_boot;
Finally, based on the upstream configuration, the request is forwarded to one of the machines running the WEB service. Because multiple WEB services are configured with weights, Nginx dispatches requests in turn according to the weight ratio.
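For reference, weighted round robin is only Nginx's default strategy; the upstream block also supports other built-in strategies. A sketch, not part of the original setup (WEB_SERVER_IP is a placeholder for your backend's address):

```nginx
upstream nginx_boot {
    # ip_hash: requests from the same client IP always hit the same backend,
    # which is useful when sessions are stored locally on each server.
    ip_hash;
    server WEB_SERVER_IP:8080;
    server WEB_SERVER_IP:8090;
}
```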
