Building an nginx reverse proxy cache server

Proxy services can be broadly divided into forward proxies and reverse proxies:

Forward proxy: proxies connection requests from an internal network out to the Internet (for example, VPN/NAT). The client explicitly specifies the proxy server and sends the HTTP request that would otherwise go directly to the target web server to the proxy instead; the proxy then accesses the web server on the client's behalf and returns the web server's response to the client.

Reverse proxy: the opposite of a forward proxy. If a local area network provides resources to the Internet and allows Internet users to access them, a proxy server can also be set up; the service it provides is a reverse proxy. A reverse proxy server accepts connections from the Internet, forwards each request to a server on the internal network, and returns the response to the Internet client that requested the connection.

1. nginx reverse proxy: a scheduler for web servers

1. A reverse proxy means that the proxy server accepts a client's connection request, forwards the request to a web server on the network (Apache, nginx, Tomcat, IIS, and so on), and returns the result obtained from that server to the client that requested the connection. To the outside world, the proxy server itself appears to be the server.

In other words, the reverse proxy server receives the HTTP request on behalf of the website's web server and forwards it. Moreover, as a reverse proxy, nginx can forward requests to different backend web servers according to the content of the user's request, for example to separate static from dynamic content, or it can host multiple virtual hosts so that entering different domain names (URLs) in the browser reaches different backend web servers or web clusters.
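As a rough sketch of that idea, two server blocks on one nginx instance can route different domain names to different backends. The example.com host names below are placeholders; the IP addresses are the two Apache backends used later in this article:

```nginx
# Hypothetical virtual hosts: each server_name proxies to a different backend.
server {
    listen 80;
    server_name www.example.com;          # placeholder domain
    location / {
        proxy_pass http://192.168.31.141; # dynamic-content backend
    }
}
server {
    listen 80;
    server_name static.example.com;       # placeholder domain
    location / {
        proxy_pass http://192.168.31.250; # static-content backend
    }
}
```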

2. What is the role of reverse proxy?

① Protecting website security: every request from the Internet must first pass through the proxy server;


② Accelerating web requests through caching: static resources of the real web server can be cached on the proxy, reducing the load on the real web server;


③ Implementing load balancing: acting as a load-balancing server that distributes requests evenly, balancing the load across the servers in the cluster;
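A minimal load-balancing sketch, assuming the two Apache backends configured later in this article:

```nginx
# Pool of backend web servers; requests are distributed across its members.
upstream backend_pool {
    server 192.168.31.141:80;
    server 192.168.31.250:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
        # Forward the original host and client address to the backends.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```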


2. What is nginx

1. Introduction to nginx

Nginx is a lightweight web server, reverse proxy, and email proxy server, known for its stability, rich feature set, simple configuration, and low consumption of system resources. Nginx (pronounced "engine x") was developed by Russian programmer Igor Sysoev and was originally used by the large Russian portal and search engine Rambler (Russian: Рамблер). The software is released under a BSD-like license and can run on UNIX, GNU/Linux, BSD, Mac OS X, Solaris, and Microsoft Windows.

Application Status of Nginx

Nginx runs on Rambler Media (www.rambler.ru), the largest portal site in Russia, and more than 20% of virtual hosting platforms in Russia use Nginx as a reverse proxy server.

In China, many websites, such as Taobao, Sina Blog, Sina Podcast, Netease News, Liujianfang, 56.com, Discuz!, Shuimu Community, Douban, YUPOO, Domestic, and Xunlei Online, already use Nginx as a web server or reverse proxy server.

2. The core features of Nginx

(1) Cross-platform: Nginx can be compiled and run on most operating systems, and a Windows port also exists;

(2) Very simple configuration: it is easy to get started.

(3) Non-blocking, highly concurrent connections: official tests show support for 50,000 concurrent connections, and in real production environments it commonly handles 20,000 to 30,000 (thanks to Nginx's use of the epoll event model);

Note:

For a web server, consider the basic flow of a request: establish a connection, receive data, send data. At the system level, each of these steps is a read or write event.

With blocking calls, when a read or write event is not ready the only option is to wait: the current thread is suspended and the operation proceeds only once the event becomes ready.

With non-blocking calls, the call returns immediately, telling you the event is not ready yet, so you check back later. In the meantime you can do other work, but you have to poll the event's status from time to time, and that polling is not free. A non-blocking call simply means the call does not suspend the current thread when the result is not immediately available.

(4) Event-driven: the communication mechanism uses the epoll model, supporting a larger number of concurrent connections.

Non-blocking calls decide whether to perform reads and writes by repeatedly polling event status, which costs a lot of overhead, so asynchronous non-blocking event processing mechanisms were introduced. Such a mechanism lets you monitor multiple events at the same time. The monitoring call itself does not block, but you can set a timeout: within the timeout, the call returns as soon as an event becomes ready. This solves the problems of both blocking calls and polling-based non-blocking calls.

Take the epoll model as an example: when an event is not ready, it is registered with epoll (a queue of sorts) and waits there; when an event becomes ready, it is handled. In this way a large number of concurrent requests can be processed. The concurrent requests here are unfinished requests: there is only one thread, so only one request is actually being processed at any instant. The thread simply keeps switching between requests, yielding voluntarily whenever an asynchronous event is not ready. Switching here costs almost nothing; you can think of it as looping over the set of ready events.

Compared with the multi-threaded approach, this event-driven model has great advantages: no threads need to be created, each request occupies very little memory, and there is no context switching, so event handling stays lightweight and high concurrency does not waste resources on switching. With the Apache model, each request has a dedicated worker thread; when concurrency reaches several thousand, several thousand threads are handling requests at once. This is a big challenge for the operating system: thread memory usage is large and the CPU overhead of thread context switches is high, so performance cannot improve and in high-concurrency scenarios it degrades seriously.

Summary: through its asynchronous, non-blocking event handling, Nginx has each process cycle through many ready events, achieving high concurrency with light weight.

(5) Master/Worker structure: a master process generates one or more worker processes.

Note: the Master-Worker design pattern has two main components, the Master and the Workers. The Master maintains the Worker queue and dispatches requests to multiple Workers for parallel execution; the Workers perform the actual work and return results to the Master.

What does nginx gain from this process model? Independent processes cannot affect one another: after one process exits, the others keep working and the service is not interrupted, while the Master process quickly starts a new Worker. An abnormal Worker exit (necessarily caused by a program bug) makes the requests on that Worker fail, but it does not affect all requests, so the risk is reduced.

(6) Low memory consumption: handling large numbers of concurrent requests takes very little memory; under 30,000 concurrent connections, 10 Nginx processes consume only about 150 MB of memory (15 MB × 10 = 150 MB).

(7) Built-in health checks: if a backend web server behind the Nginx proxy goes down, front-end access is not affected.

(8) Bandwidth savings: GZIP compression is supported, and headers that let the browser cache content locally can be added.

(9) High stability: when used as a reverse proxy, the probability of downtime is minimal.
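Feature (8) above, for example, maps directly onto a handful of directives. A minimal sketch; the file types, compression level, and cache lifetime are illustrative choices, not values prescribed by this article:

```nginx
# In the http block: compress text responses before sending them.
gzip on;
gzip_min_length 1k;      # skip very small responses
gzip_comp_level 5;
gzip_types text/plain text/css application/javascript application/json;

# In a server block: tell browsers to cache static assets locally.
location ~* \.(jpg|png|gif|css|js)$ {
    expires 30d;         # adds Expires/Cache-Control headers
}
```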



3. Nginx + Apache: load balancing for a web server cluster

Configuring nginx as a reverse proxy

Configure nginx as a reverse proxy and load balancer, use its caching function to cache static pages in nginx so as to reduce the number of connections to the backend servers, and check the health of the backend web servers.




1. Install nginx

Environment:

OS: CentOS 7.2

nginx: 192.168.31.83

apache1:192.168.31.141

apache2:192.168.31.250

Install dependencies such as zlib-devel and pcre-devel

[root@www ~]# yum -y install gcc gcc-c++ make libtool zlib zlib-devel pcre pcre-devel openssl openssl-devel

Note:

Combining proxy and upstream modules to achieve back-end web load balancing

Static file caching using proxy module

Health checking of the backend servers is done with nginx's default ngx_http_proxy_module and ngx_http_upstream_module modules; the third-party module nginx_upstream_check_module can also be used

Use the nginx-sticky-module extension module to implement cookie-based session stickiness (session persistence)

Use ngx_cache_purge to achieve more powerful cache purge function

The last two modules (nginx-sticky-module and ngx_cache_purge) are third-party extensions: download their source code in advance, then compile them in with --add-module=src_path.
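Once compiled in, ngx_cache_purge is typically wired up with a dedicated purge location. A hedged sketch: the zone name my-cache is an assumption and must match the keys_zone defined by your proxy_cache_path directive, and the allowed networks are examples only:

```nginx
# Purge a cached object with: curl http://<host>/purge/<uri>
location ~ /purge(/.*) {
    allow 127.0.0.1;        # only trusted hosts may purge
    allow 192.168.31.0/24;
    deny  all;
    proxy_cache_purge my-cache $host$1$is_args$args;
}
```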

Install nginx

[root@www ~]# groupadd www #Add www group

[root@www ~]# useradd -g www www -s /sbin/nologin #Create nginx running account www and add it to the www group, and do not allow www users to log in directly to the system

[root@www ~]# tar zxf nginx-1.10.2.tar.gz

[root@www ~]# tar zxf ngx_cache_purge-2.3.tar.gz

[root@www ~]# tar zxf master.tar.gz

[root@www ~]# cd nginx-1.10.2/

[root@www nginx-1.10.2]# ./configure --prefix=/usr/local/nginx1.10 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre --add-module=../ngx_cache_purge-2.3 --with-http_flv_module --add-module=../nginx-goodies-nginx-sticky-module-ng-08a395c66e42

[root@www nginx-1.10.2]# make && make install

Note: with this build, all nginx modules must be added at compile time; they cannot be loaded dynamically at runtime.

4. Other load-balancing scheduling schemes:

The other scheduling algorithms supported by nginx's load-balancing module are introduced below:

Round robin (default): requests are allocated to the backend servers one by one in order. If a backend server goes down, the faulty system is automatically removed so that user access is not affected. weight specifies the polling weight: the larger the value, the higher the probability of being selected. It is mainly used when the backend servers have uneven performance.

ip_hash: each request is allocated according to a hash of the client IP, so that visitors from the same IP always reach the same backend server, which effectively addresses session sharing for dynamic pages. Of course, if that node becomes unavailable the request is sent to the next node, and without session synchronization the user is logged out.

least_conn: the request is sent to the realserver with the fewest active connections; the weight value is taken into account.

url_hash: requests are allocated according to a hash of the requested URL, so that each URL is directed to the same backend server, which can further improve the efficiency of backend cache servers. Nginx itself does not support url_hash; to use this algorithm you must install the nginx_upstream_hash package.

fair: a smarter load-balancing algorithm than the two above. It balances load according to page size and load time, that is, it allocates requests according to the backend server's response time, preferring servers with short response times. Nginx itself does not support fair; to use this algorithm you must download nginx's upstream_fair module.
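For reference, the built-in algorithms above are selected inside the upstream block. A sketch using this article's backend addresses; the pool names and weight values are invented for illustration:

```nginx
# Round robin with weights (the default algorithm):
upstream web_rr {
    server 192.168.31.141 weight=2;  # receives twice as many requests
    server 192.168.31.250 weight=1;
}

# Session affinity keyed on the client IP:
upstream web_iphash {
    ip_hash;
    server 192.168.31.141;
    server 192.168.31.250;
}

# Fewest active connections wins:
upstream web_least {
    least_conn;
    server 192.168.31.141;
    server 192.168.31.250;
}
```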

5. Load balancing and health check:

Strictly speaking, nginx has no active health check for load-balanced backend nodes, but the relevant directives of the default ngx_http_proxy_module and ngx_http_upstream_module modules provide the equivalent: when a backend node fails, requests automatically switch to the next node.

weight: the polling weight, also usable with ip_hash; the default value is 1.

max_fails: the number of failed requests allowed; defaults to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout: has two meanings. With max_fails=2 and fail_timeout=10s, for example, at most 2 failures are allowed within 10 s, and after 2 failures no requests are allocated to that server for the next 10 s.
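Putting the three directives together, a sketch matching the 2-failures-in-10s example; the pool name and the list of retriable errors are illustrative assumptions:

```nginx
upstream backend_pool {
    server 192.168.31.141 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.31.250 weight=1 max_fails=2 fail_timeout=10s;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
        # Which backend responses count as failures and trigger a
        # retry on the next node:
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }
}
```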

6. Using nginx's proxy cache:

Caching stores JS, CSS, images, and other static files from the backend servers in a cache directory specified by nginx, which both reduces the load on the backend servers and speeds up access. Clearing the cache in time then becomes a problem, which is why the ngx_cache_purge module is needed to purge the cache manually before entries expire.

The commonly used directives in the proxy module are proxy_pass and proxy_cache.

The web caching function of nginx is implemented mainly by the proxy_cache and fastcgi_cache directive sets and their related directives. proxy_cache handles the reverse-proxy caching of static content from the backend servers, while fastcgi_cache is mainly used to cache FastCGI dynamic content.
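A minimal caching sketch; the zone name, cache path, sizes, matched file types, and validity times are illustrative choices, not values prescribed by this article, and backend_pool is an assumed upstream name:

```nginx
http {
    # Cache storage: keys in 100 MB of shared memory, up to 1 GB on disk;
    # entries unused for 600 minutes are evicted.
    proxy_cache_path /usr/local/nginx1.10/proxy_cache levels=1:2
                     keys_zone=my-cache:100m max_size=1g inactive=600m;

    server {
        listen 80;
        location ~* \.(jpg|png|gif|css|js)$ {
            proxy_pass http://backend_pool;       # assumed upstream name
            proxy_cache my-cache;
            proxy_cache_key $host$uri$is_args$args;
            proxy_cache_valid 200 304 12h;        # cache good responses for 12 h
            add_header X-Cache $upstream_cache_status;  # HIT/MISS, for debugging
        }
    }
}
```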
