This topic describes how to use NGINX as a proxy for Apsara File Storage NAS.

Background information

NGINX is a lightweight, high-performance web server with a rich feature set. One of the most common ways to use NGINX is as a reverse proxy. A reverse proxy accepts connection requests from clients over the Internet, forwards these requests to a server that resides in an internal network, and returns the responses from that server to the clients. Because the proxy server acts on behalf of the backend server in this case, it is called a reverse proxy.

An application server that resides in a private network cannot be accessed directly by clients outside that network. In this case, a reverse proxy is required to serve as an intermediary between the application server and the clients. The reverse proxy resides in the same private network as the application server but can be accessed by clients outside the network. The reverse proxy and the application server can even run on the same physical server, as long as they listen on different ports.
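For illustration only, the following minimal server block shows a reverse proxy that listens on port 80 and forwards client requests to an application server that listens on port 8080 on the same host. The backend address is a hypothetical placeholder and is not part of the deployment described in this topic.

  # Hypothetical example: the reverse proxy listens on port 80 and forwards
  # requests to an application server on port 8080 on the same host.
  server {
      listen 80;
      location / {
          proxy_pass http://127.0.0.1:8080;
      }
  }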

The following example uses one NGINX server as a reverse proxy, four NGINX servers as proxy servers, and Apsara File Storage NAS as the backend storage. Apsara File Storage NAS stores the cache files of the proxy servers, as well as back-to-origin files and static data files uploaded by end users. Apsara File Storage NAS allows shared access to the same file system from different proxy servers. This keeps data synchronized between the proxy servers and ensures data consistency. It also prevents the servers from repeatedly retrieving the same files from the origin and makes efficient use of bandwidth. The following figure shows the network topology.

Network topology

You can create an environment as shown in the preceding topology by following the instructions in this topic. An ECS instance that runs CentOS is used as an example.

Step 1: Deploy an NGINX reverse proxy

  1. Install NGINX.
    yum install nginx
  2. Configure a reverse proxy that points to the proxy servers.
    1. Use the following command to open the /etc/nginx/nginx.conf file.
      vim /etc/nginx/nginx.conf
    2. In the /etc/nginx/nginx.conf file, configure the http context so that requests are distributed across the four proxy servers. The following code provides an example. After you save the file, check and reload the configuration as shown in the sketch after this list.
      http {
          upstream web {
              server 10.10.0.10;
              server 10.10.0.11;
              server 10.10.0.12;
              server 10.10.0.13;
          }
          server {
              listen 80;
              location / {
                  proxy_pass http://web;
              }
          }
      }
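After you update the configuration file, verify the syntax and reload NGINX so that the new configuration takes effect. The following commands are a minimal sketch and assume that the nginx service is managed by systemd, which is the default on recent CentOS releases.

  # Check the configuration file for syntax errors.
  sudo nginx -t
  # Start the service if it is not running yet, then reload the configuration.
  sudo systemctl enable --now nginx
  sudo systemctl reload nginx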

Step 2: Create a file system and mount target

  1. Create an NFS file system in a region. For more information, see Create file systems.
    Note The file system and the ECS instance on which it is mounted must reside in the same region.
  2. Create a mount target of the VPC type. For more information, see Add a mount target. A quick connectivity check is sketched after this list.
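Before you mount the file system, you can optionally confirm from an ECS instance in the same VPC that the domain name of the mount target resolves and that the NFS port is reachable. The following commands are a minimal sketch; the domain name is a placeholder that you must replace with the domain name of your own mount target.

  # Check that the domain name of the mount target resolves (placeholder domain name).
  getent hosts file-system-id.region.nas.aliyuncs.com
  # Check that the NFS port (2049) is reachable from this instance.
  timeout 3 bash -c '</dev/tcp/file-system-id.region.nas.aliyuncs.com/2049' && echo "NFS port is reachable"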

Step 3: Deploy an NGINX proxy server

  1. Use the following command to install NGINX.
    sudo yum install nginx
  2. Use the following command to install an NFS client.
    sudo yum install nfs-utils
  3. Use the following command to mount the file system on the root directory of the NGINX website.
    sudo mount -t nfs -o vers=4.0 file-system-id.region.nas.aliyuncs.com:/ /usr/share/nginx/html/

    In the preceding command, file-system-id.region.nas.aliyuncs.com:/ specifies the domain name of the mount target. Replace it with the domain name of your own mount target.

  4. Use the following command to write test content to the index.html root file.
    echo "This is Testing for Nginx&NAS" > /usr/share/nginx/html/index.html
  5. Repeat the preceding steps to configure the other three NGINX proxy servers and mount the same NFS file system on each of them. Because the file system is shared, the index.html file written in the previous step is already visible to all of the proxy servers.
  6. Verify the configuration result.

    The proxy servers are configured successfully if each NGINX proxy server can serve the index.html root file. The commands after this list show one way to perform this check.

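The following commands are a minimal sketch of this check. They assume that the NGINX service is running on each proxy server, and the public IP address of the reverse proxy is a placeholder that you must replace with your own.

  # On each proxy server: confirm that the file system is mounted and that
  # NGINX serves the shared index.html file.
  df -h /usr/share/nginx/html/
  curl http://localhost/
  # From a client: request the page through the reverse proxy.
  # Replace <reverse-proxy-public-IP> with the public IP address of the reverse proxy.
  curl http://<reverse-proxy-public-IP>/

If every request returns the test string that you wrote to index.html, the proxy servers share the same data from the Apsara File Storage NAS file system.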