This topic describes how to use OpenTelemetry to perform tracing analysis on NGINX. NGINX is a high-performance, lightweight, and open source web server. It can also be used as a reverse proxy. You can extend the features of NGINX by adding modules. The ngx_otel_module can collect the trace data of NGINX and report the trace data to the OpenTelemetry Collector. Then, the OpenTelemetry Collector forwards the trace data to Managed Service for OpenTelemetry.
Limits
The ngx_otel_module can report the trace data of NGINX only over gRPC, not over HTTP.
The ngx_otel_module does not allow you to configure an authentication token when reporting trace data over gRPC. Therefore, you must deploy the OpenTelemetry Collector: the ngx_otel_module reports the trace data of NGINX to the OpenTelemetry Collector, which then forwards the data to Managed Service for OpenTelemetry. For more information, see the Step 3: Deploy the OpenTelemetry Collector section of this topic.
Before you begin
Procedure
Step 1: Download the ngx_otel_module
Download the ngx_otel_module for Alibaba Cloud Linux, Red Hat Linux, Red Hat Enterprise Linux, or their derivatives.
sudo yum install nginx-module-otel
Download the ngx_otel_module for Debian, Ubuntu, or their derivatives.
sudo apt install nginx-module-otel
Step 2: Enable the ngx_otel_module
To enable the tracing analysis feature for NGINX, you must load the ngx_otel_module and configure its parameters in the main NGINX configuration file /etc/nginx/nginx.conf. For more information about the parameters of the ngx_otel_module, see Module ngx_otel_module.
Enable tracing analysis for all HTTP requests
Note: Replace the following two variables in the configuration file of NGINX:
${OTEL_COLLECTOR_GRPC_RECEIVER_ENDPOINT}: the endpoint that the OpenTelemetry Collector uses to receive data reported over gRPC. Example: localhost:4317. This endpoint is not the gRPC endpoint that you obtained from the Managed Service for OpenTelemetry console in the "Before you begin" section of this topic.
${SERVICE_NAME}: the application name of NGINX. The application name of NGINX is displayed on the Applications page in the Managed Service for OpenTelemetry console.
load_module modules/ngx_otel_module.so;  # Load the ngx_otel_module.
...
http {
    ...
    otel_exporter {
        endpoint ${OTEL_COLLECTOR_GRPC_RECEIVER_ENDPOINT};  # The endpoint that the OpenTelemetry Collector uses to receive data reported over gRPC. Example: localhost:4317.
    }
    otel_trace on;  # Enable tracing.
    otel_service_name ${SERVICE_NAME};  # The application name of NGINX.
    otel_trace_context propagate;  # Inject the trace context to downstream services.
    ...
}
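After you modify nginx.conf, you can verify and apply the configuration. The following commands are a minimal sketch that assumes NGINX is installed on the host and managed from the command line:
sudo nginx -t          # Check that the NGINX configuration syntax is valid.
sudo nginx -s reload   # Reload NGINX so that the new configuration takes effect.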
Enable tracing analysis for HTTP requests sent to a single location
Note: Replace the following two variables in the configuration file of NGINX:
${OTEL_COLLECTOR_GRPC_RECEIVER_ENDPOINT}: the endpoint that the OpenTelemetry Collector uses to receive data reported over gRPC. Example: localhost:4317. This endpoint is not the gRPC endpoint that you obtained from the Managed Service for OpenTelemetry console in the "Before you begin" section of this topic.
${SERVICE_NAME}: the application name of NGINX. The application name of NGINX is displayed in the Managed Service for OpenTelemetry console.
load_module modules/ngx_otel_module.so;  # Load the ngx_otel_module.
...
http {
    otel_exporter {
        endpoint ${OTEL_COLLECTOR_GRPC_RECEIVER_ENDPOINT};  # The endpoint that the OpenTelemetry Collector uses to receive data reported over gRPC. Example: localhost:4317.
    }
    server {
        listen 127.0.0.1:80;
        location /hello {
            otel_trace on;  # Enable tracing only for the 127.0.0.1:80/hello location.
            otel_service_name ${SERVICE_NAME};  # The application name of NGINX.
            otel_trace_context propagate;  # Inject the trace context to downstream services.
            ...
        }
    }
}
Step 3: Deploy the OpenTelemetry Collector
The following example shows how to deploy the OpenTelemetry Collector by using Docker. For more information, see Install the Collector.
Create a file named opentelemetry-config.yaml and copy the following content to the file. This file defines and configures the behavior and features of the OpenTelemetry Collector, including how it receives, processes, and exports data.
Note: Replace ${GRPC_ENDPOINT} and ${GRPC_ENDPOINT_TOKEN} with the gRPC endpoint and authentication token that you obtained in the Before you begin section of this topic.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  otlp:
    endpoint: ${GRPC_ENDPOINT}
    tls:
      insecure: true
    headers:
      "Authentication": "${GRPC_ENDPOINT_TOKEN}"
processors:
  batch:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
Run the following command to start the OpenTelemetry Collector:
docker run -p 4317:4317 -v $(pwd)/opentelemetry-config.yaml:/etc/otelcol-contrib/config.yaml otel/opentelemetry-collector-contrib:0.105.0
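To confirm that the OpenTelemetry Collector is running and listening on port 4317, which the preceding command publishes to the host, you can run the following commands in a separate terminal. This is a sketch that assumes no other containers use the same image:
docker ps --filter ancestor=otel/opentelemetry-collector-contrib:0.105.0   # Check that the container is running.
docker logs <CONTAINER_ID>   # Replace <CONTAINER_ID> with the ID from the previous command and check for startup errors.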
Step 4: View NGINX traces
After you complete the preceding steps and restart NGINX, you can send requests to NGINX to generate traces. Then, you can log on to the Managed Service for OpenTelemetry console to view the NGINX traces generated by OpenTelemetry.
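For example, assuming NGINX listens on port 80 of the local host as configured in Step 2, you can generate traces with requests such as the following:
curl http://localhost:80/        # Generate a trace for the default location.
curl http://localhost:80/hello   # Generate a trace for the /hello location if it is configured.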
On the Applications page, find the NGINX application and click the application name.
On the Trace details tab, view the trace details, including the request duration, client IP address, and HTTP status code. If your backend service is also connected to Managed Service for OpenTelemetry, the traces of NGINX and the backend services are automatically associated to provide an integrated view.
Example
The following example shows how to collect the trace data of NGINX and a backend service and report the trace data to Managed Service for OpenTelemetry.
Step 1: Make preparations
Install Git, Docker, and Docker Compose.
Step 2: Create project directories
nginx-otel-demo
│
├── docker-compose.yml # The configuration file of Docker Compose.
│
├── nginx_conf/ # The configuration files of NGINX.
│ ├── default.conf
│ └── nginx.conf
│
├── otel_conf/ # The configuration file of the OpenTelemetry Collector.
│ └── config.yaml
│
└── backend/ # The backend service developed by using Node.js.
├── Dockerfile
├── main.js
├── package.json
└── package-lock.json
Create directories.
mkdir nginx-otel-demo && cd nginx-otel-demo
mkdir -p nginx_conf otel_conf backend
Step 3: Create the configuration files of NGINX
Create the main configuration file nginx.conf of NGINX.
cat << 'EOF' > nginx_conf/nginx.conf
load_module modules/ngx_otel_module.so;

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    otel_exporter {
        endpoint otel-collector:4317;
    }
    otel_trace on;
    otel_trace_context propagate;
    otel_service_name nginx;

    include /etc/nginx/conf.d/*.conf;
}
EOF
Create the configuration file default.conf of NGINX.
cat << 'EOF' > nginx_conf/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    # Configure parameters.
    location /hello {
        proxy_pass http://backend-api:7001/hello;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
EOF
Step 4: Create the configuration file of the OpenTelemetry Collector
Create the configuration file of the OpenTelemetry Collector. This configuration file defines the methods to receive, process, and report data.
Replace ${GRPC_ENDPOINT} and ${GRPC_ENDPOINT_TOKEN} with the gRPC endpoint and authentication token that you obtained in the Before you begin section of this topic.
cat << 'EOF' > otel_conf/config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  otlp:
    endpoint: ${GRPC_ENDPOINT}
    tls:
      insecure: true
    headers:
      "Authentication": "${GRPC_ENDPOINT_TOKEN}"
processors:
  batch:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
EOF
Step 5: Create a backend service by using Node.js
Create the package.json file. This file contains the configuration information of the backend service, including the service name, service version, and dependencies.
cat << 'EOF' > backend/package.json
{
  "name": "backend",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {},
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/auto-instrumentations-node": "^0.52.0",
    "axios": "^1.7.7",
    "express": "^4.21.1"
  }
}
EOF
Create the main.js file that defines a basic Express web application.
cat << 'EOF' > backend/main.js
"use strict";

const axios = require("axios").default;
const express = require("express");

const app = express();

app.get("/", async (req, res) => {
  const result = await axios.get("http://localhost:7001/hello");
  return res.status(201).send(result.data);
});

app.get("/hello", async (req, res) => {
  console.log("hello world!");
  res.json({ code: 200, msg: "success" });
});

app.use(express.json());

app.listen(7001, () => {
  console.log("Listening on http://localhost:7001");
});
EOF
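Optionally, you can run the backend service locally to sanity-check it before building the image. This sketch assumes that Node.js and npm are installed on your machine; the OpenTelemetry instrumentation is enabled only later through the environment variables in the Dockerfile:
cd backend
npm install                        # Install the dependencies declared in package.json.
node main.js &                     # Start the service in the background on port 7001.
curl http://localhost:7001/hello   # Expected output: {"code":200,"msg":"success"}
kill %1                            # Stop the background service (interactive shell job control).
cd ..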
Create the Dockerfile.
cat << 'EOF' > backend/Dockerfile
FROM node:20.16.0

WORKDIR /app

COPY package*.json ./
RUN npm install
COPY . .

ENV OTEL_TRACES_EXPORTER="otlp"
ENV OTEL_LOGS_EXPORTER=none
ENV OTEL_METRICS_EXPORTER=none
ENV OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=grpc
ENV OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://otel-collector:4317"
ENV OTEL_NODE_RESOURCE_DETECTORS="env,host,os"
ENV OTEL_SERVICE_NAME="ot-nodejs-demo"
ENV NODE_OPTIONS="--require @opentelemetry/auto-instrumentations-node/register"

EXPOSE 7001

CMD ["node", "main.js"]
EOF
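If you want to build and inspect the backend image before wiring it into Docker Compose, you can use the following sketch. The image tag nginx-otel-demo-backend is an arbitrary name used only for illustration; run the commands from the nginx-otel-demo directory:
docker build -t nginx-otel-demo-backend ./backend   # Build the backend image from the Dockerfile.
docker image ls nginx-otel-demo-backend             # Confirm that the image was built.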
Step 6: Create the configuration file of Docker Compose
The configuration file of Docker Compose defines the configurations of a multi-container application, including NGINX that serves as a reverse proxy, the OpenTelemetry Collector, and a Node.js backend service. This file also defines the network connections and port mappings between the components.
cat << 'EOF' > docker-compose.yml
version: "3"
services:
  nginx:
    image: nginx:1.27.2-alpine-otel # The NGINX image variant that includes the ngx_otel_module.
    volumes:
      - ./nginx_conf/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx_conf/default.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "80:80"
    networks:
      - nginx-otel-demo
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      - ./otel_conf/config.yaml:/etc/otelcol-contrib/config.yaml # Load the configuration file of the OpenTelemetry Collector.
    ports:
      - "4317:4317" # OTLP gRPC receiver
    networks:
      - nginx-otel-demo
  backend-api:
    build:
      context: ./backend
      dockerfile: Dockerfile
    environment:
      - NODE_ENV=production
    ports:
      - "7001:7001"
    networks:
      - nginx-otel-demo
networks:
  nginx-otel-demo:
    driver: bridge
EOF
Step 7: Start the service
Run the following command in the nginx-otel-demo directory:
docker compose up -d
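To check that the three containers started successfully, you can run the following commands in the nginx-otel-demo directory. The service names match those defined in docker-compose.yml:
docker compose ps                    # All services should be in the running state.
docker compose logs otel-collector   # Check the Collector logs for connection or authentication errors.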
Access the backend service.
curl http://localhost:80/hello
Expected output:
{"code":200,"msg":"success"}
Log on to the Managed Service for OpenTelemetry console to view the traces of NGINX and the backend service.
In this example, nginx is displayed as the application name of NGINX on the Applications page, and the name of the backend service is ot-nodejs-demo.
FAQ
What do I do if I fail to download nginx-module-otel with the error message "Unable to find a match: nginx-module-otel"?
Check whether you have configured the NGINX package repository. If it is not configured, configure it by referring to nginx-otel.
What do I do if I fail to start NGINX after I configure the ngx_otel_module in the configuration file of NGINX?
Run the nginx -t command to check whether the NGINX configurations are valid, or view the error message in the NGINX logs:
sudo tail -n 50 /var/log/nginx/error.log
What do I do if no NGINX traces are available in the Managed Service for OpenTelemetry console after I enable the ngx_otel_module?
Check whether the value of the otel_exporter.endpoint parameter in the configuration file of NGINX is valid. This endpoint consists of the IP address of the server on which the OpenTelemetry Collector is deployed and the port that the OpenTelemetry Collector uses to receive data reported over gRPC. Example: localhost:4317. You can check whether the configuration is valid by viewing the NGINX logs. If the NGINX error log reports that trace data cannot be exported to the configured endpoint, the endpoint is invalid.
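You can also check whether the Collector endpoint is reachable from the NGINX host. The following commands are a sketch that assumes the OpenTelemetry Collector listens on localhost:4317 and that the ss and nc utilities are installed:
ss -ltn | grep 4317     # Check whether a process is listening on TCP port 4317.
nc -zv localhost 4317   # Check whether the port accepts TCP connections.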
What do I do if the NGINX traces cannot be associated with those of another application?
Check whether otel_trace_context propagate; is configured in the configuration file of NGINX, and whether the application uses the same trace context propagation protocol as NGINX. The ngx_otel_module uses the OpenTelemetry protocol and the W3C Trace Context specification for trace context propagation. Therefore, the NGINX traces can be associated with those of the application only if the application also uses the OpenTelemetry protocol and the W3C Trace Context specification for trace context propagation.
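To verify that NGINX propagates the trace context, you can send a request that carries a W3C traceparent header and check whether the backend span reports the same trace ID. This is a sketch; the trace ID and parent span ID below are arbitrary example values, and the URL assumes the setup from the example in this topic:
curl -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" http://localhost:80/hello
If propagation works, the NGINX span and the backend span both belong to trace ID 4bf92f3577b34da6a3ce929d0e0e4736 in the Managed Service for OpenTelemetry console.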
Does the ngx_otel_module affect the NGINX performance?
The ngx_otel_module is a native module of NGINX. According to the NGINX team, the impact of this module on the NGINX performance is limited to 10% to 15%. For more information, see NGINX Native OpenTelemetry (OTel) Module.