
Managed Service for OpenTelemetry: Use OpenTelemetry to report APISIX trace data

Last Updated: Mar 11, 2026

When APISIX handles API requests, you need visibility into request latency, error propagation, and service dependencies. The APISIX OpenTelemetry plug-in generates distributed traces for each request and sends them through an OpenTelemetry Collector to Managed Service for OpenTelemetry, where you can visualize and analyze trace data.

The APISIX OpenTelemetry plug-in sends data to the Collector over HTTP only. gRPC is not supported.

How it works

(Architecture diagram: APISIX → OpenTelemetry Collector → Managed Service for OpenTelemetry)

Trace data flows through three components:

  1. APISIX generates a trace for each request using the OpenTelemetry plug-in.

  2. OpenTelemetry Collector receives, batches, and exports the trace data over OTLP HTTP.

  3. Managed Service for OpenTelemetry stores and displays the traces in the console.

Prerequisites

Before you begin, make sure that you have:

  • APISIX 2.13.0 or later

  • An HTTP endpoint from Managed Service for OpenTelemetry (see Obtain an endpoint)

Obtain an endpoint

New console

  1. Log on to the Managed Service for OpenTelemetry console. In the left-side navigation pane, click Integration Center.

  2. On the Integration Center page, click the OpenTelemetry card in the Open Source Frameworks section.

  3. In the OpenTelemetry panel, click the Start Integration tab, and then select a region.

    Resources are automatically initialized on first access to a region.
  4. Configure the Connection Type and Export Protocol parameters, then copy the endpoint.

     | Parameter | Recommended value | When to use |
     | --- | --- | --- |
     | Connection Type | Alibaba Cloud VPC Network | Your service runs in the same region on Alibaba Cloud |
     | Connection Type | Public Network | Your service runs outside Alibaba Cloud or in a different region |
     | Export Protocol | HTTP (recommended) | Default for most clients |
     | Export Protocol | gRPC | Your client requires gRPC |


Old console

  1. Log on to the Managed Service for OpenTelemetry console.

  2. In the left-side navigation pane, click Cluster Configurations. On the page that appears, click the Access point information tab.

  3. In the top navigation bar, select a region. In the Cluster Information section, turn on Show Token.

  4. Set the Client parameter to OpenTelemetry. In the Related Information column, copy the endpoint.

    Use a Virtual Private Cloud (VPC) endpoint if your application runs in an Alibaba Cloud production environment. Otherwise, use a public endpoint.


Step 1: Deploy the OpenTelemetry Collector

The OpenTelemetry Collector receives trace data from APISIX and exports it to Managed Service for OpenTelemetry. Choose a deployment method based on your environment:

| Environment | Method | Recommendation |
| --- | --- | --- |
| ACK (Kubernetes) cluster | Install from the ACK marketplace | Recommended for Kubernetes-based deployments |
| Docker or VM | Install manually with Docker | For non-Kubernetes environments |

Option A: Install from the ACK marketplace

  1. Log on to the ACK console. In the left-side navigation pane, choose Marketplace > Marketplace.

  2. Find and click opentelemetry-collector. In the panel that appears, click Deploy in the upper-right corner.

  3. In the Deploy panel, select the target cluster and click Next.

  4. In the Parameters step, add the following configuration and click OK.

    Replace ${HTTP Endpoint} with the endpoint you obtained in Obtain an endpoint. Example: http://tracing-analysis-dc-hz.aliyuncs.com/adapt_xxxxx/api/otlp/traces.
       receivers:
         otlp:
           protocols:
             grpc:
               endpoint: 0.0.0.0:4317
             http:
               cors:
                 allowed_origins:
                 - http://*
                 - https://*
               endpoint: 0.0.0.0:4318 # OTLP HTTP Receiver
       processors:
         batch:
    
       exporters:
         otlphttp:
           traces_endpoint: '${HTTP Endpoint}'
           tls:
             insecure: true
    
       service:
         pipelines:
           traces:
             receivers: [otlp]
             processors: [batch]
             exporters: [otlphttp]

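Before deploying, you can catch wiring typos in the service.pipelines section with a short script: mirror the configuration as a dict and assert that the traces pipeline references only declared components. This is a sketch; the dict below simply mirrors the YAML above.

```python
# Mirror of the Collector config above (receivers/processors/exporters and the
# traces pipeline), used to check that the pipeline references only declared components.
config = {
    "receivers": {"otlp": {}},
    "processors": {"batch": {}},
    "exporters": {"otlphttp": {}},
    "service": {
        "pipelines": {
            "traces": {
                "receivers": ["otlp"],
                "processors": ["batch"],
                "exporters": ["otlphttp"],
            }
        }
    },
}

for name, pipeline in config["service"]["pipelines"].items():
    for section in ("receivers", "processors", "exporters"):
        for component in pipeline[section]:
            assert component in config[section], (
                f"pipeline {name!r} references undeclared {section[:-1]} {component!r}"
            )
print("pipeline wiring OK")
```

The real Collector performs the same validation at startup; checking the file first avoids a crash-loop after deployment.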

Option B: Install manually with Docker

For more deployment options, see Install the Collector.

  1. Create a file named opentelemetry-config.yaml with the following content. This file defines how the Collector receives, processes, and exports trace data.

    Replace ${HTTP Endpoint} with the endpoint you obtained in Obtain an endpoint. Example: http://tracing-analysis-dc-hz.aliyuncs.com/adapt_xxxxx/api/otlp/traces.
       receivers:
         otlp:
           protocols:
             grpc:
               endpoint: 0.0.0.0:4317
             http:
               cors:
                 allowed_origins:
                 - http://*
                 - https://*
               endpoint: 0.0.0.0:4318 # OTLP HTTP Receiver
       processors:
         batch:
    
       exporters:
         otlphttp:
           traces_endpoint: '${HTTP Endpoint}'
           tls:
             insecure: true
    
       service:
         pipelines:
           traces:
             receivers: [otlp]
             processors: [batch]
             exporters: [otlphttp]
  2. Start the Collector.

       # Publish the OTLP receiver ports so that APISIX can reach the Collector from outside the container.
       docker run -p 4317:4317 -p 4318:4318 -v $(pwd)/opentelemetry-config.yaml:/etc/otelcol-contrib/config.yaml otel/opentelemetry-collector-contrib:0.105.0
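The configurations above set traces_endpoint rather than endpoint: in the otlphttp exporter, traces_endpoint takes the full per-signal URL as-is, while endpoint is a base URL to which the exporter appends /v1/traces. A quick sanity check that the value you copied from the console is a full trace URL (a sketch, using the example endpoint shown above):

```python
from urllib.parse import urlparse

# Example endpoint in the same shape as the one shown above; substitute your own.
endpoint = "http://tracing-analysis-dc-hz.aliyuncs.com/adapt_xxxxx/api/otlp/traces"

parsed = urlparse(endpoint)
assert parsed.scheme in ("http", "https"), "endpoint must include a scheme"
assert parsed.netloc, "endpoint must include a host"
# traces_endpoint must already contain the trace-specific path; the exporter
# does not append anything to it.
assert parsed.path.endswith("/api/otlp/traces"), "copy the full trace URL, not just the host"
print("endpoint looks valid:", endpoint)
```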

Step 2: Enable the OpenTelemetry plug-in in APISIX

Configuration differs by APISIX version.

APISIX V3.12 and later

  1. Enable the plug-in in the APISIX config.yaml file.

       ...
       plugins:
         ... # Other enabled plug-ins.
         - opentelemetry # Enable the OpenTelemetry plug-in.
  2. Set the plug-in metadata through the Admin API, replacing the following placeholders with your values. For more information about how to configure the OpenTelemetry plug-in, see the "Configuring the collector" section of the opentelemetry topic.

     | Placeholder | Description | Example |
     | --- | --- | --- |
     | ${Service Name} | Application name displayed on the Applications page in the console | APISIX |
     | ${Host IP} | Host IP address displayed in the Span Details section of the Trace details page | 10.0.0.1 |
     | ${OpenTelemetry Collector Address} | IP address of the OpenTelemetry Collector | 127.0.0.1 |
     | ${admin_key} | Authentication key for the APISIX Admin API | - |
       curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/opentelemetry -H "X-API-KEY: ${admin_key}" -X PUT -d '
       {
           "trace_id_source": "x-request-id",
           "resource": {
             "service.name": "${Service Name}",
             "host.ip":"${Host IP}"
           },
           "collector": {
             "address": "${OpenTelemetry Collector Address}:4318",
             "request_timeout": 3,
             "batch_span_processor": {
               "drop_on_queue_full": false,
               "max_queue_size": 1024,
               "batch_timeout": 2,
               "inactive_timeout": 1,
               "max_export_batch_size": 16
             },
             "set_ngx_var": false
           }
       }'
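If you script the Admin API call instead of pasting the curl command, the PUT body can be assembled programmatically. This is a sketch with hypothetical placeholder values (see the table above); the structure mirrors the curl body exactly.

```python
import json

# Hypothetical values; substitute your own.
service_name = "APISIX"
host_ip = "10.0.0.1"
collector_address = "127.0.0.1"

metadata = {
    "trace_id_source": "x-request-id",
    "resource": {"service.name": service_name, "host.ip": host_ip},
    "collector": {
        "address": f"{collector_address}:4318",  # OTLP HTTP receiver port
        "request_timeout": 3,
        "batch_span_processor": {
            "drop_on_queue_full": False,
            "max_queue_size": 1024,
            "batch_timeout": 2,
            "inactive_timeout": 1,
            "max_export_batch_size": 16,
        },
        "set_ngx_var": False,
    },
}

# This JSON string is what the curl command above sends as the PUT body.
body = json.dumps(metadata, indent=2)
print(body)
```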

APISIX earlier than V3.12

Enable the plug-in and set the Collector address in the APISIX config.yaml file.

Replace the following placeholders with your values:

| Placeholder | Description | Example |
| --- | --- | --- |
| ${Service Name} | Application name displayed on the Applications page in the console | APISIX |
| ${Host IP} | Host IP address displayed in the Span Details section of the Trace details page | 10.0.0.1 |
| ${OpenTelemetry Collector Address} | IP address of the OpenTelemetry Collector | 127.0.0.1 |
...
plugins:
  ... # Other enabled plug-ins.
  - opentelemetry # Enable the OpenTelemetry plug-in.

plugin_attr:
  ...
  opentelemetry: # OpenTelemetry plug-in configuration.
    resource:
      service.name: ${Service Name} # Application name.
      host.ip: ${Host IP}   # Host IP address.
    collector:
      address: ${OpenTelemetry Collector Address}:4318 # OTLP HTTP receiver endpoint of the Collector.
      request_timeout: 3
    batch_span_processor: # Batch processing configuration.
      drop_on_queue_full: false
      max_queue_size: 6
      batch_timeout: 2
      inactive_timeout: 1
      max_export_batch_size: 2

For more information about how to configure the OpenTelemetry plug-in, see the "Configuring the collector" section of the opentelemetry topic.

Step 3: Set the plug-in scope

Use the APISIX Admin API to apply the OpenTelemetry plug-in globally or to specific routes.

Apply globally

Enable the plug-in for all routes:

If the sampler parameter is set to always_on, every request is sampled and a trace is generated.

curl 'http://127.0.0.1:9180/apisix/admin/global_rules/1' \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
-X PUT -d '{
  "plugins": {
      "opentelemetry": {
          "sampler": {
              "name": "always_on"
          }
      }
  }
}'
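Sampling every request can be expensive on a busy gateway. According to the APISIX opentelemetry plug-in documentation, the sampler also supports a trace_id_ratio mode that samples only a fraction of requests. A sketch of the corresponding rule body (the 0.5 fraction is an illustrative value):

```python
import json

# Hypothetical rule body for probabilistic sampling; send it as the PUT body
# in the same Admin API call shown above.
rule = {
    "plugins": {
        "opentelemetry": {
            "sampler": {
                "name": "trace_id_ratio",   # sample a fraction of requests
                "options": {"fraction": 0.5},
            }
        }
    }
}
print(json.dumps(rule))
```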

Apply to a specific route

Enable the plug-in only for requests matching /get:

curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
-X PUT -d '
{
  "uri": "/get",
  "plugins": {
      "opentelemetry": {
          "sampler": {
              "name": "always_on"
          }
      }
  },
  "upstream": {
      "type": "roundrobin",
      "nodes": {
          "httpbin.org:80": 1
      }
  }
}'

For more information about how to configure the attributes of OpenTelemetry, see the "Attributes" section of the opentelemetry topic.

Step 4: Verify traces in the console

After you configure the plug-in, send requests through APISIX to generate traces, then view them in the Managed Service for OpenTelemetry console.

  1. Log on to the Managed Service for OpenTelemetry console. On the Applications page, click the name of the APISIX application.

    Applications page

  2. On the Trace details tab, view the APISIX trace information.

    Trace details

End-to-end example with Docker Compose

This example deploys the full APISIX stack with an OpenTelemetry Collector using the official APISIX Docker Compose demo.

Prerequisites

Before you begin, make sure that you have:

  • Git, Docker, and Docker Compose installed

  • APISIX 2.13.0 or later

  • An HTTP endpoint from Managed Service for OpenTelemetry

Procedure

  1. Clone the APISIX Docker demo.

       git clone https://github.com/apache/apisix-docker.git
       cd apisix-docker/example
  2. Add the OpenTelemetry Collector configuration. Create a folder named ot_conf in the apisix-docker/example directory, and create a file named config.yaml inside it.

    Replace ${HTTP Endpoint} with the endpoint obtained in Obtain an endpoint. Example: http://tracing-analysis-dc-hz.aliyuncs.com/adapt_xxxxx/api/otlp/traces.
       receivers:
         otlp:
           protocols:
             grpc:
               endpoint: 0.0.0.0:4317
             http:
               cors:
                 allowed_origins:
                 - http://*
                 - https://*
               endpoint: 0.0.0.0:4318
       processors:
         batch:
    
       exporters:
         otlphttp:
           traces_endpoint: '${HTTP Endpoint}'
           tls:
             insecure: true
    
       service:
         pipelines:
           traces:
             receivers: [otlp]
             processors: [batch]
             exporters: [otlphttp]
  3. Add the Collector service to Docker Compose. Edit the apisix-docker/example/docker-compose.yml file and add the following service definition:

       otel-collector:
         image: otel/opentelemetry-collector-contrib:0.105.0
         volumes:
           - ./ot_conf/config.yaml:/etc/otelcol-contrib/config.yaml
         ports:
           - 4317:4317 # OTLP gRPC receiver
           - 4318:4318 # OTLP HTTP receiver
         networks:
           apisix:
     For reference, here is the complete apisix-docker/example/docker-compose.yml file after the otel-collector service is added:

        #
       # Licensed to the Apache Software Foundation (ASF) under one or more
       # contributor license agreements.  See the NOTICE file distributed with
       # this work for additional information regarding copyright ownership.
       # The ASF licenses this file to You under the Apache License, Version 2.0
       # (the "License"); you may not use this file except in compliance with
       # the License.  You may obtain a copy of the License at
       #
       #     http://www.apache.org/licenses/LICENSE-2.0
       #
       # Unless required by applicable law or agreed to in writing, software
       # distributed under the License is distributed on an "AS IS" BASIS,
       # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
       # See the License for the specific language governing permissions and
       # limitations under the License.
       #
    
       version: "3"
    
       services:
         apisix:
           image: apache/apisix:${APISIX_IMAGE_TAG:-3.9.0-debian}
           restart: always
           volumes:
             - ./apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml:ro
           depends_on:
             - etcd
           ##network_mode: host
           ports:
             - "9180:9180/tcp"
             - "9080:9080/tcp"
             - "9091:9091/tcp"
             - "9443:9443/tcp"
             - "9092:9092/tcp"
           networks:
             apisix:
    
         etcd:
           image: bitnami/etcd:3.5.11
           restart: always
           volumes:
             - etcd_data:/bitnami/etcd
           environment:
             ETCD_ENABLE_V2: "true"
             ALLOW_NONE_AUTHENTICATION: "yes"
             ETCD_ADVERTISE_CLIENT_URLS: "http://etcd:2379"
             ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"
           ports:
             - "2379:2379/tcp"
           networks:
             apisix:
    
         web1:
           image: nginx:1.19.0-alpine
           restart: always
           volumes:
             - ./upstream/web1.conf:/etc/nginx/nginx.conf
           ports:
             - "9081:80/tcp"
           environment:
             - NGINX_PORT=80
           networks:
             apisix:
    
         web2:
           image: nginx:1.19.0-alpine
           restart: always
           volumes:
             - ./upstream/web2.conf:/etc/nginx/nginx.conf
           ports:
             - "9082:80/tcp"
           environment:
             - NGINX_PORT=80
           networks:
             apisix:
    
         prometheus:
           image: prom/prometheus:v2.25.0
           restart: always
           volumes:
             - ./prometheus_conf/prometheus.yml:/etc/prometheus/prometheus.yml
           ports:
             - "9090:9090"
           networks:
             apisix:
    
         grafana:
           image: grafana/grafana:7.3.7
           restart: always
           ports:
             - "3000:3000"
           volumes:
             - "./grafana_conf/provisioning:/etc/grafana/provisioning"
             - "./grafana_conf/dashboards:/var/lib/grafana/dashboards"
             - "./grafana_conf/config/grafana.ini:/etc/grafana/grafana.ini"
           networks:
             apisix:
    
         otel-collector:
           image: otel/opentelemetry-collector-contrib:0.105.0
           volumes:
             - ./ot_conf/config.yaml:/etc/otelcol-contrib/config.yaml
           ports:
             - 4317:4317 # OTLP gRPC receiver
             - 4318:4318 # OTLP HTTP receiver
           networks:
             apisix:
    
       networks:
         apisix:
           driver: bridge
    
       volumes:
         etcd_data:
           driver: local
  4. Enable the OpenTelemetry plug-in in APISIX. Add the following content to the apisix-docker/example/apisix_conf/config.yaml file. If the file already defines the plugins or plugin_attr keys, merge these entries into them instead of appending duplicates, because duplicate top-level keys are not valid YAML. Note that listing plugins in config.yaml replaces the default plug-in list, so also include any other plug-ins that you rely on:

       plugins:
         - opentelemetry
    
       plugin_attr:
         prometheus:
           export_addr:
             ip: "0.0.0.0"
             port: 9091
         opentelemetry:
           resource:
             service.name: APISIX
             host.ip: 127.0.0.1
           collector:
              address: docker-apisix-otel-collector-1:4318 # OTLP HTTP receiver address. With -p docker-apisix, Compose names the container <project>-<service>-1, which is resolvable on the apisix bridge network.
             request_timeout: 3
           batch_span_processor:
             drop_on_queue_full: false
             max_queue_size: 6
             batch_timeout: 2
             inactive_timeout: 1
             max_export_batch_size: 2
  5. Start all services. Run the following command from the apisix-docker/example directory:

       docker compose -p docker-apisix up -d
  6. Enable the plug-in globally.

       curl 'http://127.0.0.1:9180/apisix/admin/global_rules/1' \
       -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
       -X PUT -d '{
           "plugins": {
               "opentelemetry": {
                   "sampler": {
                       "name": "always_on"
                   }
               }
           }
       }'
  7. Create a test route and send a request.

    1. Create a route.

      curl "http://127.0.0.1:9180/apisix/admin/routes/1" -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1" -X PUT -d '
      {
        "methods": ["GET"],
        "host": "example.com",
        "uri": "/anything/*",
        "upstream": {
          "type": "roundrobin",
          "nodes": {
            "httpbin.org:80": 1
          }
        }
      }'
    2. Send a request. The plug-in generates a trace and reports it to Managed Service for OpenTelemetry.

      curl -i -X GET "http://127.0.0.1:9080/anything/foo?arg=10" -H "Host: example.com"

      Expected output:

      HTTP/1.1 200 OK
      Content-Type: application/json
      Content-Length: 501
      Connection: keep-alive
      Date: Wed, 24 Jul 2024 03:26:11 GMT
      Access-Control-Allow-Origin: *
      Access-Control-Allow-Credentials: true
      Server: APISIX/3.9.0
      
      {
        "args": {
          "arg": "10"
        },
        "data": "",
        "files": {},
        "form": {},
        "headers": {
          "Accept": "*/*",
          "Host": "example.com",
          "Traceparent": "00-xxxxxx-xxxx-01",
          "User-Agent": "curl/7.61.1",
          "X-Amzn-Trace-Id": "Root=1-xxx-xxxx",
          "X-Forwarded-Host": "example.com"
        },
        "json": null,
        "method": "GET",
        "origin": "x.x.x.x, x.x.x.x",
        "url": "http://example.com/anything/foo?arg=10"
      }

      The Traceparent header in the response confirms that trace context propagation is active.

  8. View traces in the console.

    1. Log on to the Managed Service for OpenTelemetry console. On the Applications page, click the name of the APISIX application. Applications page

    2. On the Trace details tab, view the trace information. Trace details
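The Traceparent header returned in the verification step follows the W3C Trace Context format (version-traceid-spanid-flags, all lowercase hex). A minimal sketch of how to decode it, using the example value from the W3C specification (the parse_traceparent helper is hypothetical, not part of APISIX; the real header value in the response above is elided as 00-xxxxxx-xxxx-01):

```python
# Minimal parser for the W3C traceparent header that APISIX injects into
# upstream requests.
def parse_traceparent(header: str) -> dict:
    version, trace_id, span_id, flags = header.split("-")
    assert len(trace_id) == 32 and int(trace_id, 16) != 0, "invalid trace-id"
    assert len(span_id) == 16 and int(span_id, 16) != 0, "invalid parent-id"
    return {
        "version": version,
        "trace_id": trace_id,
        "span_id": span_id,
        # Bit 0 of trace-flags marks the request as sampled.
        "sampled": bool(int(flags, 16) & 0x01),
    }

# Example header in the W3C Trace Context specification format.
example = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
info = parse_traceparent(example)
print(info["sampled"])  # with the always_on sampler, the sampled flag is set
```

The trace_id field is what you can search for in the Managed Service for OpenTelemetry console to locate the corresponding trace.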
