
ApsaraMQ for Kafka:Send Filebeat output to ApsaraMQ for Kafka over the Internet

Last Updated: Mar 11, 2026

When you collect logs with Filebeat across multiple servers, you need a centralized destination that accepts high-throughput writes over the public Internet with built-in authentication. ApsaraMQ for Kafka provides an SSL endpoint (port 9093) that Filebeat can connect to directly, so you can stream logs into Kafka topics without managing your own brokers or VPN tunnels.

This guide walks you through retrieving your instance credentials, creating a topic, configuring Filebeat with SSL/SASL authentication, and verifying message delivery.

Prerequisites

Before you begin, make sure that you have:

  • An ApsaraMQ for Kafka instance that is deployed, in the Running state, and accessible over the public Internet.

  • Filebeat downloaded and installed on the server that produces the logs you want to collect.

Step 1: Get the endpoint, username, and password

Filebeat connects to ApsaraMQ for Kafka through an SSL endpoint (port 9093) over the public Internet.

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the Resource Distribution section of the Overview page, select the region of your instance.

  3. On the Instances page, click the instance name.

  4. On the Instance Details page, find the following values:

    • Endpoint Information section: Copy the SSL endpoint. The endpoint consists of multiple broker addresses in the format alikafka-pre-cn-zv**********-{N}.alikafka.aliyuncs.com:9093.

    • Configuration Information section: Note the Username and Password.

For details about the differences between endpoint types, see Comparison among endpoints.
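Before you continue, you can optionally confirm that the SSL endpoint is reachable from the server where Filebeat runs. The following sketch assumes openssl is installed; replace the broker address with one of the addresses from your own SSL endpoint:

```shell
# Open a TLS connection to one broker of the SSL endpoint (port 9093).
# A successful handshake prints the server certificate chain; a timeout
# or connection refusal points to a network or endpoint problem.
openssl s_client -connect alikafka-pre-cn-zv**********-1.alikafka.aliyuncs.com:9093 </dev/null
```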

Step 2: Create a topic

Create a topic to receive Filebeat messages.

Important

Create the topic in the same region as the Elastic Compute Service (ECS) instance where your producers and consumers run. Topics cannot be used across regions.

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the Resource Distribution section of the Overview page, select the region of your instance.

  3. On the Instances page, click the instance name.

  4. In the left-side navigation pane, click Topics.

  5. On the Topics page, click Create Topic.

  6. In the Create Topic panel, configure the following parameters and click OK.

  • Name: The topic name. Example: demo.

  • Description: A brief description of the topic. Example: demo test.

  • Partitions: The number of partitions. Example: 12.

  • Storage Engine: The storage engine type. Available only for non-serverless Professional Edition instances; other instance types default to Cloud Storage. Example: Cloud Storage.

    Cloud Storage: Uses Alibaba Cloud disks with three-replica distributed storage. Provides low latency, high performance, long durability, and high reliability. Required for Standard (High Write) edition instances.

    Local Storage: Uses the in-sync replicas (ISR) algorithm of open-source Apache Kafka with three-replica distributed storage.

  • Message Type: The message ordering guarantee. Example: Normal Message.

    Normal Message: Messages with the same key are stored in the same partition in send order. Partition ordering may not be preserved during a broker failure. Auto-selected when Storage Engine is set to Cloud Storage.

    Partitionally Ordered Message: Messages with the same key are stored in the same partition in send order. Ordering is preserved even during a broker failure, but affected partitions become temporarily unavailable. Auto-selected when Storage Engine is set to Local Storage.

  • Log Cleanup Policy: The log retention policy. Available only when Storage Engine is set to Local Storage (Professional Edition only). Example: Compact.

    Delete: The default policy. Retains messages up to the maximum retention period and deletes the oldest messages when storage usage exceeds 85%.

    Compact: The log compaction policy from Apache Kafka. Retains only the latest value for each message key. Suitable for scenarios such as restoring a failed system or reloading the cache after a system restarts. For example, when you use Kafka Connect or Confluent Schema Registry, you must store system status and configuration information in a log-compacted topic.

    Important: You can use log-compacted topics only in specific cloud-native components, such as Kafka Connect and Confluent Schema Registry. For more information, see aliware-kafka-demos.

  • Tag: Optional tags for the topic. Example: demo.

After creation, the topic appears on the Topics page.
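If you have the open-source Apache Kafka CLI tools on hand, you can also confirm the topic and its partition count from the command line. This is a sketch, assuming kafka-topics.sh from an Apache Kafka distribution and a client.properties file that contains the same SASL_SSL settings you later configure for Filebeat:

```shell
# Describe the topic over SASL_SSL (assumes the open-source Kafka CLI).
# client.properties holds standard Kafka client settings such as
# security.protocol=SASL_SSL and sasl.mechanism=PLAIN.
kafka-topics.sh --describe --topic demo \
  --bootstrap-server alikafka-pre-cn-zv**********-1.alikafka.aliyuncs.com:9093 \
  --command-config client.properties
```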

Step 3: Configure and run Filebeat

Configure Filebeat to send log data to the topic you created over an authenticated SSL connection.

Download the CA certificate

Download the certificate authority (CA) certificate for SSL on the server where Filebeat is installed:

cd <filebeat-install-dir>
wget https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20220826/ytsw/only-4096-ca-cert

Replace <filebeat-install-dir> with your Filebeat installation directory.
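To confirm that the download succeeded, you can inspect the certificate. This sketch assumes the file is PEM-encoded and that openssl is available on the server:

```shell
# Print the subject, issuer, and validity period of the downloaded CA
# certificate (assumes a PEM-encoded file).
openssl x509 -in only-4096-ca-cert -noout -subject -issuer -dates
```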

Create the Filebeat configuration file

Create a file named output.yml in the Filebeat installation directory with the following content:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages    # Path to the log file to monitor

output.kafka:
  hosts:
    - "alikafka-pre-cn-zv**********-1.alikafka.aliyuncs.com:9093"
    - "alikafka-pre-cn-zv**********-2.alikafka.aliyuncs.com:9093"
    - "alikafka-pre-cn-zv**********-3.alikafka.aliyuncs.com:9093"

  username: "<your-username>"        # Instance username from Configuration Information
  password: "<your-password>"        # Instance password from Configuration Information

  topic: "filebeat_test"
  partition.round_robin:
    reachable_only: false

  ssl.certificate_authorities:
    - "<filebeat-install-dir>/only-4096-ca-cert"
  ssl.verification_mode: none

  required_acks: 1
  compression: none
  max_message_bytes: 1000000

Replace the following placeholders with your actual values:

  • <your-username>: The username from the Configuration Information section of your instance. Example: alikafka_pre-cn-v641e1d***

  • <your-password>: The password from the Configuration Information section of your instance. Example: aeN3WLRoMPRXmAP2jvJuGk84Kuuo***

  • <filebeat-install-dir>: The absolute path to the Filebeat installation directory. Example: /home/admin/filebeat/filebeat-7.7.0-linux-x86_64

Parameter reference

  • hosts: The SSL endpoint addresses of your ApsaraMQ for Kafka instance. Use the public endpoint (port 9093).

  • username: The instance username for SASL authentication. When username and password are set, Filebeat uses PLAIN as the SASL mechanism.

  • password: The instance password for SASL authentication.

  • topic: The Kafka topic to send messages to.

  • partition.round_robin.reachable_only: Whether to send messages only to reachable partitions. false: output is not blocked if a partition leader is unavailable. true: output may be blocked if a partition leader is unavailable. Default: false.

  • ssl.certificate_authorities: The absolute path to the downloaded CA certificate file.

  • ssl.verification_mode: The SSL certificate verification mode. Set to none to skip hostname verification. Default: full.

  • required_acks: The ACK reliability level. 0: no acknowledgment. 1: wait for the partition leader to confirm. -1: wait for all in-sync replicas to confirm. Default: 1.

  • compression: The compression codec. Valid values: none, snappy (a C++ compression and decompression library), lz4 (a lossless data compression algorithm for fast compression and decompression), and gzip (the GNU file compression program). Default: gzip.

  • max_message_bytes: The maximum message size in bytes. Must be smaller than the maximum message size configured for your ApsaraMQ for Kafka instance. Default: 1000000.

For the full list of Kafka output parameters, see the Filebeat Kafka output plugin documentation.
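Before starting Filebeat, you can have it validate the configuration and probe the Kafka output. Both subcommands are part of standard Filebeat:

```shell
# Check the configuration file for syntax errors.
./filebeat test config -c ./output.yml

# Attempt a real connection to the configured Kafka output, including
# the TLS handshake and SASL authentication.
./filebeat test output -c ./output.yml
```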

Send a test message

Run Filebeat with the configuration file:

./filebeat -c ./output.yml

With type: log configured, Filebeat starts shipping entries from /var/log/messages immediately. To quickly test the pipeline without waiting for new log entries, change the input type to stdin, run the command above, type test, and press Enter:

# Quick test configuration -- replace the filebeat.inputs section
filebeat.inputs:
  - type: stdin

Step 4: Verify message delivery

After Filebeat starts, check the topic in the ApsaraMQ for Kafka console to confirm messages are arriving.

Check partition status

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the Resource Distribution section of the Overview page, select your region.

  3. On the Instances page, click the instance name.

  4. In the left-side navigation pane, click Topics.

  5. On the Topics page, click the topic name, then click the Partition Status tab.

The partition status table shows the following information:

  • Partition ID: The partition ID.

  • Minimum Offset: The earliest message offset in the partition.

  • Maximum Offset: The latest message offset in the partition.

  • Messages: The total number of messages in the partition.

  • Last Updated At: The time when the most recent message was stored.

If Messages is greater than zero and Last Updated At shows a recent timestamp, Filebeat is delivering messages successfully.

Query a message by offset

  1. In the left-side navigation pane, click Message Query.

  2. Set Search Method to Search by offset.

  3. Select the Topic and Partition, enter an Offset value, and click Search. The console returns messages with offsets greater than or equal to the specified value in the selected partition.

The query results include:

  • Partition: The partition from which the message was retrieved.

  • Offset: The message offset.

  • Key: The message key, displayed as a string.

  • Value: The message content, displayed as a string.

  • Created At: The timestamp when the message was produced. This is the client-recorded timestamp or the value of the ProducerRecord timestamp field. A 1970/x/x timestamp means the field was set to 0 or an invalid value. Clients on ApsaraMQ for Kafka 0.9 or earlier cannot set this field.

  • Actions: Download Key and Download Value let you download the full message key or content.

The console displays up to 1 KB of content per message; larger messages are truncated in the display. Download the message to view the full content. Each download is limited to 10 MB.
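As an alternative to the console, you can watch messages arrive with the open-source Kafka console consumer. This is a sketch, assuming kafka-console-consumer.sh from an Apache Kafka distribution and a client.properties file that mirrors the SASL_SSL settings used in the Filebeat output (the property names in the comments are standard Kafka client settings):

```shell
# Consume the topic from the beginning over SASL_SSL. client.properties
# should contain, for example:
#   security.protocol=SASL_SSL
#   sasl.mechanism=PLAIN
#   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule \
#     required username="<your-username>" password="<your-password>";
#   ssl.truststore.location=<path-to-truststore>
kafka-console-consumer.sh --topic filebeat_test --from-beginning \
  --bootstrap-server alikafka-pre-cn-zv**********-1.alikafka.aliyuncs.com:9093 \
  --consumer.config client.properties
```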

Troubleshooting

  • Authentication errors in Filebeat logs

    Cause: Incorrect username, password, or missing SSL configuration.

    Solution: Verify the Username and Password in the Configuration Information section of your instance, and make sure that ssl.certificate_authorities points to the downloaded CA certificate file.

  • Unexpected gzip compression errors

    Cause: Compression codec mismatch between Filebeat and the Kafka instance.

    Solution: Set compression: none in the configuration, or verify that your ApsaraMQ for Kafka instance supports the selected codec.

What's next