
ApsaraMQ for Kafka:Connect Filebeat to ApsaraMQ for Kafka over the internet

Last Updated: Mar 11, 2026

Internet-based log pipelines often need to consume messages from a managed Kafka service without VPC connectivity. ApsaraMQ for Kafka exposes a public SSL endpoint on port 9093 that Filebeat can connect to using its built-in Kafka input plugin with SASL authentication.

This topic walks through the end-to-end setup: obtaining your SSL endpoint and credentials, creating a topic and consumer group, and configuring Filebeat to consume messages.

# Minimal Filebeat configuration for ApsaraMQ for Kafka (internet/SSL)
filebeat.inputs:
- type: kafka
  hosts:
    - <your-kafka-endpoint-1>:9093
    - <your-kafka-endpoint-2>:9093
    - <your-kafka-endpoint-3>:9093
  username: "<your-instance-username>"
  password: "<your-instance-password>"
  topics: ["<your-topic>"]
  group_id: "<your-group-id>"
  ssl.certificate_authorities: ["<path-to-ca-cert>"]
  ssl.verification_mode: none

output.console:
  pretty: true

The following sections explain how to get each value.

Prerequisites

Before you begin, make sure that you have:

  • An ApsaraMQ for Kafka instance that is purchased and deployed, with internet access enabled so that the SSL endpoint is available.
  • Filebeat downloaded and installed on the machine that consumes the messages.

Step 1: Get the endpoint and credentials

Filebeat connects through the public SSL endpoint and authenticates with the SASL username and password of your ApsaraMQ for Kafka instance.

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the Resource Distribution section of the Overview page, select the region where your instance is deployed.

  3. On the Instances page, click the name of your instance.

  4. On the Instance Details page, collect the following values:

    • SSL endpoint: in the Endpoint Information section. The public endpoint uses port 9093.
    • Username: in the Configuration Information section. The SASL username for authentication.
    • Password: in the Configuration Information section. The SASL password for authentication.

Note

ApsaraMQ for Kafka provides multiple endpoint types. For internet access, use the SSL endpoint. For details, see Comparison among endpoints.
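
Optionally, you can check that the public SSL endpoint is reachable from the host that runs Filebeat before you configure anything else. A minimal check, assuming the placeholder hostname is replaced with one of your SSL endpoint addresses, uses openssl s_client:

openssl s_client -connect <your-kafka-endpoint-1>:9093 </dev/null

If the TLS handshake completes and the server certificate chain is printed, port 9093 is open and the endpoint is reachable from this host.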

Step 2: Create a topic

Create a topic that Filebeat consumes messages from.

Important

Create the topic in the same region as your Elastic Compute Service (ECS) instance. Topics cannot be used across regions. For example, if producers and consumers run on an ECS instance in the China (Beijing) region, the topic must also be in the China (Beijing) region.

  1. In the ApsaraMQ for Kafka console, navigate to your instance and click Topics in the left-side navigation pane.

  2. Click Create Topic.

  3. In the Create Topic panel, configure the following parameters and click OK.

    • Name: The topic name. Example: demo.
    • Description: A brief description of the topic. Example: demo test.
    • Partitions: The number of partitions. Example: 12.
    • Storage Engine: Cloud Storage (default) or Local Storage. Cloud Storage stores data in three distributed replicas on Alibaba Cloud disks with low latency and high reliability. Local Storage uses the Apache Kafka in-sync replicas (ISR) algorithm. Only non-serverless Professional Edition instances allow you to select the storage engine; if the Instance Edition is set to Standard (High Write), only Cloud Storage is available. Example: Cloud Storage.
    • Message Type: Normal Message or Partitionally Ordered Message. The value is selected automatically based on the storage engine: Cloud Storage sets Normal Message, and Local Storage sets Partitionally Ordered Message. Partitionally Ordered Message preserves message order during broker failures, but affected partitions may be temporarily unavailable. Example: Normal Message.
    • Log Cleanup Policy: Available only with Local Storage. Delete retains messages up to the maximum retention period and removes the oldest messages when storage usage exceeds 85%. Compact retains only the latest value for each message key and is used with Kafka Connect and Confluent Schema Registry; see log compaction in Apache Kafka. Important: Log-compacted topics can be used only in specific cloud-native components, such as Kafka Connect and Confluent Schema Registry. For more information, see aliware-kafka-demos. Example: Compact.
    • Tag: Optional tags for the topic. Example: demo.

After the topic is created, it appears on the Topics page.

Step 3: Send test messages

Publish messages to the topic so Filebeat has data to consume.

  1. In the ApsaraMQ for Kafka console, go to Topics and click the topic name.

  2. In the upper-right corner of the Topic Details page, click Send Message.

  3. In the Start to Send and Consume Message panel, choose one of the following sending methods (a command-line alternative is sketched after this list):

    • Console: Enter a Message Key (for example, demo) and Message Content (for example, {"key": "test"}). Optionally, set Send to Specified Partition to Yes and enter a Partition ID to target a specific partition. For information about how to query partition IDs, see View partition status. After sending the message, use ApsaraMQ for Kafka SDKs or run the Docker commands displayed in the panel to subscribe to the test message.

    • Docker: Run the Docker commands displayed in the panel.

    • SDK: Select an SDK for your programming language and follow the instructions.
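
You can also publish a test message from a shell. The following is a hedged sketch that uses kcat (formerly kafkacat) over SASL_SSL; it assumes the default PLAIN SASL mechanism and the CA certificate that you download in Step 5. The Docker commands shown in the console panel remain the authoritative reference for your instance.

# Produce one JSON test message to the topic over the public SSL endpoint
echo '{"key": "test"}' | kcat -P \
  -b <your-kafka-endpoint-1>:9093 \
  -t <your-topic> \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username="<your-instance-username>" \
  -X sasl.password="<your-instance-password>" \
  -X ssl.ca.location=<path-to-ca-cert>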

Step 4: Create a consumer group

Create a consumer group for Filebeat to use.

  1. In the ApsaraMQ for Kafka console, navigate to your instance and click Groups in the left-side navigation pane.

  2. Click Create Group.

  3. In the Create Group panel, enter a Group ID and an optional Description, attach tags if needed, and click OK.

The consumer group appears on the Groups page.

Step 5: Download the certificate, configure Filebeat, and start consuming

With the endpoint, topic, and consumer group ready, configure Filebeat to consume messages over SSL.

Download the CA certificate

Internet connections to ApsaraMQ for Kafka use SSL encryption. Filebeat needs the CA certificate to verify the server identity. In the Filebeat installation directory, run:

wget https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20220826/ytsw/only-4096-ca-cert

This downloads the only-4096-ca-cert file to the current directory.
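
Optionally, you can confirm that the download succeeded and that the file is a PEM-encoded certificate, which is the format that Filebeat's ssl.certificate_authorities setting expects:

openssl x509 -in only-4096-ca-cert -noout -subject -dates

If the command prints a subject and validity dates, the file can be used as a CA certificate.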

Create the configuration file

Create a file named input.yml in the Filebeat installation directory:

filebeat.inputs:
- type: kafka
  hosts:
    - <your-kafka-endpoint-1>:9093
    - <your-kafka-endpoint-2>:9093
    - <your-kafka-endpoint-3>:9093
  username: "<your-instance-username>"
  password: "<your-instance-password>"
  topics: ["<your-topic>"]
  group_id: "<your-group-id>"
  ssl.certificate_authorities: ["<path-to-ca-cert>"]
  ssl.verification_mode: none

output.console:
  pretty: true

Replace the placeholders with your actual values:

  • <your-kafka-endpoint-{1,2,3}>: SSL endpoint hostnames from Step 1. Example: alikafka-pre-cn-zv**********-1.alikafka.aliyuncs.com.
  • <your-instance-username>: Username from Step 1. Example: alikafka_pre-cn-v641e1dt***.
  • <your-instance-password>: Password from Step 1. Example: aeN3WLRoMPRXmAP2jvJuGk84Kuuo***.
  • <your-topic>: Topic name from Step 2. Example: filebeat_test.
  • <your-group-id>: Group ID from Step 4. Example: filebeat_group.
  • <path-to-ca-cert>: Absolute path to the downloaded CA certificate. Example: /home/admin/filebeat/filebeat-7.7.0-linux-x86_64/only-4096-ca-cert.
Important

ssl.verification_mode: none disables verification of the server certificate and hostname. This is acceptable for testing but not recommended for production. For production, set it to full and make sure that the CA certificate chain is valid.
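
A production-oriented variant of the SSL settings might look like the following sketch. Note that full mode also checks that the endpoint hostname matches the server certificate, so if the handshake fails after this change, verify the certificate chain and the hostname that you connect to.

- type: kafka
  # ... hosts, credentials, topic, and group as in input.yml ...
  ssl.certificate_authorities: ["<path-to-ca-cert>"]
  # full verifies both the server certificate chain and the hostname.
  ssl.verification_mode: full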

For all available Kafka input parameters, see Kafka input plugin in the Filebeat documentation.

Start Filebeat

Run the following command in the Filebeat installation directory:

./filebeat -c ./input.yml
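
If Filebeat starts but no messages appear, you can rerun it with logs sent to stderr to inspect connection and authentication errors; adding -d "*" enables full debug output.

./filebeat -e -c ./input.yml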

On a successful connection, Filebeat prints the consumed messages to the console in JSON format.

What to do next

  • Route to Elasticsearch: Replace the output.console section with an output.elasticsearch configuration to index consumed messages. See Elasticsearch output in the Filebeat documentation and the sketch after this list.

  • Parse message content: Add parsers or processors to your Filebeat configuration to parse JSON or multiline messages before output.
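
As a hedged sketch of both follow-ups, the configuration below keeps the Kafka input from input.yml, adds a decode_json_fields processor to parse the JSON payload, and replaces output.console with output.elasticsearch. The Elasticsearch host, credentials, target field name, and the assumption that the Kafka record value arrives in the message field are placeholders to adapt to your environment.

# Keep the filebeat.inputs section from input.yml unchanged.

processors:
  - decode_json_fields:
      fields: ["message"]      # assumes the Kafka record value is carried in "message"
      target: "kafka_payload"  # hypothetical field that receives the parsed JSON

output.elasticsearch:
  hosts: ["<your-elasticsearch-host>:9200"]
  username: "<your-es-username>"
  password: "<your-es-password>"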