An ApsaraMQ for Kafka instance can be connected to Filebeat as an input. This topic describes how to use Filebeat to consume messages from ApsaraMQ for Kafka in a virtual private cloud (VPC).
Prerequisites
Before you start this tutorial, make sure that the following operations are complete:
- An ApsaraMQ for Kafka instance is purchased and deployed. For more information, see Purchase and deploy a VPC-connected instance.
- Filebeat is downloaded and installed. For more information, see Download Filebeat.
- Java Development Kit (JDK) 8 is downloaded and installed. For more information, see Download JDK 8.
Step 1: Obtain an endpoint
Filebeat establishes a connection to ApsaraMQ for Kafka by using an ApsaraMQ for Kafka endpoint.
Log on to the ApsaraMQ for Kafka console.
In the Resource Distribution section of the Overview page, select the region where your instance is deployed.
On the Instances page, click the name of the instance that you want to connect to Filebeat as an input.
On the Instance Details page, obtain an endpoint of the instance in the Endpoint Information section. In the Configuration Information section, obtain the values of the Username parameter and Password parameter.
Note: For information about the differences among endpoints, see Comparison among endpoints.
Step 2: Create a topic
Perform the following operations to create a topic for storing messages:
Log on to the ApsaraMQ for Kafka console.
In the Resource Distribution section of the Overview page, select the region where your instance is deployed.
Important: You must create topics in the region where your application is deployed. When you create a topic, select the region where your Elastic Compute Service (ECS) instance is deployed. A topic cannot be used across regions. For example, if your message producers and consumers run on ECS instances that are deployed in the China (Beijing) region, create topics in the China (Beijing) region.
On the Instances page, click the name of the instance that you want to manage.
In the left-side navigation pane, click Topics.
On the Topics page, click Create Topic.
In the Create Topic panel, configure the parameters and click OK.
The following list describes the parameters:
- Name: The topic name. Example: demo.
- Description: The topic description. Example: demo test.
- Partitions: The number of partitions in the topic. Example: 12.
- Storage Engine: The type of the storage engine that is used to store messages in the topic. Example: Cloud Storage.
  Note: You can select the type of the storage engine only if you use a Professional Edition instance. If you use a Standard Edition instance, Cloud Storage is selected by default.
  ApsaraMQ for Kafka supports the following types of storage engines:
  - Cloud Storage: If you select this value, the system uses Alibaba Cloud disks for the topic and stores data in three replicas in distributed mode. This storage engine features low latency, high performance, durability, and high reliability. If you set the Instance Edition parameter to Standard (High Write) when you created the instance, you can set this parameter only to Cloud Storage.
  - Local Storage: If you select this value, the system uses the in-sync replicas (ISR) algorithm of open source Apache Kafka and stores data in three replicas in distributed mode.
- Message Type: The message type of the topic. Example: Normal Message. Valid values:
  - Normal Message: By default, messages that have the same key are stored in the same partition in the order in which they are sent. When a broker in the cluster fails, the order of the messages may not be preserved in the partitions. If you set the Storage Engine parameter to Cloud Storage, this parameter is automatically set to Normal Message.
  - Partitionally Ordered Message: By default, messages that have the same key are stored in the same partition in the order in which they are sent. When a broker in the cluster fails, the messages are still stored in the partitions in that order, but messages in some partitions cannot be sent until the partitions are restored. If you set the Storage Engine parameter to Local Storage, this parameter is automatically set to Partitionally Ordered Message.
- Log Cleanup Policy: The log cleanup policy that is used by the topic. Example: Compact.
  If you set the Storage Engine parameter to Local Storage, you must configure the Log Cleanup Policy parameter. You can set the Storage Engine parameter to Local Storage only if you use an ApsaraMQ for Kafka Professional Edition instance.
  ApsaraMQ for Kafka provides the following log cleanup policies:
  - Delete: The default log cleanup policy. If sufficient storage space is available in the system, messages are retained based on the maximum retention period. After the storage usage exceeds 85%, the system deletes messages, starting from the earliest stored message, to ensure service availability.
  - Compact: The Apache Kafka log compaction policy is used. For more information, see Kafka 3.4 Documentation. Log compaction ensures that the latest value is retained for each message key. This policy is suitable for scenarios such as restoring a failed system or reloading the cache after a system restarts. For example, when you use Kafka Connect or Confluent Schema Registry, you must store the information about the system status and configurations in a log-compacted topic.
  Important: You can use log-compacted topics only in specific cloud-native components such as Kafka Connect and Confluent Schema Registry. For more information, see aliware-kafka-demos.
- Tag: The tags that you want to attach to the topic. Example: demo.
After the topic is created, it is displayed on the Topics page.
Step 3: Send messages
Perform the following operations to send messages to the topic that you created:
Log on to the ApsaraMQ for Kafka console.
In the Resource Distribution section of the Overview page, select the region where your instance is deployed.
On the Instances page, click the name of the instance that you want to manage.
In the left-side navigation pane, click Topics.
On the Topics page, find the topic that you want to manage, and choose More > Send Message in the Actions column.
In the Start to Send and Consume Message panel, send and consume a test message by using one of the following methods:
- Console: Set the Method of Sending parameter to Console and perform the following operations:
  - In the Message Key field, enter the key of the message. For example, enter demo.
  - In the Message Content field, enter the content of the message. For example, enter {"key": "test"}.
  - Configure the Send to Specified Partition parameter to specify whether to send the message to a specified partition:
    - If you want to send the message to a specified partition, click Yes and enter the partition ID in the Partition ID field, such as 0. For information about how to query partition IDs, see View partition status.
    - If you do not want to send the message to a specified partition, click No.
  - Use ApsaraMQ for Kafka SDKs or run the Docker commands that are displayed in the Start to Send and Consume Message panel to consume the message.
- Docker: Set the Method of Sending parameter to Docker. Run a Docker container to produce a test message, and then consume the message:
  - Run the Docker commands that are provided in the Run the Docker container to produce a sample message section to send a test message.
  - Run the Docker commands that are provided in the How do I consume a message after the message is sent? section to consume the message.
- SDK: Set the Method of Sending parameter to SDK and click the link to the topic that describes how to obtain and use the SDK that you want to use. Then, use the SDK to send and consume a test message. ApsaraMQ for Kafka provides topics that describe how to use SDKs for different programming languages based on different connection types. A minimal Java producer sketch is also provided after this list.
Step 4: Create a group
Perform the following operations to create a group for Filebeat:
Log on to the ApsaraMQ for Kafka console.
In the Resource Distribution section of the Overview page, select the region where your instance is deployed.
On the Instances page, click the name of the instance that you want to manage.
In the left-side navigation pane, click Groups.
On the Groups page, click Create Group.
In the Create Group panel, enter the group name in the Group ID field and the group description in the Description field, attach tags to the group, and then click OK.
After the group is created, you can view the group on the Groups page.
Step 5: Use Filebeat to consume messages
Start Filebeat on the server where Filebeat is installed to consume messages from the created topic.
- Run the cd command to switch to the installation directory of Filebeat.
- Create a configuration file named input.yml.
- Run the vim input.yml command to create an empty configuration file, and press the I key to enter the insert mode.
- Enter the following content:
filebeat.inputs:
- type: kafka
  hosts:
    - alikafka-pre-cn-zv**********-1-vpc.alikafka.aliyuncs.com:9092
    - alikafka-pre-cn-zv**********-2-vpc.alikafka.aliyuncs.com:9092
    - alikafka-pre-cn-zv**********-3-vpc.alikafka.aliyuncs.com:9092
  topics: ["filebeat_test"]
  group_id: "filebeat_group"
output.console:
  pretty: true
The following list describes the parameters:
- type: The input type of Filebeat. Example: kafka.
- hosts: The VPC endpoint of the ApsaraMQ for Kafka instance. ApsaraMQ for Kafka supports the default endpoint and the SASL endpoint for VPC access. Example:
  alikafka-pre-cn-zv**********-1-vpc.alikafka.aliyuncs.com:9092
  alikafka-pre-cn-zv**********-2-vpc.alikafka.aliyuncs.com:9092
  alikafka-pre-cn-zv**********-3-vpc.alikafka.aliyuncs.com:9092
- topics: The name of the topic. Example: filebeat_test.
- group_id: The name of the group. Example: filebeat_group.
For more information about parameter settings, see Kafka input plugin.
Press the Esc key to exit the insert mode and return to the command line mode.
Enter :wq and press the Enter key to save the file and exit.
- Run the following command to consume messages:
./filebeat -c ./input.yml