
ApsaraMQ for Kafka:Migrate data using a tool

Last Updated: Mar 11, 2026

Migrate topics, consumer groups, and message data from a self-managed Apache Kafka cluster to an ApsaraMQ for Kafka instance using the ApsaraMQ for Kafka migration tool and Apache Kafka MirrorMaker.

The migration tool (kafka-migration-assessment.jar) handles metadata:

  • Reads topic configurations from your source ZooKeeper cluster and recreates them on the destination instance

  • Discovers consumer groups from your source Kafka cluster and recreates them on the destination instance

MirrorMaker handles message data (optional):

  • Consumes messages from the source cluster and produces them to the destination cluster

Migration workflow


The end-to-end migration follows four steps:

  1. Evaluate specifications -- Assess the instance specifications required for your workload.

  2. Purchase and deploy an instance -- Create an ApsaraMQ for Kafka instance based on the evaluation.

  3. Migrate topics and consumer groups -- Transfer metadata from the self-managed cluster to the new instance.

  4. (Optional) Migrate message data -- Replicate messages using MirrorMaker.

Before you begin

  • If your self-managed Apache Kafka cluster runs on Alibaba Cloud, purchase the ApsaraMQ for Kafka instance in the same region and deploy it in the same Virtual Private Cloud (VPC). VPC-based migration is faster and more secure than Internet-based migration.

  • This guide uses an Internet- and VPC-connected ApsaraMQ for Kafka instance as an example.

Step 1: Evaluate specifications

ApsaraMQ for Kafka provides a specification evaluation feature that recommends instance specifications based on your self-managed cluster's traffic, disk capacity, and disk type.

For instructions, see Evaluate specifications.

Step 2: Purchase and deploy an instance

Purchase an ApsaraMQ for Kafka instance that matches the evaluated specifications.

For instructions, see Purchase and deploy an Internet- and VPC-connected instance.

Step 3: Migrate topics and consumer groups

Prerequisites

Before you begin, make sure that you have:

  • Downloaded the migration tool (kafka-migration-assessment.jar).

  • Installed a Java runtime environment on the machine that runs the tool.

  • Network access from that machine to the source ZooKeeper cluster, the source Kafka cluster, and the destination instance.

Migrate topics

Topic migration reads topic metadata from the source ZooKeeper and creates matching topics on the destination ApsaraMQ for Kafka instance. The process has two phases: a precheck (dry run) and a commit (actual migration).

Phase 1: Precheck

Preview which topics will be migrated:

java -jar kafka-migration-assessment.jar TopicMigrationFromZk  \
  --sourceZkConnect 192.168.XX.XX  \
  --destAk <your-access-key-id>  \
  --destSk <your-access-key-secret>  \
  --destRegionId <your-region-id>  \
  --destInstanceId <your-instance-id>

Replace the following placeholders with your actual values:

  • 192.168.XX.XX -- IP address of the source ZooKeeper cluster. Example: 192.168.0.100

  • <your-access-key-id> -- AccessKey ID of the Alibaba Cloud account that owns the destination instance. Example: LTAI5tXxx

  • <your-access-key-secret> -- AccessKey secret of the Alibaba Cloud account that owns the destination instance. Example: xXxXxXx

  • <your-region-id> -- Region ID of the destination ApsaraMQ for Kafka instance. Example: cn-hangzhou

  • <your-instance-id> -- Instance ID of the destination ApsaraMQ for Kafka instance. Example: alikafka_post-cn-xxx

Sample output:

13:40:08 INFO - Begin to migrate topics:[test]
13:40:08 INFO - Total topic number:1
13:40:08 INFO - Will create topic:test, isCompactTopic:false, partition number:1

Review the output to confirm the topic list, partition counts, and compaction settings.
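To cross-check the precheck, you can pull the topic names out of a saved precheck log with standard shell tools and compare them against your source cluster. A minimal sketch (precheck.log is an assumed file name; the log line mirrors the sample output above):

```shell
# Simulate a saved precheck log (in practice, redirect the tool's output here).
printf '13:40:08 INFO - Will create topic:test, isCompactTopic:false, partition number:1\n' > precheck.log

# Extract the topic names for comparison against the source cluster.
grep -o 'Will create topic:[^,]*' precheck.log | cut -d: -f2
# prints: test
```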

Phase 2: Commit

Append the --commit flag to run the actual migration:

java -jar kafka-migration-assessment.jar TopicMigrationFromZk  \
  --sourceZkConnect 192.168.XX.XX  \
  --destAk <your-access-key-id>  \
  --destSk <your-access-key-secret>  \
  --destRegionId <your-region-id>  \
  --destInstanceId <your-instance-id>  \
  --commit

Sample output:

13:51:12 INFO - Begin to migrate topics:[test]
13:51:12 INFO - Total topic number:1
13:51:13 INFO - cmd=TopicMigrationFromZk, request=null, response={"code":200,"requestId":"7F76C7D7-AAB5-4E29-B49B-CD6F1E0F508B","success":true,"message":"operation success"}
13:51:13 INFO - TopicCreate success, topic=test, partition number=1, isCompactTopic=false

A "code":200 response with "success":true confirms the topic was created on the destination instance.

Migrate consumer groups

Consumer group migration requires a Kafka consumer properties file to read consumer offsets from the source cluster.

Step 1: Create a kafka.properties file

Create a file named kafka.properties with the following content:

# Endpoint of the self-managed Apache Kafka cluster
bootstrap.servers=localhost:9092

# Consumer group ID. Use a group that has no existing offset data
# to make sure consumption starts from the first message.
group.id=XXX

## If the source cluster requires authentication, uncomment
## and configure the following parameters:

# SASL authentication mechanism
#sasl.mechanism=PLAIN

# Security protocol
#security.protocol=SASL_SSL

# SSL truststore path
#ssl.truststore.location=/path/to/kafka.client.truststore.jks

# SSL truststore password
#ssl.truststore.password=***

# SASL JAAS configuration path
#java.security.auth.login.config=/path/to/kafka_client_jaas.conf
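Before running the migration tool, a quick sanity check can confirm that the two required keys are set in kafka.properties. A sketch (the file is recreated inline only to keep the example self-contained):

```shell
# Minimal kafka.properties for illustration.
cat > kafka.properties <<'EOF'
bootstrap.servers=localhost:9092
group.id=XXX
EOF

# Count the required keys; expect 2 (bootstrap.servers and group.id).
grep -c -E '^(bootstrap\.servers|group\.id)=' kafka.properties
# prints: 2
```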

Step 2: Precheck

Preview which consumer groups will be migrated:

java -jar kafka-migration-assessment.jar ConsumerGroupMigrationFromTopic  \
  --propertiesPath /usr/local/kafka_2.12-2.4.0/config/kafka.properties  \
  --destAk <your-access-key-id>  \
  --destSk <your-access-key-secret>  \
  --destRegionId <your-region-id>  \
  --destInstanceId <your-instance-id>

The parameters are:

  • --propertiesPath -- Absolute path to the kafka.properties file

  • --destAk -- AccessKey ID of the destination account

  • --destSk -- AccessKey secret of the destination account

  • --destRegionId -- Region ID of the destination instance

  • --destInstanceId -- Instance ID of the destination instance

Sample output:

15:29:45 INFO - Will create consumer groups:[XXX, test-consumer-group]

Step 3: Commit

Append the --commit flag to run the actual migration:

java -jar kafka-migration-assessment.jar ConsumerGroupMigrationFromTopic  \
  --propertiesPath /usr/local/kafka_2.12-2.4.0/config/kafka.properties  \
  --destAk <your-access-key-id>  \
  --destSk <your-access-key-secret>  \
  --destRegionId <your-region-id>  \
  --destInstanceId <your-instance-id>  \
  --commit

Sample output:

15:35:51 INFO - cmd=ConsumerGroupMigrationFromTopic, request=null, response={"code":200,"requestId":"C9797848-FD4C-411F-966D-0D4AB5D12F55","success":true,"message":"operation success"}
15:35:51 INFO - ConsumerCreate success, consumer group=XXX
15:35:57 INFO - cmd=ConsumerGroupMigrationFromTopic, request=null, response={"code":200,"requestId":"3BCFDBF2-3CD9-4D48-92C3-385C8DBB9709","success":true,"message":"operation success"}
15:35:57 INFO - ConsumerCreate success, consumer group=test-consumer-group

Monitor migration progress

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the Resource Distribution section of the Overview page, select the region of your destination instance.

  3. In the left-side navigation pane, click Migration, then click the Metadata Import tab.

  4. Find your destination instance and check the migration status for topics and consumer groups.

Verify migrated topics and consumer groups

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the Resource Distribution section of the Overview page, select the region of your destination instance.

  3. In the left-side navigation pane, click Instances, then click the instance name.

  4. Verify that the migrated resources exist:

    • Click Topics in the left-side navigation pane to confirm all topics are present.

    • Click Groups in the left-side navigation pane to confirm all consumer groups are present.

(Optional) Step 4: Migrate message data

If you need to replicate historical or live message data, use Apache Kafka MirrorMaker. MirrorMaker runs a built-in consumer to read messages from the source cluster and a built-in producer to write them to the destination cluster. For more information, see Geo-replication (mirroring data between clusters) in the Apache Kafka documentation.

(Figure: MirrorMaker architecture)

Prerequisites

Before you begin, make sure that you have:

  • Installed Apache Kafka on a machine that can reach both the source cluster and the destination instance. MirrorMaker ships with the Apache Kafka distribution.

  • Completed Step 3, so that the topics to be replicated already exist on the destination instance.

Constraints

Review the following constraints before starting data migration:

  • Topic names -- Must match between source and destination.

  • Partition count -- Can differ between source and destination.

  • Partition assignment -- Messages from the same source partition may land in a different partition on the destination.

  • Key-based routing -- Messages with the same key are still routed to the same destination partition by default.

  • Message ordering -- Normal (unkeyed) messages may arrive out of order during node failures. Partitionally ordered messages can retain their order.

  • Authentication -- If both clusters use password-based authentication, the passwords must match; mismatched passwords are not supported.

Option A: Migrate over the Internet

Use this option when the source cluster and destination instance are in different networks.

Step 1: Download the SSL certificate: mix.4096.client.truststore.jks.

Step 2: Create a kafka_client_jaas.conf file:

KafkaClient {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="<your-username>"
   password="<your-password>";
};

Step 3: Create a consumer.properties file for the source cluster:

# Endpoint of the self-managed Apache Kafka cluster
bootstrap.servers=XXX.XXX.XXX.XXX:9092

# Partition assignment strategy
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor

# Consumer group name
group.id=test-consumer-group

Step 4: Create a producer.properties file for the destination instance:

# SSL endpoint of the ApsaraMQ for Kafka instance.
# Get this from the ApsaraMQ for Kafka console.
bootstrap.servers=XXX.XXX.XXX.XXX:9093

# Data compression method
compression.type=none

# SSL truststore configuration (use the certificate from Step 1)
ssl.truststore.location=mix.4096.client.truststore.jks
ssl.truststore.password=KafkaOnsClient
security.protocol=SASL_SSL
sasl.mechanism=PLAIN

# Required only for ApsaraMQ for Kafka 2.x instances
ssl.endpoint.identification.algorithm=

Step 5: Set the JAAS configuration path:

export KAFKA_OPTS="-Djava.security.auth.login.config=kafka_client_jaas.conf"

Step 6: Start MirrorMaker:

sh bin/kafka-mirror-maker.sh \
  --consumer.config config/consumer.properties \
  --producer.config config/producer.properties \
  --whitelist topicName

Replace topicName with the name of the topic to migrate. Use a regular expression to match multiple topics (for example, topic1|topic2 or .* for all topics).
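The --whitelist value is treated as a regular expression. As a local illustration of how such a pattern selects topic names (using grep -E as a stand-in for MirrorMaker's matching; this substitution is for demonstration only):

```shell
# The alternation pattern matches topic1 and topic2 but not other topics.
printf 'topic1\ntopic2\nother-topic\n' | grep -E '^(topic1|topic2)$'
# prints: topic1
#         topic2
```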

Option B: Migrate over a VPC

Use this option when the source cluster and destination instance are in the same VPC. No SSL or SASL configuration is required.

Step 1: Create a consumer.properties file for the source cluster:

# Endpoint of the self-managed Apache Kafka cluster
bootstrap.servers=XXX.XXX.XXX.XXX:9092

# Partition assignment strategy
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor

# Consumer group name
group.id=test-consumer-group

Step 2: Create a producer.properties file for the destination instance:

# Default endpoint of the ApsaraMQ for Kafka instance.
# Get this from the ApsaraMQ for Kafka console.
bootstrap.servers=XXX.XXX.XXX.XXX:9092

# Data compression method
compression.type=none

Step 3: Start MirrorMaker:

sh bin/kafka-mirror-maker.sh \
  --consumer.config config/consumer.properties \
  --producer.config config/producer.properties \
  --whitelist topicName

Replace topicName with the name of the topic to migrate. Use a regular expression to match multiple topics (for example, topic1|topic2 or .* for all topics).

Verify message replication

Use either of the following methods to confirm MirrorMaker is working:

  • Check consumer lag on the source cluster. Run the following command to verify that MirrorMaker is consuming messages:

      bin/kafka-consumer-groups.sh --describe \
        --bootstrap-server <source-cluster-endpoint> \
        --group test-consumer-group

    A decreasing LAG value confirms that MirrorMaker is actively consuming and forwarding messages.

  • Send a test message and verify it arrives. Produce a message to a topic on the source cluster, then check the partition status of that topic in the ApsaraMQ for Kafka console. Verify that the total message count increases and the message content is correct. For details, see Query messages.

Cut over to the new cluster

After metadata and message data are migrated, switch traffic from the self-managed cluster to the ApsaraMQ for Kafka instance:

  1. Start new consumers. Connect new consumer groups to the ApsaraMQ for Kafka instance and begin consuming messages.

  2. Switch producers. Point new producers to the ApsaraMQ for Kafka instance and shut down the original producers. Keep the original consumer groups running on the self-managed cluster to drain remaining messages.

  3. Decommission the old cluster. After the original consumer groups have consumed all remaining messages on the self-managed cluster, shut them down and decommission the self-managed cluster.
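Before decommissioning, you can verify that every partition on the old cluster reports zero lag by parsing the output of kafka-consumer-groups.sh. A sketch under an assumed column layout (lag.txt and the five-column format are illustrative; check your tool version's actual columns):

```shell
# Sample describe output saved to lag.txt; column 5 is LAG in this layout.
printf 'TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG\ntest 0 100 100 0\n' > lag.txt

# Report "drained" only if no data row shows a nonzero LAG.
awk 'NR > 1 && $5 != 0 { bad = 1 } END { print (bad ? "lag remains" : "drained") }' lag.txt
# prints: drained
```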