
Redis Data Synchronization with RDS MySQL Using Canal & Kafka

This tutorial shows you how to synchronize Redis data with ApsaraDB RDS for MySQL using Canal and Kafka.


You can access all the tutorial resources, including the deployment script (Terraform), related source code, sample data, and instructions, from the GitHub project: https://github.com/alibabacloud-howto/solution-mysql-redis-canal-kafka-sync

For more tutorials around Alibaba Cloud Database, please refer to: https://github.com/alibabacloud-howto/database

Project URL

https://github.com/alibabacloud-howto/solution-mysql-redis-canal-kafka-sync

Architecture Overview


Experimental Scenario Introduction

This document uses Kafka and Canal to synchronize data from MySQL to Redis. The overall architecture is described below.


From the architecture, we can see the components that need to be used: MySQL, Canal, Kafka, and Redis.

Except for Canal, which needs to be deployed manually on an ECS instance, these components will be created with Terraform.

How Canal Works

  1. Canal simulates the interaction protocol of a MySQL replica: it pretends to be a MySQL slave and sends the dump protocol to the MySQL master.
  2. The MySQL master receives the dump request and starts to push its binary log to the slave (that is, Canal).
  3. Canal parses the binary log events (originally byte streams) and sends them to storage destinations such as MySQL, Kafka, or Elasticsearch (an example of the message format sent to Kafka is shown below).
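In kafka mode (configured later in this tutorial), Canal publishes each parsed binlog event to the Kafka topic as a flat JSON message. For illustration only (the exact set of fields depends on the Canal version), an INSERT into the user table created later in this tutorial produces a message along these lines:

{
  "data": [{"id": "19832", "username": "name1", "password": "ddsdfdfd", "addr": "addr1", "phone": "17138141002", "nickname": "nickname1"}],
  "database": "labex",
  "table": "user",
  "type": "INSERT",
  "isDdl": false,
  "ts": 1650000000000
}

A downstream consumer only needs the database, table, type, and data fields to decide what to write to Redis.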

Use Terraform to Create Resources & Set up Environment

Install Terraform (skip this step if you already have Terraform installed)

Run the following command to update the apt package index. (This tutorial uses Ubuntu 16.04.)

apt update


Run the following command to install the unpacking tool:

apt install -y unzip zip


Run the following command to download the Terraform installation package:

wget http://labex-ali-data.oss-us-west-1.aliyuncs.com/terraform/terraform_0.14.6_linux_amd64.zip


Run the following command to unpack the Terraform installation package to /usr/local/bin:

unzip terraform_0.14.6_linux_amd64.zip -d /usr/local/bin/


Create Resources

Go back to the Alibaba Cloud console and click AccessKey Management under your account menu.


Click Create AccessKey. After the AccessKey has been created, the AccessKey ID and AccessKey Secret are displayed. The AccessKey Secret is only displayed once, so click Download CSV File to save it.


Go back to the ECS command line and enter the following command:

mkdir -p terraform && cd terraform


Enter the command vim main.tf, copy the content of https://github.com/alibabacloud-howto/solution-mysql-redis-canal-kafka-sync/blob/master/deployment/terraform/main.tf into the file, then save and exit. Make sure to replace YOUR-ACCESS-ID and YOUR-ACCESS-KEY with your own AccessKey ID and AccessKey Secret.


Enter the following command to initialize the Terraform working directory:

terraform init


Enter the following command to preview the resources that will be created:

terraform plan


Enter the following command to create the resources:

terraform apply


Enter "yes" to start creating related resources. It takes about 10 minutes, please wait patiently.


Created successfully.


Install MySQL Client

Go back to the ECS console; you can see the two ECS instances that were just created. First, log in remotely to the instance named "labex".


ssh root@<labex-ECS-public-IP>

The default account name and password of the ECS instance:

Account name: root

Password: Aliyun-test

Enter the following command to install the MySQL client.

apt update && apt -y install mysql-client


Install Redis Client

Enter the following command to download the Redis installation package.

wget https://labex-ali-data.oss-us-west-1.aliyuncs.com/redis/redis-5.0.12.tar.gz


Enter the following command to decompress the installation package.

tar -xzf redis-5.0.12.tar.gz


Enter the following command to compile Redis.

cd redis-5.0.12 && make 


Enter the command vim /etc/profile, add the following line to the end of the file, then save and exit.

vim /etc/profile
export PATH=$PATH:/root/redis-5.0.12/src


Enter the following command to make the modification effective.

source /etc/profile


Enter the following command.

redis-cli --help


You can see that redis-cli has been installed.

Install JDK

Enter the following command to download the JDK installation package.

cd && wget https://labex-ali-data.oss-accelerate.aliyuncs.com/spark-analysis/jdk-8u181-linux-x64.tar.gz


Applications on Linux are generally installed in the /usr/local directory. Enter the following command to extract the downloaded installation package into /usr/local.

tar -zxf jdk-8u181-linux-x64.tar.gz -C /usr/local/


Enter the vim /etc/profile command to open the file, and then add the following lines to the end of it.

vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$PATH:$JAVA_HOME/bin


Input the source /etc/profile command to make your changes take effect.

source /etc/profile

Execute the command java -version to verify the JDK installation.

java -version


Kafka Environment Preparation

Go to the Alibaba Cloud Kafka console and you can see the Kafka instance created by Terraform just now.


Click the instance name to view the connection address of the Kafka instance.


RDS Environment Preparation

Go back to the Alibaba Cloud RDS console; you can see the RDS instance that Terraform just created.


You can see the database accounts that have been created.


You can also see the database that has been created.


Note the intranet (internal network) address of the database; it will be used to connect from ECS.


Go back to the ECS command line and enter the following command to connect to the RDS database. Make sure to replace YOUR-RDS-PRIVATE-ADDR with your own RDS intranet address.

mysql -hYOUR-RDS-PRIVATE-ADDR -ulabex -pAliyun-test


Enter the following command to create a table in the "labex" database.

use labex;

create table user(
    id          varchar(5),
    username    varchar(20),
    password    varchar(20),
    addr        varchar(40),
    phone       varchar(11),
    nickname    varchar(12)
);


Enter the following command to view the current binlog status. Note down the binlog file name and position shown in the output (illustrated below); they will be used when configuring Canal later.

show master status;
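The output looks roughly like the following (your file name and position will differ; the values below are the ones reused in the Canal configuration later in this tutorial):

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 |   181545 |              |                  |
+------------------+----------+--------------+------------------+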


Enter the exit command to exit the database.


Install Canal

Enter the following command to download the canal installation package.

cd && wget https://labex-ali-data.oss-us-west-1.aliyuncs.com/canal/canal.deployer-1.1.5.tar.gz


Enter the following commands to create a canal directory and extract the Canal installation package into it.

mkdir canal

tar -zxvf canal.deployer-1.1.5.tar.gz -C canal


Enter the following command to view the files in the canal directory.

cd canal && ls 


Enter the command vim conf/canal.properties and modify the relevant configuration as shown below. Make sure to replace YOUR-KAFKA-ADDR with your own Kafka connection address.

vim conf/canal.properties
# Options: tcp, kafka, RocketMQ; kafka mode is used here
canal.serverMode = kafka
# Number of parallel parser threads; without this setting, parsing may block or not proceed
canal.instance.parser.parallelThreadSize = 16
# Service address of the MQ, i.e. the address and port of the Kafka instance
kafka.bootstrap.servers = YOUR-KAFKA-ADDR
# Configured instances; each one must have a directory of the same name under conf, and multiple instances can be configured
canal.destinations = example


Enter the command vim conf/example/instance.properties and modify the relevant configuration as shown below. Make sure to replace YOUR-RDS-ADDR with your own RDS connection address.

vim conf/example/instance.properties
## mysql serverId; v1.0.26+ will auto-generate it
# canal.instance.mysql.slaveId=0

# position info
canal.instance.master.address=YOUR-RDS-ADDR
# Use the binlog file and position from the SHOW MASTER STATUS output above
canal.instance.master.journal.name=mysql-bin.000003
canal.instance.master.position=181545
# database account and password
canal.instance.dbUsername=labex
canal.instance.dbPassword=Aliyun-test
canal.instance.connectionCharset = UTF-8
# MQ topic name
canal.mq.topic=canaltopic
# Partition index in single-partition mode
canal.mq.partition=0


Enter the following command to start the canal service.

bin/startup.sh


Go back to the Alibaba Cloud Kafka console and view the topic information.


You can see that messages have started to arrive on the Kafka topic, indicating that Canal is synchronizing the RDS binlog data to Kafka.


Go back to the ECS command line and enter the following command to download a sample jar package, which is responsible for synchronizing the data from Kafka to Redis.

cd && wget https://labex-ali-data.oss-us-west-1.aliyuncs.com/canal/canal-0.0.1-SNAPSHOT.jar

Alternatively, you can build canal-0.0.1-SNAPSHOT.jar from the source code at https://github.com/alibabacloud-howto/solution-mysql-redis-canal-kafka-sync/tree/master/source:

mvn clean package assembly:single -DskipTests


Enter the following command to start the synchronization process. Make sure to replace YOUR-KAFKA-ADDR and YOUR-REDIS-ADDR with your own Kafka and Redis connection addresses.

java -cp canal-0.0.1-SNAPSHOT.jar canal.SyncKafkaRedis YOUR-KAFKA-ADDR topic1 group1 YOUR-REDIS-ADDR Aliyun-test

For example:

java -cp canal-0.0.1-SNAPSHOT.jar canal.SyncKafkaRedis 172.16.4.16:9092,172.16.4.14:9092,172.16.4.15:9092 topic1 group1 r-3nsa4cc5c3d04814.redis.rds.aliyuncs.com Aliyun-test


You can see that data is being synchronized and that the messages consumed from Kafka are printed.

The messages consumed here are the RDS binlog events that Canal has synchronized to Kafka. Messages for the default "mysql" system database are simply printed by the sample jar, while messages for the target database "labex" are applied to Redis, updating its data. A sketch of this logic is shown below.
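For reference, the following is a minimal, hypothetical Java sketch of what such a Kafka-to-Redis synchronizer does. It is not the actual source of canal-0.0.1-SNAPSHOT.jar (see the repository linked above for that); it assumes the kafka-clients, Jedis, and Gson libraries and Canal's flat JSON message format:

import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import redis.clients.jedis.Jedis;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SyncKafkaRedisSketch {
    // args: <kafka-bootstrap-servers> <topic> <consumer-group> <redis-host> <redis-password>
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", args[0]);
        props.put("group.id", args[2]);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Jedis jedis = new Jedis(args[3], 6379)) {
            jedis.auth(args[4]);
            consumer.subscribe(Collections.singletonList(args[1]));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Print every consumed message, as the sample jar does
                    System.out.println(record.value());
                    JsonObject msg = JsonParser.parseString(record.value()).getAsJsonObject();
                    // Only events on the labex.user table are applied to Redis
                    if (!"labex".equals(msg.get("database").getAsString())
                            || !"user".equals(msg.get("table").getAsString())) {
                        continue;
                    }
                    JsonElement data = msg.get("data");
                    if (data == null || data.isJsonNull()) {
                        continue; // e.g. DDL events carry no row data
                    }
                    String type = msg.get("type").getAsString(); // INSERT, UPDATE or DELETE
                    for (JsonElement e : data.getAsJsonArray()) {
                        JsonObject row = e.getAsJsonObject();
                        String id = row.get("id").getAsString();
                        if ("DELETE".equals(type)) {
                            jedis.del(id);                 // deleted rows disappear from Redis
                        } else {
                            jedis.set(id, row.toString()); // store the row as a JSON string keyed by id
                        }
                    }
                }
            }
        }
    }
}

Stored this way, every row of the user table ends up in Redis as a JSON string keyed by its id, which is the layout the web demo at the end of this tutorial reads back.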

Data Synchronization Test

Next, open two new ECS command-line sessions.

The session that is running the synchronization process is called command line 1.

The newly opened sessions are called command line 2 and command line 3, respectively.

Insert Data

On the command line 2,

Enter the following command to log in to the RDS database. Make sure to replace YOUR-RDS-PRIVATE-ADDR with your own RDS intranet address.

mysql -ulabex -hYOUR-RDS-PRIVATE-ADDR -pAliyun-test -D labex

For example:

mysql -ulabex -hrm-3ns7ry11cc5qiq5nj.mysql.rds.aliyuncs.com -pAliyun-test -D labex


Enter the following command to insert data into the user table.

insert into user values("19832", "name1", "ddsdfdfd", "addr1", "17138141002", "nickname1");
insert into user values("20122", "name2", "xdfdsafd", "addr2", "13877686321", "nickname2");


On the command line 1,

You can see the two records that were just inserted being synchronized.


On the command line 3,

Enter the following command to check whether the data is in Redis. Make sure to replace YOUR-REDIS-ADDR with your own Redis connection address.

redis-cli -h YOUR-REDIS-ADDR -a Aliyun-test

For example:

redis-cli -h r-3nsa4cc5c3d04814.redis.rds.aliyuncs.com -a Aliyun-test


You can see that there are already two keys in Redis.

Enter the following commands; you can see that the data has been synchronized successfully.

get 19832

get 20122


Update Data

Next we update the data in RDS.

On the command line 2,

Enter the following command to update the data with id = "19832".

update user set username = "nanzhao" where id = "19832";


On the command line 3,

Enter the following command; you can see that the data has been updated.

get 19832


Delete Data

Next we delete the data in RDS.

On the command line 2,

Enter the following command to delete the data with id = "19832".

delete from user where id = "19832";


On the command line 3,

Enter the following command; you can see that the Redis key no longer exists, indicating that the synchronization is successful.

get 19832


Demo: Rendering the Redis Data in a Web App

Install the Apache Environment

Next, remotely log in to the "labex2" instance.


ssh root@<labex2-ECS-public-IP>

The default account name and password of the ECS instance:

Account name: root

Password: Aliyun-test

Enter the following command to install Apache and python3-pip.

apt update && apt install -y apache2 python3-pip


Enter the following commands to install the Python Redis client.

export LC_ALL=C

pip3 install redis


Run the following command to create a folder:

mkdir /var/www/python


Run the following commands to disable the event MPM and enable the prefork MPM and the CGI module:

a2dismod mpm_event

a2enmod mpm_prefork cgi


Run the vim /etc/apache2/sites-enabled/000-default.conf command to open the Apache configuration file. Replace all the contents of the file with the following configuration. Save the settings and exit.

vim /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:80>
        DocumentRoot /var/www/python
        <Directory /var/www/python>
                Options +ExecCGI
                DirectoryIndex showRedis.py
        </Directory>
        AddHandler cgi-script .py
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>


Run the vim /var/www/python/showRedis.py command to create a new file. Copy the following content into the file, then save and exit. Replace YOUR-REDIS-ADDR with the address of your Redis instance.

vim /var/www/python/showRedis.py
#!/usr/bin/python3

import cgitb
import redis
import json

# Connect to the Redis instance that Canal/Kafka keeps in sync with RDS
r = redis.StrictRedis(host='YOUR-REDIS-ADDR', port=6379, db=0, password='Aliyun-test')

cgitb.enable()
print("Content-Type: text/html;charset=utf-8")
print()

# Each key is a user id; each value is the row stored as a JSON string
keys = r.keys()

print("<h1>Data is fetched from Redis:</h1>")
print("<table border=1><tr><td><b>id</b></td><td><b>username</b></td><td><b>password</b></td><td><b>phone</b></td><td><b>addr</b></td></tr>")

for key in keys:
    print("<tr>")
    dic1 = json.loads(r.get(key).decode("utf-8"))
    print("<td>", dic1.get("id", "null"), '</td>')
    print("<td>", dic1.get("username", "null"), '</td>')
    print("<td>", dic1.get("password", "null"), '</td>')
    print("<td>", dic1.get("phone", "null"), '</td>')
    print("<td>", dic1.get("addr", "null"), '</td>')
    print("</tr>")
print("</table><br/><br/>")


Run the following command to grant the file execution permission:

chmod 755 /var/www/python/showRedis.py


Run the following command to restart Apache to make the preceding configurations take effect:

service apache2 restart


Access the page from a browser, for example http://<labex2-ECS-public-IP>/showRedis.py. Make sure to replace the address with your own "labex2" ECS public IP address.


Next, repeat the insert, update, and delete operations from the Data Synchronization Test section in MySQL, then refresh the browser to see the changes reflected in the Redis data.

