
PolarDB-X Open Source | Three Replicas of MySQL Based on Paxos

This article introduces the release and features of PolarDB-X 2.3.0 and provides a quick start guide for the PolarDB-X centralized form.

By Qifeng

Overview

PolarDB-X, the distributed edition of PolarDB, is a high-performance cloud-native distributed database independently designed and developed by Alibaba. It adopts a shared-nothing, storage-compute-separated architecture and provides financial-grade data availability, distributed horizontal scalability, hybrid (HTAP) workload support, low-cost storage, and high elasticity. It is designed to be compatible with the MySQL open source ecosystem, offering high-throughput, large-storage, low-latency, easily scalable, and highly available database services for users in the cloud era.

The architecture of PolarDB-X can be simply divided into CNs (Compute nodes) responsible for SQL parsing and execution, and DNs (Data nodes) responsible for distributed transactions and high-availability storage.

In October 2023, open-source PolarDB-X officially released version 2.3.0, focusing on the PolarDB-X Standard Edition (Centralized Form), which provides the PolarDB-X DN as an independent service. PolarDB-X 2.3.0 supports the multi-replica mode of the Paxos protocol and the Lizard distributed transaction engine. The new version adopts a three-node architecture with one primary node, one secondary node, and one log node, and uses Paxos-based synchronous multi-replica replication to ensure strong data consistency (RPO=0) while remaining 100 percent compatible with MySQL. In terms of performance, with production-level deployment and parameters, including double 1 (sync_binlog=1 and innodb_flush_log_at_trx_commit=1) plus Paxos multi-replica strong synchronization, PolarDB-X demonstrates a 30 to 40 percent performance improvement over open-source MySQL 8.0.34 in mixed read/write scenarios, making it the best alternative to open-source MySQL.

The subsequent part of this article mainly introduces how to quickly get started with the centralized form of PolarDB-X (three replicas of MySQL based on Paxos).

How It Works


The working principles of PolarDB-X three replicas of MySQL based on Paxos are as follows:

  1. At any given time, the cluster has at most one leader node that is responsible for writing data, while the other nodes act as followers that participate in majority voting and data synchronization.
  2. The consensus log of the Paxos protocol integrates the original binary log content of MySQL. The leader adds consensus-related binary log events to its binary log, and followers store the consensus log in place of the traditional relay log; the SQL thread on a follower replays the log content into the data files. In essence, Paxos consensus logs ≈ MySQL binary logs (see the sketch after this list).
  3. Leader election is based on the Paxos majority mechanism: a heartbeat/election timeout is used to monitor the leader node. When the leader becomes unavailable, the followers automatically elect a new leader. Before the new leader starts serving, it waits for its SQL thread to finish replaying the existing logs, ensuring that the new leader has the latest data.

PolarDB-X three replicas of MySQL based on Paxos have the following technical features:

  1. High Performance. The replicas run in single-leader mode and provide performance comparable to MySQL semi-sync mode.
  2. RPO = 0. Paxos protocol logs integrate the original content of MySQL binary logs, and the majority synchronization mechanism ensures no data loss.
  3. Automatic HA. Based on the Paxos election and heartbeat mechanism, MySQL automatically completes node probing and HA switching, replacing the traditional external HA mechanisms of MySQL.

Quick Deployment

PolarDB-X supports different forms of rapid deployments to meet individual needs.


This article uses the RPM package deployment method, which has the fewest dependencies. Before deploying PolarDB-X Standard Edition (Centralized Form) by using RPM, you need to obtain the corresponding RPM package. You can either download a prebuilt RPM package (choose the x86 or ARM build as needed) or compile and generate it manually.

1.  Download the RPM from: https://github.com/polardb/polardbx-engine/releases/


2.  Alternatively, compile the RPM package from the source code. For more information, see Generate RPM by compiling the source code (in Chinese):

# Pull code
git clone https://github.com/polardb/polardbx-engine.git --depth 1

# Compile to generate RPM
cd polardbx-engine/rpm && rpmbuild -bb t-polardbx-engine.spec

Finally, install the RPM package:

yum install -y <the RPM you downloaded or compiled>

The installed binary file will appear in /opt/polardbx-engine/bin.
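As a quick sanity check after installation, you can print the version of the installed binary (the path follows the statement above):

# Verify the installed binary
/opt/polardbx-engine/bin/mysqld --version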

Experience the Standalone Mode

Prepare a my.cnf file (see the template in the appendix) and the data directories (if you change my.cnf, adjust the directories below accordingly), and then you are ready to start.

# Create a polarx user and switch to it
useradd -ms /bin/bash polarx
echo "polarx:polarx" | chpasswd
echo "polarx    ALL=(ALL)    NOPASSWD: ALL" >> /etc/sudoers
su - polarx

# Create the necessary directories
mkdir polardbx-engine
cd polardbx-engine && mkdir log mysql run data tmp

# Create the my.cnf file (use the template in the appendix)
vi my.cnf

# Initialize
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --initialize-insecure
# Start up
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf &

Log on to the database and verify the status.

# Log on to the database. The port is specified by my.cnf.
mysql -h127.0.0.1 -P4886 -uroot

# Query the Paxos role of the current node.
MySQL [(none)]> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 1
       CURRENT_TERM: 2
     CURRENT_LEADER: 127.0.0.1:14886
       COMMIT_INDEX: 1
      LAST_LOG_TERM: 2
     LAST_LOG_INDEX: 1
               ROLE: Leader
          VOTED_FOR: 1
   LAST_APPLY_INDEX: 0
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal
1 row in set (0.00 sec)

# Query the Paxos roles of all nodes in the cluster (only the leader node returns data).
MySQL [(none)]> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL \G
*************************** 1. row ***************************
      SERVER_ID: 1
        IP_PORT: 127.0.0.1:14886
    MATCH_INDEX: 1
     NEXT_INDEX: 0
           ROLE: Leader
      HAS_VOTED: Yes
     FORCE_SYNC: No
ELECTION_WEIGHT: 5
 LEARNER_SOURCE: 0
  APPLIED_INDEX: 0
     PIPELINING: No
   SEND_APPLIED: No
1 row in set (0.00 sec)

Because the my.cnf template is configured for standalone mode by default, only the leader status of a single replica is displayed.
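The relevant setting is the cluster_info line in the my.cnf template from the appendix, which declares a cluster whose only member is the local node, so it immediately forms a majority of one and becomes the leader:

# From the appendix my.cnf template: a single-member cluster.
cluster_info = 127.0.0.1:14886@1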

Experience the High Availability Based on Paxos

We deploy a complete centralized cluster on three machines and verify the high availability switching. Assume that the IP addresses of our three machines are as follows:

10.0.3.244
10.0.3.245
10.0.3.246

On each of the three machines, after installing the RPM, prepare my.cnf and the directories as before (if any step fails, completely clean up these directories and recreate them). Then, start the processes as follows on the three machines:

# Execute on 10.0.3.244
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@1' \
--initialize-insecure

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@1' \
&

# Execute on 10.0.3.245
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@2' \
--initialize-insecure

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@2' \
&

# Execute on 10.0.3.246
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@3' \
--initialize-insecure

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@3' \
&

Note: we pass the cluster-info configuration at startup. The format is [host1]:[port1];[host2]:[port2];[host3]:[port3]@[idx]. The only difference between machines is [idx], which indicates which host in the list the current machine is; set it according to the machine's own IP address.

In addition, a PolarDB-X replica can be started in logger mode (it stores only consensus logs and holds no data) by setting cluster-log-type-node=ON.

# For example, configure the third host as a logger node.

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-log-type-node=ON \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@3' \
&

Experience One (Start the Three Replicas)

Start the three Paxos replicas one by one. When only the first one is running, a leader cannot be elected because the Paxos majority condition is not met, so you cannot log on to the database yet.

> tail -f /home/polarx/polardbx-engine/log/alert.log
......
[ERROR] Server 1 : Paxos state change from FOLL to CAND !!
[ERROR] Server 1 : Start new requestVote: new term(2)
[ERROR] Server 1 : Paxos state change from CAND to CAND !!
[ERROR] Server 1 : Start new requestVote: new term(3)
[ERROR] Server 1 : Paxos state change from CAND to CAND !!
[ERROR] Server 1 : Start new requestVote: new term(4)
[ERROR] Server 1 : Paxos state change from CAND to CAND !!
[ERROR] Server 1 : Start new requestVote: new term(5)
...... 
# Blocked until the second node joins and a leader is elected.
[ERROR] EasyNet::onConnected server 2
[ERROR] Server 1 : Paxos state change from CAND to CAND !!
[ERROR] Server 1 : Start new requestVote: new term(6)
[ERROR] Server 1 : server 2 (term:6) vote me to became leader.
[ERROR] Server 1 : Paxos state change from CAND to LEDR !!
[ERROR] Server 1 : become Leader (currentTerm 6, lli:1, llt:6)!!

After the remaining nodes are started, log on to the database and verify the status of the cluster.

MySQL [(none)]> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 1
       CURRENT_TERM: 6
     CURRENT_LEADER: 10.0.3.244:14886
       COMMIT_INDEX: 1
      LAST_LOG_TERM: 6
     LAST_LOG_INDEX: 1
               ROLE: Leader
          VOTED_FOR: 1
   LAST_APPLY_INDEX: 0
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal

MySQL [(none)]> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL;
+-----------+------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+
| SERVER_ID | IP_PORT          | MATCH_INDEX | NEXT_INDEX | ROLE     | HAS_VOTED | FORCE_SYNC | ELECTION_WEIGHT | LEARNER_SOURCE | APPLIED_INDEX | PIPELINING | SEND_APPLIED |
+-----------+------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+
|         1 | 10.0.3.244:14886 |           1 |          0 | Leader   | Yes       | No         |               5 |              0 |             0 | No         | No           |
|         2 | 10.0.3.245:14886 |           1 |          2 | Follower | Yes       | No         |               5 |              0 |             1 | Yes        | No           |
|         3 | 10.0.3.246:14886 |           1 |          2 | Follower | No        | No         |               5 |              0 |             1 | Yes        | No           |
+-----------+------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+
3 rows in set (0.00 sec)

We can see that 10.0.3.244 is the leader of the three machines, and 10.0.3.245 and 10.0.3.246 are the followers.

Experience Two (kill -9 Switching)

Based on the Paxos three-replica mode, only the leader node can write data. We create a database and a table on the leader and write some simple data:

CREATE DATABASE db1;
USE db1;
CREATE TABLE tb1 (id int);
INSERT INTO tb1 VALUES (0), (1), (2);
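The rows written on the leader should then be readable on the followers as well. A quick check, run locally on one of the follower machines (for example, 10.0.3.245):

# Run on a follower machine, e.g. 10.0.3.245.
mysql -h127.0.0.1 -P4886 -uroot -e "SELECT * FROM db1.tb1;"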

We can also query the status of the cluster on the leader:

MySQL [db1]> SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL ;
+-----------+------------------+-------------+----------+---------------+
| SERVER_ID | IP_PORT          | MATCH_INDEX | ROLE     | APPLIED_INDEX |
+-----------+------------------+-------------+----------+---------------+
|         1 | 10.0.3.244:14886 |           4 | Leader   |             4 |
|         2 | 10.0.3.245:14886 |           4 | Follower |             4 |
|         3 | 10.0.3.246:14886 |           4 | Follower |             4 |
+-----------+------------------+-------------+----------+---------------+
3 rows in set (0.00 sec)

All three nodes report APPLIED_INDEX = 4, which means the log index is consistent across the three Paxos nodes. Next, we run kill -9 on the leader node (10.0.3.244) so that the cluster elects a new leader.

kill -9 $(pgrep -x mysqld)
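Because mysqld was started under mysqld_safe, the killed process is pulled up again automatically. A quick way to confirm this on 10.0.3.244 (the log path assumes the my.cnf template in the appendix):

# Check that mysqld has been restarted by mysqld_safe.
pgrep -x mysqld
# Watch the error log for the node rejoining the cluster.
tail -f /home/polarx/polardbx-engine/log/alert.log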

Shortly afterward, we can see that the 10.0.3.245 node has become the new leader.

# Query the status of the new leader on 10.0.3.245.
MySQL [(none)]> SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL ;
+-----------+------------------+-------------+----------+---------------+
| SERVER_ID | IP_PORT          | MATCH_INDEX | ROLE     | APPLIED_INDEX |
+-----------+------------------+-------------+----------+---------------+
|         1 | 10.0.3.244:14886 |           5 | Follower |             5 |
|         2 | 10.0.3.245:14886 |           5 | Leader   |             4 |
|         3 | 10.0.3.246:14886 |           5 | Follower |             5 |
+-----------+------------------+-------------+----------+---------------+
3 rows in set (0.00 sec)

On the original leader 10.0.3.244, the local status now shows that it has changed to a follower:

MySQL [(none)]> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 1
       CURRENT_TERM: 7
     CURRENT_LEADER: 10.0.3.245:14886
       COMMIT_INDEX: 5
      LAST_LOG_TERM: 7
     LAST_LOG_INDEX: 5
               ROLE: Follower
          VOTED_FOR: 2
   LAST_APPLY_INDEX: 5
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal

We can continue to run kill -9 on replicas to verify that the leader keeps migrating and recovering among the three nodes. Through the preceding steps, we have verified the automatic leader election and failover capability of the Paxos-based three replicas.

Experience Three (Planned Switchover Commands)

PolarDB-X provides built-in O&M commands for the Paxos three replicas. For example, query the current cluster status:

MySQL [(none)]> SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL ;
+-----------+------------------+-------------+----------+---------------+
| SERVER_ID | IP_PORT          | MATCH_INDEX | ROLE     | APPLIED_INDEX |
+-----------+------------------+-------------+----------+---------------+
|         1 | 10.0.3.244:14886 |           9 | Leader   |             8 |
|         2 | 10.0.3.245:14886 |           9 | Follower |             9 |
|         3 | 10.0.3.246:14886 |           9 | Follower |             9 |
+-----------+------------------+-------------+----------+---------------+

Command 1: Switch the leader by specifying the IP address.

call dbms_consensus.change_leader("10.0.3.245:14886");
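After the call returns, the switchover can be confirmed from any node by checking the local consensus view shown earlier:

# CURRENT_LEADER should now report 10.0.3.245:14886.
SELECT CURRENT_LEADER, ROLE FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL;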

Command 2: Query and clear consensus logs.

# Query consensus logs (PolarDB-X implements Paxos consensus logs on top of binary log files)
MySQL [(none)]> show consensus logs;
+---------------------+-----------+-----------------+
| Log_name            | File_size | Start_log_index |
+---------------------+-----------+-----------------+
| mysql-binlog.000001 |      1700 |               1 |
+---------------------+-----------+-----------------+
1 row in set (0.00 sec)

# Purge consensus logs up to the specified log index. The logs are protected: if a replica is still consuming them, they are not purged.
MySQL [(none)]> call dbms_consensus.purge_log(1);
Query OK, 0 rows affected (0.00 sec)

PolarDB-X also supports dynamically adding and removing replicas, changing node roles (learner/follower), and setting election weights:

# Add learner
call dbms_consensus.add_learner("127.0.0.1:14886");
# Drop learner
call dbms_consensus.drop_learner("127.0.0.1:14886");

# Upgrade a learner to a follower; the call fails if the learner's logs lag too far behind.
call dbms_consensus.upgrade_learner("127.0.0.1:14886");
# Downgrade a follower to a learner.
call dbms_consensus.downgrade_follower("127.0.0.1:15700");

# Modify the leader election weight [1-9] of a follower node. The default value is 5.
call dbms_consensus.configure_follower("127.0.0.1:15700", 9);

Experience Four (Simulate Offline Startup)

PolarDB-X supports starting multiple replicas from an offline state. For example, after an overall shutdown caused by a network outage or power failure, the three-replica cluster can be rebuilt offline from the local data files. For a simple simulation, we log on to the three machines and kill all mysqld processes:

kill -9 $(pgrep -x mysqld)

Then start up offline in the original directories and re-form the three-replica cluster:

# Execute on 10.0.3.244
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@1' \
&

# Execute on 10.0.3.245
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@2' \
&

# Execute on 10.0.3.246
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@3' \
&

If a real business scenario involves machine migration, you can copy the original data files to the new machines and then set --cluster-force-change-meta=ON when starting the three replicas to forcibly refresh the cluster metadata. Example:

# Forcibly refresh the metadata. After the metadata is refreshed, mysqld and mysqld_safe exit.
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-force-change-meta=ON \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@1' \
&

# Restart based on the new configuration.
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@1' \
&

Experience Paxos-Based Performance Stress Testing

We deploy Paxos multi-replicas on three machines to quickly verify PolarDB-X performance. On top of the default parameters, we tune the following PolarDB-X-related settings (most MySQL parameter tuning practices also apply):

[mysqld]

# Adjust the maximum number of connections
max_connections=20000

# Force flushing to disk on every commit (double 1)
sync_binlog=1
innodb_flush_log_at_trx_commit=1

# Optimize the replication efficiency of the follower
slave_parallel_type=LOGICAL_CLOCK
slave_parallel_workers=16

# Parameters of binary logs
binlog_order_commits=OFF
binlog_cache_size=1M
binlog_transaction_dependency_tracking=WRITESET

# Adjust the size of the InnoDB buffer pool
innodb_buffer_pool_size=20G
# Other InnoDB parameters
innodb_log_buffer_size=200M
innodb_log_file_size=2G
innodb_io_capacity=20000
innodb_io_capacity_max=40000
innodb_max_dirty_pages_pct=75
innodb_lru_scan_depth=8192
innodb_open_files=20000
innodb_undo_retention=0

# Isolation-level RC
transaction_isolation=READ-COMMITTED

# consensus
consensus_log_cache_size=512M
consensus_io_thread_cnt=8
consensus_worker_thread_cnt=8
consensus_prefetch_cache_size=256M

# timezone
default_time_zone=+08:00
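These parameters are set in my.cnf; some of them (for example, innodb_log_file_size) only take effect after a restart, so restart each node with the updated configuration. A minimal sketch for one node, assuming the port, paths, and cluster-info used in the earlier examples:

# Shut down the local node gracefully, then start it again with the updated my.cnf.
mysqladmin -h127.0.0.1 -P4886 -uroot shutdown
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='10.0.3.244:14886;10.0.3.245:14886;10.0.3.246:14886@1' \
&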

Create a user for stress testing:

CREATE USER polarx IDENTIFIED BY 'polarx';
GRANT ALL PRIVILEGES ON *.* TO 'polarx'@'%';
FLUSH PRIVILEGES;

Deploy the open-source PolarDB-X benchmark-boot stress testing tool by referring to the stress testing documentation:

# Download the image
docker pull polardbx/benchmark-boot:latest

# Start the container
docker run -itd --name 'benchmark-boot' --privileged --net=host \
    -v /etc/localtime:/etc/localtime polardbx/benchmark-boot:latest \
    /usr/sbin/init

# Verify
curl http://127.0.0.1:4121/

During stress testing, you can use the Paxos system view to monitor the data replication status:

MySQL [(none)]> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_HEALTH;
+-----------+------------------+----------+-----------+---------------+-----------------+
| SERVER_ID | IP_PORT          | ROLE     | CONNECTED | LOG_DELAY_NUM | APPLY_DELAY_NUM |
+-----------+------------------+----------+-----------+---------------+-----------------+
|         1 | 10.0.3.244:14886 | Follower | YES       |             0 |              22 |
|         2 | 10.0.3.245:14886 | Leader   | YES       |             0 |               0 |
|         3 | 10.0.3.246:14886 | Follower | YES       |             0 |              11 |
+-----------+------------------+----------+-----------+---------------+-----------------+

LOG_DELAY_NUM indicates how far binary log replication to the Paxos replicas lags behind the leader; a value close to zero means almost no replication lag. APPLY_DELAY_NUM indicates how far binary log apply lags on a replica; a value close to zero means almost no apply lag.

The stress testing environment uses three ecs.i4.8xlarge instances (32 cores, 256 GB of memory, 7 TB disks), running TPC-C with 1,000 warehouses at 200 concurrency and reaching about 240,000 tpmC. Resource usage: the leader node at around 95% CPU and the follower node at around 30% CPU (logger node below 10% CPU).

02:52:42,321 [main] INFO   jTPCC : Term-00, 
02:52:42,322 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
02:52:42,322 [main] INFO   jTPCC : Term-00,      BenchmarkSQL v5.0
02:52:42,323 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
02:52:42,323 [main] INFO   jTPCC : Term-00,  (c) 2003, Raul Barbosa
02:52:42,323 [main] INFO   jTPCC : Term-00,  (c) 2004-2016, Denis Lussier
02:52:42,324 [main] INFO   jTPCC : Term-00,  (c) 2016, Jan Wieck
02:52:42,324 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
02:52:42,324 [main] INFO   jTPCC : Term-00, 
02:52:42,324 [main] INFO   jTPCC : Term-00, db=mysql
02:52:42,324 [main] INFO   jTPCC : Term-00, driver=com.mysql.jdbc.Driver
02:52:42,324 [main] INFO   jTPCC : Term-00, conn=jdbc:mysql://10.0.3.245:4886/tpcc?readOnlyPropagatesToServer=false&rewriteBatchedStatements=true&failOverReadOnly=false&connectTimeout=3000&socketTimeout=0&allowMultiQueries=true&clobberStreamingResults=true&characterEncoding=utf8&netTimeoutForStreamingResults=0&autoReconnect=true&useSSL=false
02:52:42,324 [main] INFO   jTPCC : Term-00, user=polarx
02:52:42,324 [main] INFO   jTPCC : Term-00, 
02:52:42,324 [main] INFO   jTPCC : Term-00, warehouses=1000
02:52:42,325 [main] INFO   jTPCC : Term-00, terminals=200
02:52:42,326 [main] INFO   jTPCC : Term-00, runMins=5
02:52:42,326 [main] INFO   jTPCC : Term-00, limitTxnsPerMin=0
02:52:42,326 [main] INFO   jTPCC : Term-00, terminalWarehouseFixed=true
02:52:42,326 [main] INFO   jTPCC : Term-00, 
02:52:42,326 [main] INFO   jTPCC : Term-00, newOrderWeight=45
02:52:42,326 [main] INFO   jTPCC : Term-00, paymentWeight=43
02:52:42,326 [main] INFO   jTPCC : Term-00, orderStatusWeight=4
02:52:42,326 [main] INFO   jTPCC : Term-00, deliveryWeight=4
02:52:42,326 [main] INFO   jTPCC : Term-00, stockLevelWeight=4
02:52:42,326 [main] INFO   jTPCC : Term-00, newOrderRemotePercent=10
02:52:42,326 [main] INFO   jTPCC : Term-00, paymentRemotePercent=15
02:52:42,326 [main] INFO   jTPCC : Term-00, useStoredProcedure=false
02:52:42,326 [main] INFO   jTPCC : Term-00, 
02:52:42,327 [main] INFO   jTPCC : Term-00, resultDirectory=null
02:52:42,327 [main] INFO   jTPCC : Term-00, osCollectorScript=null
02:52:42,327 [main] INFO   jTPCC : Term-00, 
02:52:42,516 [main] INFO   jTPCC : Term-00, C value for C_LAST during load: 226
02:52:42,517 [main] INFO   jTPCC : Term-00, C value for C_LAST this run:    107
02:52:42,517 [main] INFO   jTPCC : Term-00, 
.......
02:57:43,133 [Thread-172] INFO   jTPCC : Term-00, 
02:57:43,133 [Thread-172] INFO   jTPCC : Term-00, 
02:57:43,134 [Thread-172] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 237040.65
02:57:43,134 [Thread-172] INFO   jTPCC : Term-00, Measured tpmTOTAL = 526706.43
02:57:43,134 [Thread-172] INFO   jTPCC : Term-00, Session Start     = 2023-11-21 02:52:43
02:57:43,134 [Thread-172] INFO   jTPCC : Term-00, Session End       = 2023-11-21 02:57:43
02:57:43,134 [Thread-172] INFO   jTPCC : Term-00, Transaction Count = 2633935

Summary

This article validates the startup methods for single-node and three-replica configurations of PolarDB-X through source code compilation and RPM installation. It also simulates faults using kill -9 to quickly demonstrate the automatic failover process without data loss (RPO=0). Additionally, PolarDB-X supports a variety of O&M commands and offline restarts, aligning well with the operational practices of the MySQL ecosystem. The final section revisits the results of the PolarDB-X performance white paper through a performance stress test practice. Future updates will progressively include tests related to the technical principles and performance comparisons between PolarDB-X Paxos and MySQL MGR. Stay tuned.

Appendix (A Simple my.cnf Template)

[mysqld]
basedir = /opt/polardbx-engine
log_error_verbosity = 2
default_authentication_plugin = mysql_native_password
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-binlog
binlog_format = row
binlog_row_image = FULL
master_info_repository = TABLE
relay_log_info_repository = TABLE

# change me if needed
datadir = /home/polarx/polardbx-engine/data
tmpdir = /home/polarx/polardbx-engine/tmp
socket = /home/polarx/polardbx-engine/tmp.mysql.sock
log_error = /home/polarx/polardbx-engine/log/alert.log
port = 4886
cluster_id = 1234
cluster_info = 127.0.0.1:14886@1

[mysqld_safe]
pid_file = /home/polarx/polardbx-engine/run/mysql.pid