This topic describes Multi-master Cluster (Database/Table) Edition.

With the growth of PolarDB for MySQL customers, especially tier-1 customers, the original PolarDB for MySQL architecture, which consists of one primary node and multiple read-only nodes, can no longer provide the write performance that large-scale business requires.

Therefore, PolarDB for MySQL provides the multi-master (database/table) architecture, which contains multiple primary nodes and read-only nodes. The new architecture is suitable for scenarios with highly concurrent reads and writes, such as multitenancy in SaaS, gaming, and e-commerce.

The following figure shows the architecture of Multi-master Cluster (Database/Table) Edition.

All data files in a cluster are stored in PolarStore, and all primary nodes share the data files through PolarFileSystem. You can access all nodes in a cluster by using the cluster endpoint. PolarProxy automatically forwards each SQL statement to the primary node that serves the target database.
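The routing behavior described above can be sketched as follows. This is an illustrative simulation, not the actual PolarProxy implementation; all names are hypothetical. The key idea is that write scale-out is at the database level, so each statement is routed to the primary node that currently owns the target database.

```python
# Hypothetical database-to-primary-node ownership map. In a real cluster,
# PolarProxy maintains this mapping and updates it when a database is
# switched to another primary node.
ROUTING_TABLE = {
    "tenant_a": "primary-node-1",
    "tenant_b": "primary-node-2",
    "tenant_c": "primary-node-2",
}

def route(database: str) -> str:
    """Return the primary node that currently serves the given database."""
    try:
        return ROUTING_TABLE[database]
    except KeyError:
        raise ValueError(f"unknown database: {database}")

print(route("tenant_a"))  # -> primary-node-1
print(route("tenant_b"))  # -> primary-node-2
```

Because the mapping is per database, an application that always connects through the cluster endpoint does not need to know which primary node owns which database.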

Core advantages

  • Write scale-out in seconds

    A cluster supports concurrent writes to databases on up to 16 compute nodes. A database can be dynamically switched to another node within seconds, which improves the overall concurrent read and write capabilities of the cluster.

  • Multi-master backup (no read-only nodes)

    If a primary node fails, the workload fails over to another primary node with low traffic within seconds. Costs are halved because no additional idle resources are deployed for hot standby.
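The failover advantage above can be sketched as a small simulation. This is an assumed model, not PolarDB internals: when a primary node fails, its databases are switched to the surviving primary node with the lowest current load, so no idle hot-standby node is needed.

```python
def pick_failover_target(node_load: dict[str, float], failed: str) -> str:
    """Pick the least-loaded surviving primary node as the failover target.

    node_load maps each primary node to a load ratio in [0, 1] (hypothetical
    metric for illustration).
    """
    survivors = {node: load for node, load in node_load.items() if node != failed}
    if not survivors:
        raise RuntimeError("no surviving primary node available")
    return min(survivors, key=survivors.get)

load = {"primary-1": 0.85, "primary-2": 0.30, "primary-3": 0.55}
print(pick_failover_target(load, failed="primary-1"))  # -> primary-2
```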


Scenarios

Multi-master Cluster (Database/Table) Edition is suitable for scenarios that feature highly concurrent read and write requests, such as multitenancy in software as a service (SaaS), gaming, and e-commerce.
  • Multitenancy in SaaS: high concurrency and load balance between tenants

    Scenario: The number of tenant databases changes rapidly, and loads fluctuate substantially. Users must schedule database resources among different instances to deliver an optimal experience.

    Solution: Multi-master Cluster (Database/Table) Edition lets customers switch tenant databases between primary nodes or add primary nodes to handle burst traffic. This balances loads.

  • Global gaming server and e-commerce scenarios: scaling in minutes to handle fast-growing business traffic

    Scenario: A middleware-based or business-based database and table sharding solution is often used. Cluster capacity must be scaled out rapidly for version updates and major promotions, and scaled back in quickly when they end. However, scaling a traditional cluster involves complex data migration steps.

    Solution: The scale-out in seconds and transparent routing features of Multi-master Cluster (Database/Table) Edition can be combined with the middleware-based or business-based database and table sharding solution to shorten the scale-out process from several days to several minutes.

  • Gaming applications deployed on different servers: better performance and scalability

    Scenario: During the growth period of a game, database loads are heavy and keep increasing as the number of databases grows, so the loads on primary nodes also increase. During the decline period, database loads drop significantly and databases are merged, so the loads on primary nodes decrease.

    Solution: During the growth period, you can switch some databases to new primary nodes to implement load balance. During the decline period, you can aggregate databases to a few primary nodes to reduce operating costs.
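The scale-out and scale-in pattern described in the scenarios above can be sketched as a reassignment of databases to primary nodes. This is an illustrative model with hypothetical names: because data files are shared on PolarStore, moving a database to another primary node changes ownership rather than migrating data.

```python
def rebalance(databases: list[str], nodes: list[str]) -> dict[str, str]:
    """Spread databases round-robin across the given primary nodes."""
    if not nodes:
        raise ValueError("at least one primary node is required")
    return {db: nodes[i % len(nodes)] for i, db in enumerate(databases)}

dbs = [f"db{i}" for i in range(8)]

# Growth period: scale out, spreading the databases over 4 primary nodes.
print(rebalance(dbs, ["p1", "p2", "p3", "p4"]))

# Decline period: consolidate all databases onto a single primary node
# to reduce operating costs.
print(rebalance(dbs, ["p1"]))
```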

Performance improvement

Tests show that the overall concurrent read and write capabilities of a cluster increase almost linearly as the databases in the cluster are switched to more primary nodes. The following example describes a stress test:
  • Test background: The cluster contains eight databases and eight primary nodes.
  • Test procedure: At the beginning of the test, the eight databases share one primary node, and the same stress test is run on all databases at the same time. During the test, the eight databases are scheduled to two primary nodes, then four primary nodes, and then eight primary nodes. Observe how the overall performance of the cluster changes.
  • The following figure shows how the QPS changes during the test.

The preceding figure shows that as databases are scheduled to more primary nodes, the overall concurrent read and write capabilities of the cluster improve significantly and increase almost linearly.
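A toy model makes the near-linear trend plausible. The numbers and function names below are illustrative assumptions, not measured results: if each primary node contributes a roughly fixed write capacity and the eight databases spread the load evenly, total throughput grows with the node count.

```python
PER_NODE_QPS = 10_000  # assumed capacity of a single primary node (illustrative)

def cluster_qps(num_nodes: int, num_databases: int = 8) -> int:
    """Aggregate QPS when databases are spread evenly over num_nodes nodes.

    A node contributes capacity only if it owns at least one database, so
    throughput stops growing once every database has its own primary node.
    """
    usable_nodes = min(num_nodes, num_databases)
    return PER_NODE_QPS * usable_nodes

# Mirror the test above: schedule 8 databases to 1, 2, 4, and 8 primary nodes.
for n in (1, 2, 4, 8):
    print(n, cluster_qps(n))
```

In this model, doubling the number of primary nodes doubles the aggregate QPS until the number of nodes reaches the number of databases, which matches the shape of the measured curve.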

Supported kernel versions

Only PolarDB for MySQL 8.0 supports Multi-master Cluster (Database/Table) Edition.

Node specifications and pricing

Multi-master Cluster (Database/Table) Edition supports Dedicated and General-purpose specifications. For more information, see Specifications of compute nodes.

For more information about the billing of Multi-master Cluster (Database/Table) Edition, see Billable items.


For more information, see Usage.