This topic describes the technical architecture of global database networks (GDNs).
- A GDN consists of one primary cluster and multiple secondary clusters. Data is synchronized
among PolarDB clusters in each GDN.
Note A GDN can contain one primary cluster and up to four secondary clusters. To add more secondary clusters, submit a ticket for technical support.
- By default, each cluster in a GDN contains two nodes. You can add up to 16 nodes. For more information, see .
Low-latency synchronization across regions
GDNs use an asynchronous replication mechanism to replicate data across regions. To reduce the latency of cross-region replication between the primary and secondary clusters, GDNs use technologies such as physical log-based replication and parallel apply. Data is synchronized between clusters with a latency of less than 2 seconds. This way, read requests from applications in non-central regions can be served with minimal latency. Creating cross-region secondary clusters and synchronizing data to them does not affect the stability or performance of the primary cluster.
The following table describes the results of a low-latency synchronization test between the China (Zhangjiakou) region and the US (Silicon Valley) region in a GDN.
|Specification and topology of the test clusters|Sysbench test case|Peak QPS/TPS|Synchronization latency from the secondary cluster in the US (Silicon Valley) region to the primary cluster in the China (Zhangjiakou) region|
|---|---|---|---|
|GDN that covers the China (Zhangjiakou) region and the US (Silicon Valley) region<br>PolarDB for MySQL<br>16 cores, 128 GB|OLTP_INSERT|82655/82655|Less than 1s|
| |OLTP_WRITE_ONLY|157953/26325|Less than 1s|
| |OLTP_READ_WRITE|136758/6837|Less than 1s|
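The latency figures above can be monitored in an application with a common heartbeat pattern: periodically write a timestamp on the primary cluster and read it back on a secondary. The sketch below is a hypothetical illustration; the table and column names (`gdn_heartbeat`, `ts`) are assumptions, not part of PolarDB.

```python
from datetime import datetime, timezone

# Hypothetical heartbeat-based lag check. The application periodically runs
# HEARTBEAT_WRITE_SQL on the primary cluster; each secondary runs
# HEARTBEAT_READ_SQL and compares the replicated timestamp with its own clock.
HEARTBEAT_WRITE_SQL = "REPLACE INTO gdn_heartbeat (id, ts) VALUES (1, UTC_TIMESTAMP(6))"
HEARTBEAT_READ_SQL = "SELECT ts FROM gdn_heartbeat WHERE id = 1"

def replication_lag_seconds(ts_written: datetime, ts_observed: datetime) -> float:
    """Lag = the time on the secondary when the row is observed minus the
    timestamp that was written on the primary. Both values must be UTC."""
    return (ts_observed - ts_written).total_seconds()

# Example with fixed timestamps: a row written at 12:00:00.0 and observed on
# the secondary at 12:00:00.8 implies roughly 0.8 s of replication lag.
written = datetime(2024, 1, 1, 12, 0, 0, 0, tzinfo=timezone.utc)
observed = datetime(2024, 1, 1, 12, 0, 0, 800000, tzinfo=timezone.utc)
print(replication_lag_seconds(written, observed))  # 0.8
```

Because the two clusters' clocks are involved, a production check would also need to account for clock skew between regions.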
Cross-region read/write splitting
- Key features
- Each cluster is in read and write mode.
- In most cases, read requests are forwarded to the secondary cluster in the same region, while write requests are forwarded to the primary cluster.
Note The primary node in a secondary cluster is used to asynchronously replicate data from the primary cluster. By default, read requests are sent to the read-only nodes in the same region to avoid the latency of cross-region physical replication.
- You do not need to modify the code of your applications to achieve read/write splitting.
- How to implement read/write splitting
The cross-region read/write splitting feature of a GDN is implemented based on the cluster endpoints of PolarDB clusters. For more information about how to manage cluster endpoints in a GDN, see Manage endpoints of a GDN.
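Because read/write splitting is performed by the cluster endpoint itself, an application only needs to connect to the endpoint in its own region. The sketch below illustrates this; the region names and endpoint hostnames are placeholders, not real addresses.

```python
# Hypothetical sketch: each application instance connects to the PolarDB
# cluster endpoint in its own region. The endpoint then splits reads and
# writes automatically, so no application code changes are needed.
# The hostnames below are placeholders for illustration only.
CLUSTER_ENDPOINTS = {
    "cn-zhangjiakou": "pc-primary.example.polardb.rds.aliyuncs.com",
    "us-west-1": "pc-secondary-us.example.polardb.rds.aliyuncs.com",
}

def endpoint_for_region(region: str) -> str:
    """Return the cluster endpoint for the application's own region."""
    try:
        return CLUSTER_ENDPOINTS[region]
    except KeyError:
        raise ValueError(f"no GDN cluster endpoint configured for {region!r}")

print(endpoint_for_region("us-west-1"))
```

An application deployed in the US (Silicon Valley) region would connect to its local endpoint and still see writes routed to the primary cluster transparently.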
- Forwarding rules
|Node|Forwarded requests|
|---|---|
|Only the primary node|- All data manipulation language (DML) operations, such as INSERT, UPDATE, DELETE, and SELECT FOR UPDATE.<br>- All data definition language (DDL) operations, such as creating or deleting databases or tables and changing table schemas or permissions.<br>- All requests in transactions.<br>- Queries that use user-defined functions.<br>- Queries that use stored procedures.<br>- EXECUTE statements.<br>- Requests that involve temporary tables.<br>- SELECT last_insert_id() statements.<br>- All requests that query or modify user variables.<br>- SHOW PROCESSLIST statements.<br>- KILL statements in SQL (not KILL commands in Linux).|
|The primary node or read-only nodes|- Non-transactional read requests.<br>Note Read requests are sent to the primary node only if the Offload Reads from Primary Node feature is disabled.<br>- COM_STMT_EXECUTE commands.|
|All nodes|- All requests that modify system variables.<br>- USE statements.<br>- COM_STMT_PREPARE commands.<br>- COM_CHANGE_USER, COM_QUIT, COM_SET_OPTION, and other commands.|
Note The primary node in a secondary cluster is used to asynchronously replicate data from the primary cluster and does not process read or write requests. Therefore, the primary node in this table refers to the primary node in the primary cluster, and read-only nodes refer to the read-only nodes in the secondary cluster.
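The forwarding rules above can be sketched as a simple statement classifier. This is a hypothetical simplification: a real PolarDB proxy inspects protocol commands and session state as well as SQL text, and handles more cases (for example, SELECT last_insert_id() also goes to the primary node).

```python
# Hypothetical, simplified sketch of the forwarding rules above.
# Statement prefixes that must be served by the primary node.
PRIMARY_ONLY_PREFIXES = (
    "INSERT", "UPDATE", "DELETE", "REPLACE",   # DML
    "CREATE", "DROP", "ALTER", "GRANT",        # DDL and permission changes
    "BEGIN", "START TRANSACTION",              # transactions
    "EXECUTE", "SHOW PROCESSLIST", "KILL",
)

def route(statement: str, in_transaction: bool = False) -> str:
    """Return 'primary' or 'read-only' for a single SQL statement."""
    sql = statement.strip().upper()
    if in_transaction:
        return "primary"                       # all requests in transactions
    if sql.startswith("SELECT") and " FOR UPDATE" not in sql:
        return "read-only"                     # non-transactional read requests
    if sql.startswith(PRIMARY_ONLY_PREFIXES):
        return "primary"
    return "primary"                           # default to the safe choice

print(route("SELECT * FROM orders"))           # read-only
print(route("UPDATE orders SET paid = 1"))     # primary
print(route("SELECT * FROM t FOR UPDATE"))     # primary
```

Note that any SELECT issued inside a transaction is still routed to the primary node, which matches the rule that all requests in transactions go to the primary.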