
PolarDB:Global Database Network (GDN)

Last Updated: Mar 25, 2026

A Global Database Network (GDN) is a network of multiple PolarDB clusters distributed across different regions. In a GDN, data is synchronized across all clusters. Each cluster serves read requests, while write requests are forwarded to the primary cluster for processing.

Overview


A Global Database Network (GDN) consists of one primary cluster and multiple secondary clusters. The primary cluster handles write requests, while the secondary clusters are distributed across different regions to handle local read requests. Data is synchronized across all clusters over low-latency links, forming a single logical database.


In the Basic Edition architecture, secondary clusters forward write requests to the primary cluster through cross-region routing. When clusters are separated by large physical distances, write latency on the secondary clusters increases significantly. The GDN Multi-write Edition provides a table-level multi-write solution, allowing each cluster to write locally to tables where it has write permissions. This significantly reduces cross-region write latency. For more information, see GDN Multi-write Edition User Guide.

Note

The PolarDB GDN Multi-write Edition is currently in phased release. To use this feature, search for and join our DingTalk group (group number: 30245017864).


Data synchronization mechanism

GDN uses an asynchronous physical replication mechanism for cross-region data synchronization. Technologies such as parallel physical log replay keep the data replication latency between the primary and secondary clusters under 2 seconds. This synchronization method does not affect the performance or stability of the primary cluster, ensuring eventual data consistency across the globe. Each cluster in a GDN provides read and write services and supports geo-disaster recovery.

Read/write splitting and request routing

The database proxy configuration in each cluster determines how read and write requests are routed. No application code changes are required. Simply connect to the endpoint of the appropriate cluster, and requests are automatically routed based on the following logic:

  • Write requests (such as INSERT, UPDATE, and DELETE), broadcast statements (such as SET), and all requests within transactions are automatically forwarded to the primary node of the primary cluster for processing.

  • Read requests are routed by default to the read-only nodes of the local secondary cluster for local access. If session consistency is enabled, some read requests may also be routed to the primary node of the primary cluster to ensure data consistency.

GDN also provides a global endpoint, which enables local access and ensures that the domain name remains unchanged after a primary cluster failover.

Detailed routing logic

The following rules describe the destination node for each category of forwarded request.

Forwarded only to the primary node of the primary cluster:

  • All DML operations, such as INSERT, UPDATE, DELETE, and SELECT FOR UPDATE.

  • All DDL operations, such as creating, deleting, or altering tables or databases, and managing permissions.

  • All requests within transactions.

  • User-defined functions.

  • Stored procedures.

  • EXECUTE statements.

  • Multi Statements.

  • Requests that use temporary tables.

  • SELECT last_insert_id().

  • All queries and modifications to user variables.

  • SHOW PROCESSLIST.

  • KILL (the KILL statement, not the KILL command).

Forwarded to read-only nodes or the primary node:

Note

Requests are sent to the primary node only if Primary Node Accepts Read Requests is set to Yes in the database proxy configuration.

  • Read requests outside of transactions.

  • COM_STMT_EXECUTE commands.

Always forwarded to all nodes:

  • All modifications to system variables.

  • USE commands.

  • COM_STMT_PREPARE commands.

  • Commands such as COM_CHANGE_USER, COM_QUIT, and COM_SET_OPTION.

Note

The primary node in a secondary cluster primarily replicates data from the primary cluster and does not process any read or write requests. Therefore, in this table, 'primary node' refers to the primary node of the primary cluster, and 'read-only nodes' refer to the read-only nodes of secondary clusters.
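As a rough sketch, the routing rules above can be modeled as a statement classifier. This is a simplified, hypothetical model for illustration only; the actual PolarDB proxy parses the complete SQL statement and session state rather than matching keywords:

```python
# Simplified, illustrative model of the routing rules in the table above.
# Not the real proxy logic: keyword matching stands in for full SQL parsing.

def route(statement: str, in_transaction: bool = False) -> str:
    """Return 'primary', 'read-only', or 'all' for a single statement."""
    sql = statement.strip().rstrip(";").upper()
    if in_transaction:                      # all requests within transactions
        return "primary"
    first = sql.split()[0] if sql else ""
    if sql.startswith("SET @"):             # user-variable changes -> primary
        return "primary"
    if first in {"SET", "USE"}:             # system variables, USE -> all nodes
        return "all"
    write_keywords = {"INSERT", "UPDATE", "DELETE", "CREATE", "DROP", "ALTER",
                      "GRANT", "REVOKE", "CALL", "EXECUTE", "KILL"}
    if first in write_keywords or "FOR UPDATE" in sql or "LAST_INSERT_ID" in sql:
        return "primary"
    if first == "SHOW" and "PROCESSLIST" in sql:
        return "primary"                    # SHOW PROCESSLIST -> primary
    if first in {"SELECT", "SHOW"}:         # plain reads outside transactions
        return "read-only"
    return "primary"                        # default to the safe path

print(route("SELECT * FROM orders"))             # read-only
print(route("SELECT * FROM orders FOR UPDATE"))  # primary
```

The safe default matters: any statement the classifier cannot prove read-only goes to the primary node, which mirrors how the proxy avoids stale or lost writes.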

Use cases

Active-active geo-redundancy (multi-region deployment)

Deploy your services across multiple regions. GDN features, such as low-latency cross-region synchronization, cross-region read/write splitting, and local reads, let applications in each region read from their local cluster while cross-region replication latency stays under 2 seconds.

  • Typical industries: Gaming, cross-border e-commerce, local services (such as food delivery), and new retail (such as physical stores).

  • Business architecture:

    • For optimal performance, applications in each region can directly read from and write to their local database. Write requests are forwarded to the primary cluster for processing.

    • In a GDN, each cluster provides an independent endpoint. You can connect to the endpoint of the nearest cluster based on your application's region.

    • The specifications of the secondary clusters in China (Beijing) and China (Shenzhen) must be greater than or equal to those of the primary cluster in China (Hangzhou). For best results, we recommend using the same specifications.
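Because each cluster exposes its own endpoint, selecting the nearest one can be as simple as a static map in application configuration. A minimal sketch; the region keys and endpoint addresses are placeholders, not real domains:

```python
# Hypothetical per-region endpoint map; all addresses are placeholders.
ENDPOINTS = {
    "cn-hangzhou": "primary-hz.example.polardb.rds.aliyuncs.com",
    "cn-beijing":  "secondary-bj.example.polardb.rds.aliyuncs.com",
    "cn-shenzhen": "secondary-sz.example.polardb.rds.aliyuncs.com",
}

def endpoint_for(app_region: str, default_region: str = "cn-hangzhou") -> str:
    """Pick the local cluster endpoint; fall back to the primary region."""
    return ENDPOINTS.get(app_region, ENDPOINTS[default_region])

print(endpoint_for("cn-beijing"))   # the local Beijing secondary cluster
```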


Geo-disaster recovery

Use GDN to achieve cross-region high availability, improving data security and system availability. When the data center that hosts the primary cluster fails, you can quickly restore your services by manually failing over to a secondary cluster. GDN supports various architectures, such as two-region three-data-center, two-region four-data-center, and three-region six-data-center.

  • Typical industries: Banking, securities, and insurance.

  • Business architecture (example: two-region three-data-center architecture):

    • The primary region is China (Beijing), which uses a dual-availability-zone deployment covering AZ1 and AZ2.

    • The disaster recovery region is China (Shanghai), which uses a single-availability-zone deployment.

    • By default, the application reads from and writes to the database in AZ1 of the Beijing region. If AZ1 fails, the system fails over to AZ2 in Beijing. If both AZ1 and AZ2 fail, the system fails over to AZ3 in Shanghai.

Note

A primary/secondary switchover in a GDN is typically completed in less than 5 minutes, but can take up to 10 minutes. During the switchover, a transient disconnection of up to 160 seconds may occur. We recommend that you perform the switchover during off-peak hours and make sure that your application has a reconnection mechanism.
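Because a switchover can cause a transient disconnection of up to 160 seconds, the reconnection mechanism mentioned above matters. One possible shape is retry with capped exponential backoff; the connect callable, delays, and wait budget below are illustrative:

```python
import time

def connect_with_retry(connect, max_wait_seconds=180.0, base_delay=1.0):
    """Retry `connect` with capped exponential backoff until it succeeds
    or the total wait budget is spent. `connect` is any callable that
    raises ConnectionError while the switchover is in progress."""
    waited, delay = 0.0, base_delay
    while True:
        try:
            return connect()
        except ConnectionError:
            if waited >= max_wait_seconds:
                raise                      # budget exhausted; surface the error
            time.sleep(delay)
            waited += delay
            delay = min(delay * 2, 30.0)   # cap the backoff interval

# Demo with a fake endpoint that recovers after two failed attempts.
attempts = {"n": 0}
def fake_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("switchover in progress")
    return "connection-ok"

conn = connect_with_retry(fake_connect, base_delay=0.01)
print(conn)   # connection-ok
```

Capping the backoff keeps the client probing every 30 seconds at most, so it reconnects promptly once the switchover completes.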

Benefits

  • Cross-region deployment: Seamlessly expand from a single-region to a multi-region architecture without application code changes.

  • Cross-region read/write splitting and local reads: In a GDN, read requests are routed to the local secondary cluster, while write requests are forwarded to the primary cluster.

  • Flexible configuration: The primary and secondary clusters have independent configurations, including cluster specifications, whitelists, and parameter values.

  • Low-latency cross-region synchronization: GDN uses asynchronous physical replication (based on Redo Log) and parallel replay technologies to reduce cross-region replication latency. Data is synchronized across all clusters with a replication latency of less than 2 seconds, which significantly reduces read latency for applications in non-central regions.

Applicability

Cluster requirements

  • Edition: Enterprise Edition, and the series must be Cluster Edition.

  • The database engine version must be one of the following:

    • MySQL 8.0.2.

    • MySQL 8.0.1 with a minor engine version of 8.0.1.1.17 or later.

    • MySQL 5.7 with a minor engine version of 5.7.1.0.21 or later.

    • MySQL 5.6 with a minor engine version of 5.6.1.0.32 or later.

  • Nodes: The cluster must include at least one read-only node.

Supported regions

All regions in the Chinese mainland, China (Hong Kong), Japan (Tokyo), South Korea (Seoul), Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Philippines (Manila), Thailand (Bangkok), Germany (Frankfurt), US (Silicon Valley), US (Virginia), and UK (London).

Note

You can deploy secondary clusters across borders, but you must submit an application. For more information, see Add a secondary cluster.

Feature limitations

  • Clusters in a Global Database Network (GDN) support the In-Memory Column Index (IMCI) feature. However, you can add a read-only columnar node only after you enable the loose_polar_enable_imci_with_standby cluster parameter and your cluster version meets one of the following requirements:

    • MySQL 8.0.1 with minor engine version 8.0.1.1.48 or later.

    • MySQL 8.0.2 with minor engine version 8.0.2.2.27 or later.

  • Clusters in a GDN can be serverless clusters or clusters with defined specifications that have the serverless feature enabled. However, if the minor engine version of the primary cluster is earlier than the following versions, all clusters in the GDN must have at least one read-only node:

    • MySQL 8.0.1 with a minor engine version earlier than 8.0.1.1.42.

    • MySQL 8.0.2 with a minor engine version earlier than 8.0.2.2.23.

  • Clusters in a GDN do not support the database and table restoration feature.

Other limitations

  • A GDN consists of one primary cluster and up to four secondary clusters.

    Note

    To add more secondary clusters, go to Quota Center, find the quota item by using the quota ID polardb_mysql_gdn_region, and click Apply in the Actions column.

  • A cluster can belong to only one GDN.

  • You can only add new clusters as secondary clusters; you cannot add an existing cluster.

  • The primary and secondary clusters must use the same database engine version: MySQL 8.0, MySQL 5.7, or MySQL 5.6.

  • For secondary clusters in a GDN that are not serverless clusters, each compute node must have at least 4 CPU cores.

  • By default, each cluster in a GDN contains 2 nodes. You can add up to 16 nodes.

Pricing

When you use a GDN, you are charged for the clusters and for inter-region data transfers. The detailed billing rules are as follows:

Important

Inter-region data transfer fees will be charged starting from 00:00:00 on April 1, 2026 (UTC+8). Before this time, this service is free of charge. For more information, see [Announcement] Adjustment of Network Fees for the Global Database Network (GDN) Feature.

  • Free scenarios:

    Your primary and secondary clusters are both deployed in regions within the Chinese mainland, or both are deployed in the China (Hong Kong) region or other overseas regions. Examples:

    • Both the primary and secondary clusters are in the Chinese mainland. For example, the primary cluster is in the China (Chengdu) region, and the secondary cluster is in the China (Hangzhou) or China (Shenzhen) region.

    • Both the primary and secondary clusters are in the China (Hong Kong) region or other overseas regions. For example, the primary cluster is in the Singapore region, and the secondary cluster is in the Philippines (Manila) region.

  • Billable scenarios:

    One of your clusters (primary or secondary) is deployed in a region in the Chinese mainland, and the other is deployed in the China (Hong Kong) region or another overseas region. Examples:

    • The primary cluster is in the Chinese mainland, and the secondary cluster is outside the Chinese mainland. For example, the primary cluster is in the China (Chengdu) region, and the secondary cluster is in the China (Hong Kong) or Singapore region.

    • The primary cluster is outside the Chinese mainland, and the secondary cluster is in the Chinese mainland. For example, the primary cluster is in the Singapore region, and the secondary cluster is in the China (Hangzhou) or China (Shenzhen) region.

  • Billing rules: USD 0.80 per GB, billed hourly. The fee is calculated based on the amount of Redo Log data that is physically replicated from the primary cluster to a cross-region secondary cluster within one hour. You can estimate this traffic fee by querying the physical position converted from the log sequence number (LSN).

    Billing example

    Example

    At 09:00, you query the physical write position of the log and find it is ib_logfile1/648143676. At 10:00, the position is updated to ib_logfile3/648142342. This indicates that the amount of data written in this hour is the difference between the two positions.

    1. Amount written to the start file (ib_logfile1):
      Subtract the start offset from the total file size. Each log file is 1 GB (1,073,741,824 bytes). The amount written is 1073741824 - 648143676 = 425598148 bytes.

    2. Amount written to the intermediate file (ib_logfile2):
      After ib_logfile1 is full, the system completely writes ib_logfile2. This amount is 1,073,741,824 bytes (1 GB).

    3. Amount written to the end file (ib_logfile3):
      This is the offset at the end, which is 648,142,342 bytes.

    Therefore, the total amount written = 425598148 + 1073741824 + 648142342 = 2147482314 bytes, which is 2147482314 / 1024 / 1024 / 1024 = 1.999998 GB (rounded down to six decimal places). The cross-region data transfer fee for this hour is approximately 1.999998 GB * USD 0.80/GB = USD 1.5999984.

    Query the log write progress and physical file offset

    -- Query the current write progress of the log system.
    SHOW STATUS LIKE 'Innodb_log_write_lsn'; 
    +----------------------+------------+
    | Variable_name        | Value      |
    +----------------------+------------+
    | Innodb_log_write_lsn | 1721889596 |
    +----------------------+------------+
    
    -- Query the physical file offset in bytes.
    SELECT lsn_to_pos(1721889596); 
    +------------------------+
    | lsn_to_pos(1721889596) |
    +------------------------+
    | ib_logfile1/648143676  |
    +------------------------+
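The worked example above can be reproduced programmatically. The following sketch parses two `lsn_to_pos`-style positions (`ib_logfileN/offset`), computes the bytes written between them, and applies the USD 0.80/GB rate; the helper names are illustrative, not part of any API:

```python
import math

LOG_FILE_SIZE = 1 << 30                   # each redo log file is 1 GB

def parse_pos(pos: str):
    """Split 'ib_logfileN/offset' into (file_index, byte_offset)."""
    name, offset = pos.split("/")
    return int(name.removeprefix("ib_logfile")), int(offset)

def bytes_written(start: str, end: str) -> int:
    """Redo log bytes written between two physical positions."""
    start_file, start_off = parse_pos(start)
    end_file, end_off = parse_pos(end)
    if start_file == end_file:
        return end_off - start_off
    # remainder of the start file + full intermediate files + end offset
    return ((LOG_FILE_SIZE - start_off)
            + (end_file - start_file - 1) * LOG_FILE_SIZE
            + end_off)

total = bytes_written("ib_logfile1/648143676", "ib_logfile3/648142342")
gb = math.floor(total / LOG_FILE_SIZE * 1_000_000) / 1_000_000  # round down to 6 decimals
fee = gb * 0.80                           # USD 0.80 per GB

print(total)            # 2147482314
print(gb)               # 1.999998
print(round(fee, 7))    # 1.5999984
```

This matches the three-step calculation above: remainder of `ib_logfile1`, all of `ib_logfile2`, and the final offset into `ib_logfile3`.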
Note

If you use the global domain name feature, you will incur additional fees for internal DNS resolution and cross-region data transfer. For more information, see Global domain name pricing.
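The free vs. billable distinction above boils down to whether the primary and secondary clusters sit on the same side of the Chinese mainland boundary. A minimal sketch; the mainland region list is illustrative, not exhaustive:

```python
# Illustrative sketch of the billing boundary described above.
# The mainland region set is a small sample, not the full list.
MAINLAND = {"China (Chengdu)", "China (Hangzhou)", "China (Shenzhen)",
            "China (Beijing)", "China (Shanghai)"}

def is_billable(primary_region: str, secondary_region: str) -> bool:
    """Cross-region transfer is billed only when exactly one cluster is in
    the Chinese mainland (Hong Kong counts as outside the mainland here)."""
    return (primary_region in MAINLAND) != (secondary_region in MAINLAND)

print(is_billable("China (Chengdu)", "China (Hangzhou)"))   # False: both mainland, free
print(is_billable("Singapore", "Philippines (Manila)"))     # False: both overseas, free
print(is_billable("China (Chengdu)", "China (Hong Kong)"))  # True: cross-border, billed
```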

Get started

  1. Create and manage a Global Database Network: Select a cluster that meets the requirements as the primary cluster of the GDN.

  2. Add a secondary cluster: Go to the PolarDB buy page to add a secondary cluster to the GDN that you created.

  3. Connect to a Global Database Network: In a GDN, each cluster (primary and secondary) provides an independent cluster endpoint, and you can connect to the endpoint of the nearest cluster based on your application's region. GDN also provides a global endpoint, which enables local access and keeps the domain name unchanged after a primary cluster failover.