This topic describes the terms that are commonly used in PolarDB.


Region

The geographic area of a data center where a PolarDB cluster is deployed.


Zone

A geographic area within a region. Each zone has an independent power supply and network. The network latency between instances in the same zone is lower than that between zones.


Cluster

PolarDB uses a multi-node cluster architecture. A cluster has one primary node and multiple read-only nodes. A PolarDB cluster can be deployed across zones but not across regions.

Global database network (GDN)

A network that consists of multiple PolarDB clusters distributed across regions around the world. All clusters in the network synchronize data to maintain consistency. For more information, see Create and release a GDN.

Primary cluster

In each GDN, only one cluster has read and write permissions. This cluster is called the primary cluster.

Secondary cluster

In each GDN, all clusters other than the primary cluster are secondary clusters. A secondary cluster synchronizes data from the primary cluster.


Node

A PolarDB cluster consists of multiple physical nodes. The nodes fall into two types: primary nodes and read-only nodes. Both types have the same specifications.

Primary node

A primary node supports both read and write operations. Each PolarDB cluster contains only one primary node.

Read-only node

A read-only node processes only read requests. You can add up to 15 read-only nodes to a PolarDB cluster.

Cluster zone

A zone where the data in the cluster is distributed. The data in the cluster is automatically replicated in two zones for disaster recovery. You can migrate nodes only within these zones.

Primary zone

The zone where the primary node of a PolarDB cluster is deployed.


Failover

During a failover, a read-only node is promoted to the new primary node. For more information, see Automatic failover and manual failover.


Specification

The specification of a node in a PolarDB cluster, such as 8 CPU cores and 64 GB of memory. For more information, see Specifications of compute nodes.


Endpoint

An endpoint is a domain name that is used to access a PolarDB cluster. Each cluster provides multiple endpoints, and each endpoint can connect to one or more nodes. For example, requests received by the primary endpoint are forwarded to the primary node, whereas requests received by a cluster endpoint can be forwarded to the primary node or read-only nodes to enable read/write splitting. An endpoint carries database connection attributes, such as the read/write mode, node list, load balancing policy, and consistency level.


Address

An address is the carrier of an endpoint on a specific network. An endpoint can have an internal-facing address and an Internet-facing address. An address carries network attributes, such as the domain name, IP address, VPC, and vSwitch.

Primary endpoint

The endpoint of the primary node. If a failover occurs, the system automatically points the endpoint to the new primary node.

Cluster endpoint

A cluster endpoint can be used to access all nodes in a cluster. You can enable read-only or read/write mode for a cluster endpoint. In the dialog box for configuring cluster endpoints, you can also configure cluster settings such as auto scaling, read/write splitting, load balancing, and the consistency level. For more information, see Cluster endpoints and primary endpoints.
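The routing behavior of a cluster endpoint in read/write mode can be pictured with a minimal sketch. The class and node names below are hypothetical, and the logic is a simplification: writes go to the primary node, while reads are balanced across read-only nodes in round-robin order.

```python
from itertools import cycle

class ClusterEndpoint:
    """Toy model of a read/write-splitting cluster endpoint (illustration only)."""

    def __init__(self, primary, read_only_nodes):
        self.primary = primary
        self._replicas = cycle(read_only_nodes)  # round-robin load balancing

    def route(self, sql):
        # Reads are balanced across read-only nodes; writes always go to the primary.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

ep = ClusterEndpoint("primary", ["ro-1", "ro-2"])
print(ep.route("SELECT * FROM t"))   # ro-1
print(ep.route("SELECT 1"))          # ro-2
print(ep.route("UPDATE t SET a=1"))  # primary
```

A real PolarDB proxy also weighs node load and replication lag when choosing a read-only node; this sketch shows only the basic split.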

Eventual consistency

By default, eventual consistency is enabled in read-only mode. PolarDB clusters deliver the optimal performance if eventual consistency is enabled. For more information, see Eventual consistency.

Session consistency

Session consistency is also known as causal consistency. It is the default consistency option in read/write mode. Session consistency guarantees the read consistency at the session level to meet the requirements in most scenarios. For more information, see Session consistency.

Global consistency

Global consistency is also known as strong consistency or cross-session consistency, and is the highest consistency level. It guarantees consistency across sessions, but increases the workload on the primary node. We recommend that you do not use global consistency when the replication latency between the primary node and read-only nodes is high. For more information, see Global consistency.
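One common way to implement session consistency, sketched below under simplifying assumptions (all class names are hypothetical), is to track a log sequence number (LSN): each write advances the primary's LSN, and a session's read is served by a replica only if the replica has already applied that session's last write; otherwise the read falls back to the primary.

```python
class Primary:
    def __init__(self):
        self.lsn = 0      # log sequence number advanced by each write
        self.data = {}

    def write(self, key, value):
        self.lsn += 1
        self.data[key] = value
        return self.lsn

class Replica:
    def __init__(self):
        self.applied_lsn = 0
        self.data = {}

    def catch_up(self, primary):
        # Replay all changes up to the primary's current LSN.
        self.data = dict(primary.data)
        self.applied_lsn = primary.lsn

class Session:
    """Session consistency: reads go to the replica only once it has
    applied this session's most recent write."""

    def __init__(self, primary, replica):
        self.primary, self.replica = primary, replica
        self.last_write_lsn = 0

    def write(self, key, value):
        self.last_write_lsn = self.primary.write(key, value)

    def read(self, key):
        if self.replica.applied_lsn >= self.last_write_lsn:
            return self.replica.data.get(key), "replica"
        return self.primary.data.get(key), "primary"

p, r = Primary(), Replica()
s = Session(p, r)
s.write("x", 1)
print(s.read("x"))  # (1, 'primary') -- replica has not caught up yet
r.catch_up(p)
print(s.read("x"))  # (1, 'replica') -- replica is now safe to read from
```

Global consistency extends the same idea across all sessions, which is why it adds coordination load on the primary node.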

Transaction splitting

A configuration item of cluster endpoints. To reduce the load on the primary node, enable transaction splitting. This way, read requests that precede the first write operation in a transaction are forwarded to read-only nodes without compromising data consistency. For more information, see Features.
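The routing rule can be sketched as follows (a simplification with hypothetical names): within a transaction, reads are sent to read-only nodes until the first write occurs, after which every subsequent statement sticks to the primary node.

```python
class Transaction:
    """Toy model of transaction splitting: reads that precede the first
    write may be served by read-only nodes; once the transaction writes,
    all later statements are routed to the primary node."""

    def __init__(self):
        self.has_written = False

    def route(self, sql):
        is_read = sql.lstrip().upper().startswith("SELECT")
        if is_read and not self.has_written:
            return "read-only"
        if not is_read:
            self.has_written = True  # transaction is now pinned to the primary
        return "primary"

txn = Transaction()
print(txn.route("SELECT * FROM t"))   # read-only
print(txn.route("UPDATE t SET a=1"))  # primary
print(txn.route("SELECT * FROM t"))   # primary (after the write)
```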

Offload reads from primary node

A configuration item of cluster endpoints. After this feature is enabled, SQL query statements are sent to read-only nodes without compromising data consistency. This reduces the load on the primary node to ensure the stability of the primary node. For more information, see Features.

Private address

You can use PolarDB together with Alibaba Cloud DNS PrivateZone to reserve the domain name of your original database. This ensures that each internal-facing address of the primary endpoint and cluster endpoint of a PolarDB cluster can be associated with a private domain name. This private domain name takes effect only in the specified virtual private cloud (VPC) within the current region. For more information, see Private domain names.

Snapshot backup

PolarDB allows you to back up data only by creating snapshots. For more information, see Data backup.

Level-1 backup

A level-1 backup is a backup file that is stored locally in the cluster. Level-1 backups are stored in the distributed storage system that is associated with the cluster, which allows you to quickly back up and restore data. However, level-1 backups have high costs. For more information, see Data backup.

Level-2 backup

Level-2 backups are backup files that are stored in on-premises storage media. All data in level-2 backups is archived from level-1 backups and can be permanently stored. Level-2 backups have low costs. However, restoring data from level-2 backups takes a long time. For more information, see Data backup.

Log backup

A log backup stores the redo logs of a database for point-in-time recovery (PITR). Log backups help prevent data loss caused by accidental operations. Log backups must be retained for at least seven days. They are stored in on-premises storage and are cost-effective. For more information, see Data backup.
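The idea behind PITR can be illustrated with a minimal sketch (the function and data layout below are hypothetical): start from a snapshot taken at or before the target time, then replay logged changes that occurred after the snapshot and up to the target time.

```python
def restore_to_point_in_time(snapshot, redo_log, target_time):
    """Toy point-in-time recovery: apply the base snapshot, then replay
    log records with snapshot_time < timestamp <= target_time."""
    state = dict(snapshot["data"])
    for ts, key, value in redo_log:
        if snapshot["time"] < ts <= target_time:
            state[key] = value
    return state

snapshot = {"time": 100, "data": {"balance": 50}}
redo_log = [(110, "balance", 60), (120, "balance", 70), (130, "balance", 0)]

# Recover to just before the erroneous update at t=130:
print(restore_to_point_in_time(snapshot, redo_log, 125))  # {'balance': 70}
```

This is why log retention matters: recovery is only possible to points in time covered by both a snapshot and the retained logs.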

Storage plan

PolarDB provides storage plans to reduce the storage costs of clusters. The storage capacity of a PolarDB cluster is automatically scaled based on the amount of the data in the cluster. You do not need to manually specify the storage capacity. You are charged only for the used storage. If you need to store a large amount of data, we recommend that you purchase PolarDB storage plans to reduce the cost. For more information, see Purchase a storage plan.

Storage capacity

Storage capacity is used to store cluster data files, index files, log files, and temporary files. Log files include online logs and archived logs.
Note: After you purchase a PolarDB cluster, the system automatically creates the files that are required for regular database operations. These files include the preceding file types and consume some storage.

Cluster Edition

PolarDB for MySQL Cluster Edition is the recommended edition. It is based on an architecture that decouples compute from storage, and supports from 2 to a maximum of 16 compute nodes. For more information, see Cluster Edition.

Single Node Edition

As the entry-level PolarDB for MySQL edition, Single Node Edition uses burstable performance specifications and shares resources in a computing resource pool to improve resource utilization. The Single Node Edition architecture also reduces costs because no proxy is required. For more information, see Single Node Edition.

X-Engine Edition

PolarDB for MySQL X-Engine Edition uses X-Engine as the default storage engine and provides a high compression ratio. PolarDB Archive Database Edition provides a large storage capacity and allows you to store archived data at low costs. For more information, see Archive Database.


Smart-SSD

Smart-SSD is a compression technique implemented at the physical SSD level. Smart-SSD disks are embedded with a dedicated FPGA or ASIC compression chip that compresses and decompresses data in real time during reads and writes, which greatly reduces data storage costs. Smart-SSD disks are optimized for PolarDB scenarios and deliver almost the same performance as uncompressed standard disks. They expose the access interfaces of standard disks, so upper-layer applications can use them without complex adaptation. This technique is used in PolarDB PSL4 disks.

Hot standby storage cluster

A hot standby storage cluster is deployed in the secondary zone, or in a different data center in the same zone, of the region where a PolarDB cluster is deployed. The hot standby storage cluster uses independent storage to serve as a hot standby for the PolarDB cluster. When the PolarDB cluster or the primary zone becomes unavailable, the hot standby storage cluster quickly takes over to continue read and write operations for the PolarDB cluster. If you disable the hot standby storage cluster feature, the SLA of the PolarDB cluster is degraded.