This topic describes the terms that are commonly used in PolarDB.


Region

The geographic area of a data center where a PolarDB cluster is deployed.


Zone

A geographical area within a region. Each zone has an independent power supply and network. The network latency between instances within the same zone is lower than that between instances in different zones.


Cluster

PolarDB uses a multi-node cluster architecture. A cluster has one primary node and multiple read-only nodes. A PolarDB cluster can be deployed across zones but not across regions.

Global database network (GDN)

A network that consists of multiple PolarDB clusters distributed across regions around the world. Data is synchronized across all clusters in the network to maintain consistency. For more information, see Create and release a GDN.

Primary cluster

In each GDN, only one cluster is granted read and write permissions. This cluster is called the primary cluster.

Secondary cluster

In each GDN, each cluster other than the primary cluster synchronizes data from the primary cluster. Such a cluster is called a secondary cluster.


Node

A PolarDB cluster consists of multiple physical nodes. The nodes fall into two types: primary nodes and read-only nodes. Both types of nodes have the same specifications.

Primary node

A primary node supports both read and write operations. Each PolarDB cluster contains only one primary node.

Read-only node

A read-only node supports only read operations. You can add up to 15 read-only nodes to a PolarDB cluster.

Cluster zone

A zone to which the data of a cluster is distributed. The data of the cluster is automatically replicated across two zones for disaster recovery. You can migrate nodes only within these zones.

Primary zone

The zone where the primary node of a PolarDB cluster is deployed.


Failover

During a failover, a read-only node is promoted to the primary node. For more information, see Automatic failover and manual failover.
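During the failover window, in-flight connections may be dropped before the endpoint points to the new primary node. A minimal client-side retry sketch (the `run_with_retry` helper and its parameters are illustrative, not part of PolarDB or any driver):

```python
import time

def run_with_retry(operation, attempts=3, delay=1.0):
    """Run a database operation, retrying if the connection drops,
    for example during a failover. `operation` is any callable; the
    retry policy here is a simple fixed delay between attempts."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError as exc:  # substitute your driver's error type
            last_error = exc
            time.sleep(delay)  # wait for the new primary node to take over
    raise last_error
```

An application would wrap each write in a helper like this so that a transient failover surfaces as a short delay rather than an error.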


Node specification

The specification of a node in a PolarDB cluster, such as 8 CPU cores and 64 GB of memory. For more information, see Specifications of compute nodes.


Endpoint

An endpoint is a domain name that is used to access a PolarDB cluster. Each cluster provides multiple endpoints, and each endpoint can connect to one or more nodes. For example, requests received by the primary endpoint are forwarded to the primary node, whereas a cluster endpoint can connect to both the primary node and read-only nodes to enable read/write splitting. An endpoint carries the attributes of database connections, such as the read/write mode, node list, load balancing policy, and consistency level.
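The division of labor between endpoints can be sketched as a toy router. The hostnames below are hypothetical placeholders, and a real cluster endpoint in read/write mode performs this splitting for you at the proxy; the sketch only illustrates the routing idea:

```python
# Hypothetical endpoint hostnames; a real cluster provides its own.
PRIMARY_ENDPOINT = "pc-example.mysql.polardb.example.com"
CLUSTER_ENDPOINT = "pc-example.cluster.polardb.example.com"

def endpoint_for(statement: str) -> str:
    """Route writes to the primary endpoint and reads to the cluster
    endpoint, which load-balances across the primary node and
    read-only nodes."""
    is_read = statement.lstrip().upper().startswith(("SELECT", "SHOW"))
    return CLUSTER_ENDPOINT if is_read else PRIMARY_ENDPOINT
```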


Address

An address is the carrier of an endpoint on a specific network. An endpoint may have an internal-facing address and an Internet-facing address. An address carries network attributes, such as the domain name, IP address, VPC, and vSwitch.

Primary endpoint

The endpoint of the primary node. If a failover occurs, the system automatically points the endpoint to the new primary node.

Cluster endpoint

A cluster endpoint can be used to access all nodes in a cluster. You can enable read-only or read/write mode for a cluster endpoint. In the dialog box for configuring cluster endpoints, you can also configure cluster settings such as auto scaling, read/write splitting, load balancing, and the consistency level. For more information, see Cluster endpoints and primary endpoints.

Eventual consistency

By default, eventual consistency is enabled in read-only mode. PolarDB clusters deliver the optimal performance if eventual consistency is enabled. For more information, see Eventual consistency.

Session consistency

Session consistency is also known as causal consistency. It is the default consistency option in read/write mode. Session consistency guarantees the read consistency at the session level to meet the requirements in most scenarios. For more information, see Session consistency.
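The mechanism behind session-level read consistency can be illustrated with a toy model in which each write advances a log sequence number (LSN), and a read is served by a replica only after the replica has applied the session's last write. All names here are illustrative; PolarDB's internals differ:

```python
class ToyCluster:
    """Toy model of session consistency: route a session's read to a
    replica only if the replica has caught up with that session's
    writes; otherwise fall back to the primary node."""
    def __init__(self):
        self.primary_lsn = 0   # log position of the latest write
        self.replica_lsn = 0   # log position the replica has applied

    def write(self, session):
        self.primary_lsn += 1
        session["last_write_lsn"] = self.primary_lsn

    def replicate(self):
        self.replica_lsn = self.primary_lsn  # replica catches up

    def read(self, session):
        if self.replica_lsn >= session.get("last_write_lsn", 0):
            return "replica"
        return "primary"
```

This is why session consistency guarantees read-your-writes within one session while still offloading most reads to read-only nodes.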

Global consistency

Global consistency is also known as strong consistency or cross-session consistency and is the highest consistency level. It guarantees consistency across sessions but increases the load on the primary node. If the replication latency between the primary node and read-only nodes is high, we recommend that you do not use global consistency. For more information, see Global consistency.

Transaction splitting

A configuration item of cluster endpoints. If transaction splitting is enabled, read requests in a transaction can be forwarded to read-only nodes without compromising data consistency. This reduces the load on the primary node. For more information, see Features.
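The splitting rule can be sketched with a toy router: reads issued before the first write in a transaction can still go to read-only nodes, while everything after the first write stays on the primary node. This is a simplified model; the real proxy also enforces the configured consistency level:

```python
class ToyTransactionRouter:
    """Toy model of transaction splitting: reads that precede the
    first write in a transaction are offloaded to read-only nodes;
    once the transaction writes, later statements go to the primary."""
    def __init__(self):
        self.has_written = False

    def route(self, statement: str) -> str:
        is_read = statement.lstrip().upper().startswith("SELECT")
        if is_read and not self.has_written:
            return "read-only node"
        if not is_read:
            self.has_written = True
        return "primary node"
```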

Offload reads from primary node

A configuration item of cluster endpoints. After this feature is enabled, SQL query statements are sent to read-only nodes without compromising data consistency. This reduces the load on the primary node to ensure the stability of the primary node. For more information, see Features.

Private address

You can use PolarDB together with Alibaba Cloud DNS PrivateZone to reserve the domain name of your original database. This ensures that each internal-facing address of the primary endpoint and cluster endpoint of a PolarDB cluster can be associated with a private domain name. This private domain name takes effect only in the specified virtual private cloud (VPC) within the current region. For more information, see Private domain names.

Snapshot backup

PolarDB allows you to back up data only by creating snapshots. For more information, see Data backup.

Level-1 backup

A level-1 backup is a backup file that is stored locally in the distributed storage system of the cluster. Level-1 backups allow you to back up and restore data quickly, but at a relatively high storage cost. For more information, see Data backup.

Level-2 backup

Level-2 backups are backup files that are stored in on-premises storage media. All data in level-2 backups is archived from level-1 backups and can be permanently stored. Level-2 backups have low costs. However, restoring data from a level-2 backup takes a long time. For more information, see Data backup.

Log backup

A log backup stores the redo logs of a database for point-in-time recovery (PITR). Log backups help prevent data loss caused by accidental operations. Log backups must be retained for at least seven days. They are stored in on-premises storage and are cost-effective. For more information, see Data backup.

Storage plan

PolarDB provides storage plans to reduce the storage costs of clusters. The storage capacity of a PolarDB cluster is automatically scaled based on the amount of data in the cluster, and you do not need to manually specify the capacity. You are charged only for the storage that you use. If you need to store a large amount of data, we recommend that you purchase a storage plan to reduce costs. For more information, see Purchase a storage plan.


Cluster Edition

Cluster Edition is the recommended PolarDB for MySQL edition because it is based on an architecture in which compute is decoupled from storage. The number of compute nodes can be scaled from 2 to a maximum of 16. For more information, see Cluster Edition.

Single Node

As the entry-level PolarDB for MySQL edition, Single Node uses burstable performance specifications and shares resources in a computing resource pool to improve resource utilization. The Single Node architecture also reduces costs because no proxy is required. For more information, see Single Node Edition.

Archive Database

PolarDB for MySQL Archive Database uses X-Engine as the default storage engine and provides a high compression ratio. It offers a large storage capacity and allows you to store archived data at a low cost. For more information, see Archive Database.


Smart-SSD

Smart-SSDs use the AliFlash smart SSD technology developed by Alibaba Cloud to compress and decompress data at the physical SSD level. This minimizes the storage cost of data while maintaining high disk performance.

The compression engine is integrated into the smart-SSD. Dedicated computing power provided by an FPGA or ASIC compresses and decompresses data in real time during reads and writes. After data is compressed, less data is written to disks and storage space is saved. The additional free space also reduces the write amplification that is inherent in SSDs and therefore improves their performance.

Smart-SSDs are compatible with the access interfaces of standard disks and are transparent to upper-layer applications, which avoids complex application adaptation.