In a Global Database Network (GDN), each cluster — both primary and secondary — has its own independent cluster endpoint. Connect your application to the nearest cluster endpoint based on geography. The GDN automatically routes writes to the primary cluster and serves reads locally, with no changes required to your application code.
How routing works
When your application connects to any cluster endpoint in a GDN, read and write requests are routed to the primary and secondary clusters according to the PolarProxy configuration of each cluster. PolarProxy handles request routing transparently:
Write requests — INSERT, UPDATE, DELETE, DDL statements, and all requests within a transaction are forwarded to the primary node of the primary cluster.
Read requests — Routed by default to read-only nodes on the local secondary cluster for low-latency access. If session consistency is enabled, some reads may be forwarded to the primary cluster to ensure data consistency.
Detailed routing rules
| Target node | Request types |
|---|---|
| Primary node of the primary cluster only | DML (INSERT, UPDATE, DELETE); DDL (creating or deleting tables or databases, altering table schemas); SHOW; BEGIN, COMMIT; LISTEN, UNLISTEN, NOTIFY; ANALYZE; two-phase commit commands; requests within a transaction (varies by transaction splitting configuration); function definitions and calls (varies by user-defined function routing rule configuration); requests using temporary tables; multi-statement requests; PREPARE statements containing write requests |
| Read-only nodes or the primary node | Read requests outside a transaction; EXPLAIN; PREPARE statements containing read requests |
| All nodes | USE; DISCARD and DEALLOCATE |
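The routing table above can be sketched as a simple classifier. The snippet below is illustrative only: PolarProxy fully parses SQL, while this sketch merely matches the leading keyword, and the helper name `route_statement` is an assumption for this example, not part of any PolarDB API.

```python
# Illustrative only: a rough approximation of PolarProxy's routing rules.
# The real proxy parses statements fully; this sketch matches the first keyword.
PRIMARY_ONLY = ("INSERT", "UPDATE", "DELETE", "CREATE", "DROP", "ALTER",
                "SHOW", "BEGIN", "COMMIT", "LISTEN", "UNLISTEN", "NOTIFY",
                "ANALYZE")
ALL_NODES = ("USE", "DISCARD", "DEALLOCATE")

def route_statement(sql: str, in_transaction: bool = False) -> str:
    """Return the routing target for a statement, per the rules table (simplified)."""
    keyword = sql.lstrip().split(None, 1)[0].upper()
    # Requests inside a transaction and write/DDL statements go to the primary cluster.
    if in_transaction or keyword in PRIMARY_ONLY:
        return "primary cluster (primary node)"
    if keyword in ALL_NODES:
        return "all nodes"
    # SELECT, EXPLAIN, PREPARE statements containing reads, and similar.
    return "read-only nodes or primary node"

print(route_statement("UPDATE t SET a = 1"))  # primary cluster (primary node)
print(route_statement("SELECT * FROM t"))     # read-only nodes or primary node
```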
Endpoint requirements
Only certain endpoint types support GDN read/write splitting:
Cluster endpoints and custom endpoints with Read/Write Mode set to Read/Write (Automatic Read/Write Splitting) support GDN read/write splitting.
The Primary Endpoint and custom endpoints with Read/Write Mode set to Read-only do not support GDN read/write splitting.
On secondary clusters, set Primary Node Accepts Read Requests to No and Consistency Level to Eventual Consistency (Weak) to minimize the impact of replication delay on your application.
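With Eventual Consistency on secondary clusters, a recently committed write may not yet be visible to local reads. One common application-level workaround, sketched below, is to direct only those reads that must observe the latest data to the primary cluster's endpoint and serve all other reads from the nearest endpoint. The function `choose_endpoint` and both endpoint strings are hypothetical placeholders, not values from your deployment.

```python
# Hypothetical endpoints for illustration; substitute your real ones.
LOCAL_ENDPOINT = "pc-local.rwlb.rds.aliyuncs.com"      # nearest (secondary) cluster
PRIMARY_ENDPOINT = "pc-primary.rwlb.rds.aliyuncs.com"  # primary cluster

def choose_endpoint(needs_latest_data: bool) -> str:
    """Route reads that must see a just-committed write to the primary
    cluster; serve everything else from the nearest cluster endpoint."""
    return PRIMARY_ENDPOINT if needs_latest_data else LOCAL_ENDPOINT

print(choose_endpoint(False))  # pc-local.rwlb.rds.aliyuncs.com
print(choose_endpoint(True))   # pc-primary.rwlb.rds.aliyuncs.com
```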
View a cluster endpoint
Log on to the PolarDB console. In the left navigation pane, click Global Database Network (GDN).
On the Global Database Network (GDN) page, click the Global Database Network ID of your GDN to go to the details page.
In the Clusters section, find the target secondary cluster and click View in the Cluster Endpoint column to see the endpoint details.

Only the default cluster endpoint (private and public network addresses) is shown here. To view additional endpoints, go to the Overview page of the cluster and check the Database Connection section.
Connect to a cluster
Applications in different regions can each connect to the nearest cluster endpoint. Choose one of the following methods.
Use DMS
Data Management (DMS) is a graphical data management tool provided by Alibaba Cloud. It offers a range of services, including data management, schema management, user management, security audit, data trends, data tracking, business intelligence (BI) charts, performance optimization, and server management. You can manage your PolarDB cluster directly in DMS without installing any other tools.
Log on to the PolarDB console. In the cluster list, click the target cluster ID to go to its Basic Information page. In the upper-right corner, click Log On To Database.

Enter the database account and password for the cluster, then click Login.

After logging in, go to Database Instances > Instances Connected in the left navigation pane to manage the cluster.

Use pgAdmin
The following steps use pgAdmin 4 v9.0.
Download and install the pgAdmin 4 client.
Open pgAdmin 4, right-click Servers, and select Register > Server....

On the General tab, set a connection name. On the Connection tab, enter the cluster connection details and click Save.


| Parameter | Description |
|---|---|
| Host name/address | The endpoint of the PolarDB cluster. Use the Private endpoint if connecting from an ECS instance in the same VPC; use the Public endpoint if connecting from an on-premises environment. |
| Port | The port number of the cluster endpoint. Default: 5432. |
| Username | The database account of the PolarDB cluster. |
| Password | The password for the database account. |
Verify the connection. A successful connection displays the cluster tree in the pgAdmin interface.

postgres is the default system database. Do not perform operations on it.
Use psql
Download psql from PostgreSQL Downloads, or use the psql included in PolarDB-Tools. The connection method is the same on Windows and Linux.
For more information about how to use psql, see psql.
Syntax
```
psql -h <host> -p <port> -U <username> -d <dbname>
```
| Parameter | Description |
|---|---|
| host | The cluster endpoint. Use the Private endpoint from an ECS instance in the same VPC, or the Public endpoint from an on-premises environment. |
| port | The port number. Default: 5432. |
| username | The database account. |
| dbname | The database name. |
Example
```
psql -h pc-xxx.rwlb.rds.aliyuncs.com -p 5432 -U testusername -d postgres
```
Connect using a programming language
PolarDB for PostgreSQL is wire-compatible with standard PostgreSQL. Use any PostgreSQL-compatible driver and replace the connection parameters with your cluster endpoint, port, database account, and password.
Java
Use the PostgreSQL JDBC driver in a Maven project.
Add the driver dependency to pom.xml:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.18</version>
</dependency>
```

Connect to the cluster. Replace <HOST>, <PORT>, <USER>, <PASSWORD>, <DATABASE>, <YOUR_TABLE_NAME>, and <YOUR_TABLE_COLUMN_NAME> with your actual values.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PolarDBConnection {
    public static void main(String[] args) {
        String url = "jdbc:postgresql://<HOST>:<PORT>/<DATABASE>";
        String user = "<USER>";
        String password = "<PASSWORD>";
        try {
            // Load the PostgreSQL JDBC driver.
            Class.forName("org.postgresql.Driver");
            // Establish the connection.
            Connection conn = DriverManager.getConnection(url, user, password);
            // Execute a query.
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT * FROM <YOUR_TABLE_NAME>");
            // Process results.
            while (rs.next()) {
                System.out.println(rs.getString("<YOUR_TABLE_COLUMN_NAME>"));
            }
            // Close resources.
            rs.close();
            stmt.close();
            conn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
Python
Use the psycopg2 library with Python 3.
Install the library:
```shell
pip3 install psycopg2-binary
```

Connect to the cluster. Replace <HOST>, <PORT>, <USER>, <PASSWORD>, <DATABASE>, and <YOUR_TABLE_NAME> with your actual values.

```python
import psycopg2

try:
    conn = psycopg2.connect(
        host="<HOST>",  # Cluster endpoint
        database="<DATABASE>",
        user="<USER>",
        password="<PASSWORD>",
        port="<PORT>"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM <YOUR_TABLE_NAME>")
    records = cursor.fetchall()
    for record in records:
        print(record)
except Exception as e:
    print("Error:", e)
finally:
    if 'cursor' in locals():
        cursor.close()
    if 'conn' in locals():
        conn.close()
```
Go
Use the database/sql package with the lib/pq driver in Go 1.23.0.
Install the driver:
```shell
go get -u github.com/lib/pq
```

Connect to the cluster. Replace <HOST>, <PORT>, <USER>, <PASSWORD>, <DATABASE>, and <YOUR_TABLE_NAME> with your actual values.

```go
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq" // Register the PostgreSQL driver.
)

func main() {
    connStr := "user=<USER> password=<PASSWORD> dbname=<DATABASE> host=<HOST> port=<PORT> sslmode=disable"
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    // Verify the connection.
    if err = db.Ping(); err != nil {
        log.Fatal(err)
    }
    fmt.Println("Connected to PostgreSQL!")
    rows, err := db.Query("SELECT * FROM <YOUR_TABLE_NAME>")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
}
```
API reference
| API | Description |
|---|---|
| DescribeDBClusterEndpoints | Queries the endpoint information of a PolarDB cluster. |
| ModifyDBClusterEndpoint | Modifies endpoint properties, including read/write mode, auto-add new nodes, consistency level, transaction splitting, primary node read requests, and connection pool settings. |
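As a rough illustration of the DescribeDBClusterEndpoints call from the table, the sketch below only assembles the request's query parameters. The helper name and the cluster ID value are hypothetical; real requests also require credentials and a signature, which the official Alibaba Cloud SDKs add for you.

```python
def build_describe_endpoints_request(db_cluster_id: str) -> dict:
    """Assemble the core query parameters for a DescribeDBClusterEndpoints call.
    Real requests additionally carry credential and signature parameters,
    which the Alibaba Cloud SDKs handle automatically."""
    return {
        "Action": "DescribeDBClusterEndpoints",
        "DBClusterId": db_cluster_id,
    }

# Hypothetical cluster ID for illustration only.
print(build_describe_endpoints_request("pc-xxxxxxxx"))
```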