PolarDB for PostgreSQL clusters support read/write splitting. Read and write requests sent to a cluster endpoint are automatically forwarded to the relevant nodes.

Background information

When a database receives a large number of read requests but only a few write requests, a single node may be unable to handle the workload, and core services may be affected. Cluster endpoints automatically forward write requests to the primary node and read requests to read-only nodes. This way, read capacity can be elastically scaled to handle large numbers of read requests sent to the database.

Benefits

  • One endpoint, simplified maintenance

    If you do not use the cluster endpoint, you must configure the endpoints of the primary node and each read-only node in your application so that write requests are sent to the primary node and read requests are sent to the read-only nodes. ApsaraDB for PolarDB provides a cluster endpoint instead. After you connect to this endpoint, read and write requests are automatically forwarded to the primary node and read-only nodes, which reduces maintenance costs. You can also expand the capacity of an ApsaraDB for PolarDB cluster by adding read-only nodes, without modifying your applications.

  • Session-level read consistency

    When a client connects to the backend through the cluster endpoint, the built-in read/write splitting proxy automatically establishes a connection with the primary node and each read-only node. Within a session, the proxy selects an appropriate node based on the data synchronization progress of each database node, and forwards each request only to nodes whose data is up to date, balancing the load across the nodes.
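
    For example, within a single session that is connected through the cluster endpoint, a read that follows a write observes that write. A minimal sketch, assuming a hypothetical orders table:

      -- Both statements run in the same session on the cluster endpoint.
      INSERT INTO orders (id, status) VALUES (1, 'paid');  -- write: routed to the primary node
      SELECT status FROM orders WHERE id = 1;              -- read: routed only to a node that has
                                                           -- already replayed the preceding write,
                                                           -- so it returns 'paid'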

  • Load balancing of PREPARE statements

    Based on the information in EXECUTE statements, the built-in proxy automatically locates the database nodes on which the corresponding PREPARE statements were previously executed, balancing the load of extended queries.
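
    A minimal sketch of this behavior (the statement name and query are hypothetical):

      PREPARE fetch_user (int) AS SELECT name FROM users WHERE id = $1;
      EXECUTE fetch_user(42);  -- the proxy routes this EXECUTE to a node on which
                               -- fetch_user has already been prepared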

  • Support for native high-security links, improving performance

    You can deploy your own proxy on the cloud to implement read/write splitting. However, this may introduce excessive latency because requests are parsed and forwarded by multiple components before they arrive at a database. ApsaraDB for PolarDB uses a built-in proxy for read/write splitting, which offers lower latency and better query performance than external components.

  • Node health checks to enhance database availability

    The read/write splitting module of ApsaraDB for PolarDB performs health checks on the primary node and read-only nodes of a cluster. If a node fails or its latency exceeds a specified threshold, ApsaraDB for PolarDB stops distributing read requests to that node and redirects the requests to other healthy nodes. This ensures that applications can access the ApsaraDB for PolarDB cluster even if a read-only node fails. After the node is repaired, it is automatically added back to the request distribution system.

Limits

  • The following commands or functions are not supported:
    • Connecting to a cluster in replication mode. If you need to set up a dual-node cluster based on a primary/secondary replication architecture, use the endpoint of the primary node.
    • The name of a temporary table cannot be used to declare the %ROWTYPE attribute or otherwise be used as a row type, as in the following example:
      -- Create a temporary table, then use its name as a row type in casts.
      create temp table fullname (first text, last text);
      select '(Joe,von Blow)'::fullname, '(Joe,d''Blow)'::fullname;
    • Creating temporary resources by using functions (see the sketch below).
      • An SQL statement that queries a temporary table created by a function may return an error message indicating that the table does not exist.
      • Executing a function that contains a PREPARE statement may return an error message indicating that the PREPARE statement name does not exist.
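      A minimal sketch of how the first error can occur, based on the routing rules in the next item; the function and table names are hypothetical:
        -- The temporary table is created only on the node that executes the function.
        CREATE FUNCTION make_tmp() RETURNS void AS $$
        BEGIN
          CREATE TEMP TABLE tmp_data (id int);
        END;
        $$ LANGUAGE plpgsql;
        SELECT make_tmp();       -- contains a function call, so it is routed to the primary node
        SELECT * FROM tmp_data;  -- a plain query that may be routed to a read-only node,
                                 -- where the temporary table does not exist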
  • Routing-related restrictions (see the sketch after this list):
    • A multi-statement request is routed to the primary node, and all subsequent requests within the session are also routed to the primary node.
    • A request message that is 16 MB or larger is routed to the primary node, and all subsequent requests within the session are also routed to the primary node.
    • Requests within a transaction are routed to the primary node; load balancing resumes after the transaction ends.
    • All statements that use functions (except aggregate functions such as COUNT and SUM) are routed to the primary node.
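
    A minimal sketch of the transaction and function routing rules (the table and column names are hypothetical):
      SELECT * FROM orders;             -- outside a transaction: may be routed to a read-only node
      BEGIN;
      SELECT * FROM orders;             -- inside the transaction: routed to the primary node
      UPDATE orders SET status = 'x' WHERE id = 1;
      COMMIT;                           -- load balancing resumes after the transaction ends
      SELECT upper(status) FROM orders; -- uses a non-aggregate function, so it is routed
                                        -- to the primary node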

Apply for or change a cluster endpoint

  1. Log on to the ApsaraDB for PolarDB console.
  2. In the upper-left corner of the console, select a region.
  3. Click the ID of the target cluster.
  4. On the Overview page, find Cluster Endpoints in the Connection Information section.
  5. Click Apply. In the dialog box that appears, click Confirm. Refresh the page to view the cluster endpoint.
    Note If an existing cluster does not have a cluster endpoint, you must manually apply for a cluster endpoint. A cluster endpoint is automatically assigned to newly purchased clusters. If an ApsaraDB for PolarDB cluster has a cluster endpoint, you can skip to Step 6 to change the endpoint.
  6. Click Modify. In the Modify Endpoint dialog box, enter a new cluster endpoint and click Submit.