PolarDB: FAQ

Last Updated: Mar 26, 2024

This topic provides answers to frequently asked questions about PolarDB for Oracle.

Basics

  • What is PolarDB?

    PolarDB is a cloud-based relational database service. PolarDB has been deployed in data centers in more than 10 regions around the world. PolarDB provides out-of-the-box online database services. PolarDB supports three independent engines. This allows PolarDB to be fully compatible with MySQL and PostgreSQL, and highly compatible with Oracle syntax. A PolarDB cluster supports a maximum storage space of 200 TB. For more information, see What is PolarDB for Oracle?

  • Why does PolarDB outperform traditional databases?

    Compared with traditional databases, PolarDB can store hundreds of terabytes of data. It also provides a wide array of features, such as high availability, high reliability, rapid elastic upgrades and downgrades, and lock-free backups. For more information, see Benefits.

  • When was PolarDB released? When was it available for commercial use?

    PolarDB was released for public preview in September 2017 and became available for commercial use in March 2018.

  • What are clusters and nodes?

    PolarDB Cluster Edition uses a multi-node cluster architecture. A cluster has one primary node and multiple read-only nodes. A single PolarDB cluster can be deployed across zones but not across regions. The PolarDB service is managed based on clusters, and you are billed for the service based on clusters. For more information, see Glossary.

  • Which programming languages are supported?

    PolarDB supports a variety of programming languages, including Java, Python, PHP, Golang, C, C++, .NET, and Node.js.
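
    For example, the following minimal Python connection sketch assumes a PostgreSQL-compatible PolarDB endpoint and the psycopg2 driver; the endpoint, database name, and credentials are placeholders.

    ```python
    # Minimal connectivity check. The endpoint, database, and credentials below
    # are placeholders; replace them with the values shown in your console.
    import psycopg2

    conn = psycopg2.connect(
        host="pc-xxxxxxxx.polardb.example.com",  # placeholder cluster endpoint
        port=5432,
        dbname="mydb",
        user="myuser",
        password="mypassword",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone()[0])
    conn.close()
    ```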

  • After I purchase PolarDB, do I need to purchase PolarDB-X database middleware to implement sharding?

    Yes.

  • Does PolarDB support table partitioning?

    Yes.

  • Does PolarDB automatically include a partitioning mechanism?

    PolarDB implements partitioning at the storage layer. This is transparent and imperceptible to users.

  • Can I change the edition of my cluster?

    Yes, the edition of your cluster can be changed. The following table describes the editions to which an edition can be changed.

    Source edition \ Destination edition | Cluster Edition | Single Node Edition | X-Engine Edition
    ------------------------------------ | --------------- | ------------------- | ----------------
    Cluster Edition                      | N/A             | Not supported       | Not supported
    Single Node Edition                  | Not supported   | N/A                 | Not supported
    X-Engine Edition                     | Not supported   | Not supported       | N/A

  • How does Single Node Edition ensure service availability and data reliability?

    Single Node Edition is used to store data for specific purposes and contains only one compute node. It uses technologies such as second-level scheduling of compute nodes and distributed multi-replica storage to ensure high service availability and high data reliability.

Pricing

  • What are the billable items of a PolarDB cluster?

    The billable items include the storage space, compute nodes, data backup feature (with a free quota), and SQL Explorer feature (optional). For more information, see Specifications and pricing.

  • Which files are stored in the storage space that incurs fees?

    The storage space that incurs fees stores database table files, index files, undo log files, redo log files, slow log files, and a small number of system files. For more information, see Specifications and pricing.

  • How do I use storage plans of PolarDB?

    You can purchase storage plans to deduct the storage fees of clusters that use the subscription or pay-as-you-go billing method. For example, if you have three clusters and each cluster has a storage capacity of 40 GB, the total storage capacity is 120 GB. The three clusters can share a 100 GB storage plan. You are charged for the excess 20 GB of storage space on a pay-as-you-go basis. For more information, see Purchase a storage plan.
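
    The deduction described above boils down to simple arithmetic. The following sketch only restates that calculation; it is not a billing API.

    ```python
    # Three clusters of 40 GB each share a 100 GB storage plan; the excess is
    # billed pay-as-you-go.
    cluster_storage_gb = [40, 40, 40]
    plan_size_gb = 100

    total_gb = sum(cluster_storage_gb)        # 120 GB in total
    covered_gb = min(total_gb, plan_size_gb)  # 100 GB deducted by the plan
    excess_gb = total_gb - covered_gb         # 20 GB billed pay-as-you-go

    print(f"total={total_gb} GB, covered by plan={covered_gb} GB, pay-as-you-go={excess_gb} GB")
    ```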

  • What is the price if I add a read-only node?

    The price of a read-only node is the same as that of a primary node. For more information, see Pricing of compute nodes.

  • Is the storage capacity doubled after I add a read-only node?

    No, the storage capacity is not doubled after you add a read-only node. PolarDB uses an architecture in which computing is decoupled from storage. The read-only node that you purchase is used as a computing resource. Therefore, the storage capacity is not increased.

    A serverless architecture is used for storage. Therefore, you do not need to specify the storage capacity when you purchase clusters. The storage capacity is automatically scaled out when the amount of data increases. You are charged only for the storage that you use. The maximum storage capacity varies based on cluster specifications. To increase the maximum storage capacity, change the specifications of a PolarDB cluster.

  • How do I stop being charged for a pay-as-you-go cluster?

    If the cluster is no longer needed, you can release the cluster. For more information, see Release a cluster. After you release the cluster, you are no longer charged for the cluster.

  • Can I change the specifications of a cluster when the temporary upgrade is still effective?

    You can perform a manual upgrade while the temporary upgrade is still in effect (the cluster is in the Running state). For more information, see Manually upgrade or downgrade a PolarDB cluster. However, the following operations are not supported: manual downgrades, automatic configuration changes, and adding or removing read-only nodes.

  • What is the maximum public bandwidth of PolarDB clusters? Do I pay for the public bandwidth?

    PolarDB itself does not limit the public bandwidth. The maximum public bandwidth of a PolarDB cluster depends on the public bandwidth of the SLB service that you use. You are not charged for Internet connections to PolarDB clusters.

  • Why do I pay for subscription clusters every day?

    The billable items of PolarDB clusters include compute nodes (primary node and read-only nodes), storage space, data backups, SQL Explorer and Audit (optional), and GDNs (optional). For data backups, you are charged only for the storage space that exceeds the free quota. For more information, see Billable items. If you use the subscription billing method, you pay for the compute nodes when you purchase the cluster. The storage space that the cluster occupies, data backups, and SQL Explorer and Audit are still billed separately on an hourly basis. Therefore, pay-as-you-go bills are generated every day even if the cluster uses the subscription billing method.

  • Am I charged when ApsaraDB RDS instances are migrated to PolarDB clusters?

    No, the migration itself is free of charge. You are charged only for the ApsaraDB RDS instances and PolarDB clusters themselves.

  • Why am I still charged for the storage space after I execute the DELETE statement to delete the data of tables in PolarDB?

    The DELETE statement only marks the data as deleted. The table space is not released, so the storage space that the tables occupy still incurs fees.

Cluster access (read/write splitting)

  • How do I implement read/write splitting in PolarDB?

    You only need to connect your application to a cluster endpoint. Read/write splitting is then implemented based on the read/write mode that is specified for the endpoint. For more information, see Create and modify a custom cluster endpoint.
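
    As an illustration, the sketch below connects through the cluster endpoint instead of the primary endpoint, assuming a PostgreSQL-compatible engine and the psycopg2 driver; the hostname, table, and credentials are placeholders. The application code does not change, because the proxy behind the cluster endpoint performs the routing.

    ```python
    # Connect through the cluster endpoint (placeholder hostname); the proxy
    # routes writes to the primary node and may route reads to read-only nodes.
    import psycopg2

    conn = psycopg2.connect(
        host="pc-xxxxxxxx-cluster.polardb.example.com",  # placeholder cluster endpoint
        port=5432, dbname="mydb", user="myuser", password="mypassword",
    )
    with conn, conn.cursor() as cur:
        # Writes are routed to the primary node.
        cur.execute("INSERT INTO orders (id, note) VALUES (%s, %s)", (1, "demo"))
        # Reads may be routed to a read-only node, subject to the consistency level.
        cur.execute("SELECT count(*) FROM orders")
        print(cur.fetchone()[0])
    conn.close()
    ```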

  • How many read-only nodes are supported in a PolarDB cluster?

    PolarDB uses a distributed cluster architecture. A cluster consists of one primary node and a maximum of 15 read-only nodes. At least one read-only node is required to ensure high availability.

  • Why are loads unbalanced among read-only nodes?

    A possible reason is that only a small number of connections to the read-only nodes exist. Another possible reason is that a read-only node is not included in the node list when you create the custom cluster endpoint.

  • What are the causes of heavy or light loads on the primary node?

    Heavy loads on the primary node may occur due to the following causes:

    • The primary endpoint is used to connect your applications to the cluster.

    • The primary node accepts read requests.

    • A large number of transaction requests exist.

    • Requests are routed to the primary node because of a high primary/secondary replication delay.

    • Read requests are routed to the primary node due to read-only node exceptions.

    The possible cause of light loads on the primary node is that the Offload Reads from Primary Node feature is enabled.

  • How do I reduce the loads on the primary node?

    You can reduce the loads on the primary node by using the following methods:

    • You can use a cluster endpoint to connect to a PolarDB cluster. For more information, see Create and modify a custom cluster endpoint.

    • If a large number of transactions cause heavy loads on the primary node, you can enable the transaction splitting feature in the console. This way, some of the queries in the transactions are routed to read-only nodes. For more information, see Transaction splitting.

    • If requests are routed to the primary node because of a replication delay, you can decrease the consistency level. For example, you can use the eventual consistency level. For more information, see Consistency levels.

    • If the primary node accepts read requests, the loads on the primary node may also become heavy. In this case, you can disable the feature that allows the primary node to accept read requests in the console. This reduces the number of read requests that are routed to the primary node.

  • Why am I unable to immediately retrieve the newly inserted data?

    It may be that the specified consistency level does not allow you to immediately retrieve the newly inserted data. The cluster endpoints of PolarDB support the following consistency levels:

    • Eventual consistency: This consistency level does not ensure that you can immediately retrieve newly inserted data, regardless of whether the query is issued from the same session (connection) or from a different session.

    • Session consistency: This consistency level ensures that you can immediately retrieve the newly inserted data based on the same session.

    Note

    A high consistency level results in heavy loads on the primary node. This compromises the performance of the primary node. Use caution when you select the consistency level. In most scenarios, the session consistency level can ensure service availability. For a few SQL statements that require strong consistency, you can add the /* FORCE_MASTER */ hint to the SQL statements to meet the consistency requirements. For more information, see Consistency levels.

  • How do I force an SQL statement to be executed on the primary node?

    If you use a cluster endpoint, add /* FORCE_MASTER */ or /* FORCE_SLAVE */ before an SQL statement to forcibly specify where the SQL statement is routed. For more information, see Best practices for consistency levels. A usage sketch follows the notes below.

    • /* FORCE_MASTER */ is used to forcibly route requests to the primary node. This method applies to a few scenarios in which strong consistency is required for read requests.

    • /* FORCE_SLAVE */ is used to forcibly route requests to a read-only node. This method applies to scenarios in which special syntax that the PolarDB proxy routes to the primary node by default to ensure accuracy, such as stored procedure calls and multi-statements, needs to be routed to a read-only node instead.

    Note
    • Hints are assigned the highest priority for routing and are not limited by consistency levels or transaction splitting. Before you use hints, evaluate the impacts on your business.

    • The hints cannot contain statements that change GUC parameters, such as /* FORCE_SLAVE */ set enable_hashjoin = off;. Such statements may cause unexpected query results.
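
    The following sketch shows the /* FORCE_MASTER */ hint in use through the cluster endpoint, assuming a PostgreSQL-compatible engine and the psycopg2 driver; the connection details and table are placeholders, while the hint itself is the one documented above.

    ```python
    # Force a strongly consistent read to the primary node by prefixing the
    # statement with the /* FORCE_MASTER */ hint.
    import psycopg2

    conn = psycopg2.connect(
        host="pc-xxxxxxxx-cluster.polardb.example.com",  # placeholder cluster endpoint
        port=5432, dbname="mydb", user="myuser", password="mypassword",
    )
    with conn, conn.cursor() as cur:
        # The hint precedes the SQL statement and overrides the routing decision.
        cur.execute("/* FORCE_MASTER */ SELECT balance FROM accounts WHERE id = %s", (42,))
        print(cur.fetchone())
    conn.close()
    ```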

  • Can I assign different endpoints to different services? Can I use different endpoints to isolate my services?

    Yes, you can create multiple custom cluster endpoints and assign them to different services. If the endpoints use different underlying nodes, the services are isolated and do not affect each other. For more information about how to create a custom endpoint, see Create a custom cluster endpoint.

  • How do I separately create a single-node endpoint for one of the read-only nodes if multiple read-only nodes exist?

    You can create a single-node endpoint only if the Read/write Mode parameter for the cluster endpoint is set to Read Only and the cluster has three or more nodes. For more information, see Create a custom cluster endpoint.

    Warning

    However, if you create a single-node endpoint for a read-only node and the read-only node becomes faulty, the single-node endpoint may be unavailable for up to 1 hour. We recommend that you do not create single-node endpoints in your production environment.

  • What is the maximum number of single-node endpoints that I can create in a cluster?

    If your cluster has three nodes, you can create a single-node endpoint for only one of the read-only nodes. If your cluster has four nodes, you can create single-node endpoints for two of the read-only nodes, one for each. Similar rules apply if your cluster has five or more nodes.

  • Read-only nodes have loads when I use only the primary endpoint. Does the primary endpoint support read/write splitting?

    No, the primary endpoint does not support read/write splitting. The primary endpoint is always connected to only the primary node. Read-only nodes may have a small number of queries per second (QPS). This is a normal case and is irrelevant to the primary endpoint.

Management and maintenance

  • Does a replication delay occur when I replicate data from the primary node to the read-only nodes?

    Yes, a replication delay of a few milliseconds occurs.

  • When does a replication delay increase?

    A replication delay increases in the following scenarios:

    • The primary node processes a large number of write requests and generates redo logs faster than the read-only nodes can replay them.

    • The read-only nodes are under heavy load, which consumes the resources that are otherwise used to replay redo logs.

    • The system reads and writes redo logs at a low rate due to I/O bottlenecks.

  • How do I ensure the consistency of query results if a replication delay occurs?

    You can use a cluster endpoint and select an appropriate consistency level for the endpoint. The supported consistency levels, from highest to lowest, are session consistency and eventual consistency. For more information, see Create and modify a custom cluster endpoint.

  • Can the recovery point objective (RPO) be zero if a single node fails?

    Yes.

  • How are node specifications upgraded in the backend, for example, upgrading node specifications from 2 cores and 8 GB memory to 4 cores and 16 GB memory? How does the upgrade impact my services?

    When you change specifications, the PolarDB proxy and database nodes are upgraded to the new configuration. A rolling upgrade method is used to upgrade the nodes in turn to minimize the impact on your services. Each upgrade takes about 10 to 15 minutes, and the impact on your services lasts for no more than 30 seconds. During this period, one to three transient connection errors may occur. For more information, see Change the specifications of a PolarDB cluster.

  • How long does it take to add a node? Are my services affected when the node is added?

    It takes about 5 minutes to add a node. Your services are not affected when the node is added. For more information about how to add a node, see Add a read-only node.

    Note

    After you add a read-only node, new read/write splitting connections forward requests to the node. Read/write splitting connections that were established before the node was added do not forward requests to it. You must close those connections and establish them again, for example, by restarting your application.

  • How long does it take to upgrade a kernel minor version to the latest revision version? Are my services affected when the upgrade is complete?

    PolarDB uses a rolling upgrade method to upgrade multiple nodes to minimize the impacts on your services. In most cases, an upgrade requires less than 30 minutes to complete. PolarProxy or the database engine is restarted during the upgrade. This may interrupt services. We recommend that you perform the upgrade during off-peak hours. Make sure that your application can automatically reconnect to your database. For more information, see Version management.
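
    A minimal reconnect-and-retry sketch is shown below, assuming a PostgreSQL-compatible endpoint and the psycopg2 driver; the hostname, credentials, and retry policy are placeholders. The idea is simply that the application reopens its connection after a transient error, such as the brief interruption during an upgrade or failover.

    ```python
    import time
    import psycopg2

    def query_with_retry(sql, params=None, attempts=3, delay_seconds=2):
        """Run a query, reconnecting after transient connection errors."""
        last_error = None
        for _ in range(attempts):
            conn = None
            try:
                conn = psycopg2.connect(
                    host="pc-xxxxxxxx-cluster.polardb.example.com",  # placeholder
                    port=5432, dbname="mydb", user="myuser", password="mypassword",
                )
                with conn, conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            except psycopg2.OperationalError as err:  # e.g. transient connection error
                last_error = err
                time.sleep(delay_seconds)
            finally:
                if conn is not None:
                    conn.close()
        raise last_error

    # Example call with a placeholder query:
    # rows = query_with_retry("SELECT 1")
    ```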

  • How is an automatic failover implemented?

    PolarDB uses an active-active high-availability cluster architecture. This architecture supports automatic failovers between the primary node that supports reads and writes and the read-only nodes. The system automatically elects a new primary node. Each node in a PolarDB cluster has a failover priority. This priority determines the probability at which a node is elected as the primary node during a failover. If multiple nodes have the same failover priority, they all have the same probability of being elected as the primary node. For more information, see Automatic failover and manual failover.

Backup and restoration

  • How does PolarDB back up data?

    PolarDB uses snapshots to back up data. For more information, see Backup method 2: Manual backup.

  • How fast can a database be restored?

    It takes 40 minutes to restore or clone 1 TB of data in a database based on backup sets or snapshots. If you want to restore data to a specific time point, you must include the time required to replay the redo logs. It takes about 20 to 70 seconds to replay 1 GB of redo log data. The total restoration time is the sum of the time required to restore data based on backup sets and the time required to replay the redo logs.
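
    Based on the figures above, a rough estimate can be computed as follows; the data and redo log volumes in this sketch are illustrative assumptions.

    ```python
    # Roughly 40 minutes per TB restored from a snapshot, plus about 20-70
    # seconds per GB of redo logs replayed for point-in-time recovery.
    data_tb = 1.0   # assumed data volume to restore, in TB
    redo_gb = 30    # assumed redo log volume to replay, in GB

    restore_minutes = data_tb * 40
    replay_minutes_min = redo_gb * 20 / 60
    replay_minutes_max = redo_gb * 70 / 60

    print(f"Estimated total restore time: "
          f"{restore_minutes + replay_minutes_min:.0f} to "
          f"{restore_minutes + replay_minutes_max:.0f} minutes")
    ```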

Performance and capacity

  • What is the maximum number of tables? What is the upper limit for the number of tables if I want to ensure that the performance is not compromised?

    The maximum number of tables depends on the number of files. For more information, see Limits.

  • Can table partitioning improve the query performance of PolarDB?

    In most cases, if an SQL query falls within a single partition, query performance can be improved, as illustrated in the sketch below.
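
    The sketch below illustrates the idea with a range-partitioned table, assuming a PostgreSQL-compatible engine and the psycopg2 driver; the table, columns, and connection details are made up. A query bounded to one month only needs to scan that month's partition.

    ```python
    import psycopg2

    DDL = """
    CREATE TABLE orders (
        id        bigint,
        order_day date
    ) PARTITION BY RANGE (order_day);

    CREATE TABLE orders_2024_01 PARTITION OF orders
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE TABLE orders_2024_02 PARTITION OF orders
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
    """

    # This query falls entirely within the January partition, so only
    # orders_2024_01 has to be scanned.
    PRUNED_QUERY = """
    SELECT count(*) FROM orders
    WHERE order_day >= DATE '2024-01-01' AND order_day < DATE '2024-02-01';
    """

    conn = psycopg2.connect(
        host="pc-xxxxxxxx.polardb.example.com",  # placeholder endpoint
        port=5432, dbname="mydb", user="myuser", password="mypassword",
    )
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute(PRUNED_QUERY)
        print(cur.fetchone()[0])
    conn.close()
    ```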

  • Can I create 10,000 databases in PolarDB? What is the maximum number of databases in PolarDB?

    Yes, you can create 10,000 databases in PolarDB. The maximum number of databases you can create depends on the number of files. For more information, see Limits.

  • How are the input/output operations per second (IOPS) limited and isolated? Do the multiple nodes of a PolarDB cluster compete for I/O resources?

    The IOPS is specified for each node of a PolarDB cluster based on the node specifications. The IOPS of each node is isolated from that of the other nodes, so the nodes do not affect each other.

  • Is the primary node affected if the performance of the read-only nodes is compromised?

    Yes, the memory consumption of the primary node is slightly increased if the loads on the read-only nodes are excessively heavy and the replication delay increases.

  • What is the impact on the database performance if I enable the SQL Explorer (full SQL log audit) feature?

    The database performance is not affected if you enable the SQL Explorer feature.

  • Which high-speed network protocol does PolarDB use?

    PolarDB uses dual-port Remote Direct Memory Access (RDMA) to ensure high I/O throughput between compute nodes and storage nodes, and between data replicas. Each port provides a data rate of up to 25 Gbit/s at a low latency.

  • What is the maximum bandwidth that I can use if I access PolarDB from the Internet?

    If you access PolarDB from the Internet, the maximum bandwidth is 10 Gbit/s.