
PolarDB: FAQ

Last Updated: Jan 17, 2026

This topic answers frequently asked questions about PolarDB for MySQL.

Basic questions

  • Q: What is PolarDB?

    A: PolarDB is a relational database cloud service deployed in data centers across more than 10 regions worldwide. It provides an out-of-the-box online database service. PolarDB supports three independent engines that are 100% compatible with MySQL, 100% compatible with PostgreSQL, and highly compatible with Oracle syntax. It offers a storage capacity of up to 200 TB. For more information, see What is PolarDB for MySQL Enterprise Edition.

  • Q: Why is the cloud-native database PolarDB better than traditional databases?

    A: Compared to traditional databases, the cloud-native database PolarDB supports hundreds of terabytes of data storage. It provides high availability, high reliability, rapid elastic scaling, and lock-free backups. For more information, see Benefits.

  • Q: When was PolarDB released and when did it become commercially available?

    A: The public preview was released in September 2017. It became commercially available in March 2018.

  • Q: What are clusters and nodes?

    A: PolarDB Cluster Edition uses a multi-node cluster architecture. A cluster contains one primary node and multiple read-only nodes. A single PolarDB cluster can span across zones but not across regions. Management and billing are performed at the cluster level. For more information, see Glossary.

  • Q: What programming languages are supported?

    A: PolarDB supports programming languages such as Java, Python, PHP, Golang, C, C++, .NET, and Node.js. Any programming language that supports native MySQL can be used directly with PolarDB for MySQL. For more information, see the official MySQL website.

  • Q: What storage engines are supported?

    A: PolarDB supports two product series. The storage engines supported by each series are as follows:

    PolarDB for MySQL Cluster Edition uses the InnoDB storage engine for all tables. When you create a table, PolarDB for MySQL automatically converts non-InnoDB engines, such as MyISAM, Memory, and CSV, to the InnoDB engine. Therefore, you can migrate tables to PolarDB for MySQL even if they do not use the InnoDB engine.

  • Q: Is PolarDB a distributed database?

    A: Yes. PolarDB is a distributed storage cluster based on the ParallelRaft consensus protocol. The compute engine consists of 1 to 16 compute nodes distributed across different servers. It provides up to 200 TB of storage capacity and supports up to 88 cores and 710 GB of memory. You can dynamically scale out storage and compute resources online without affecting your services.

  • Q: After purchasing PolarDB, do I still need to purchase the PolarDB-X database middleware for sharding?

    A: Yes.

  • Q: Does PolarDB support table partitioning?

    A: Yes.

  • Q: Can I change the region of a PolarDB cluster after purchase?

    A: You cannot change the region of a cluster after you purchase it.

  • Q: Does PolarDB automatically include a partitioning mechanism?

    A: PolarDB performs partitioning at the storage layer. This process is transparent to users.

  • Q: How does the Single Node series ensure service availability and data reliability?

    A: The Single Node series is a database product that uses a single compute node for specific purposes. Although it has only one node, the Single Node series uses technologies such as second-level compute scheduling and distributed multi-replica storage to ensure high service availability and data reliability.

  • Q: How can I purchase a single-node PolarDB cluster?

    A: The Single Node product series is no longer available. However, you can create a single-node PolarDB cluster by purchasing a cluster and setting the number of read-only nodes to 0.

Compatibility

  • Q: Is it compatible with community edition MySQL?

    A: PolarDB for MySQL is 100% compatible with community edition MySQL.

  • Q: What transaction isolation levels are supported?

    A: PolarDB for MySQL supports three isolation levels: READ_UNCOMMITTED, READ_COMMITTED (default), and REPEATABLE_READ. It does not support the SERIALIZABLE isolation level.

  • Q: Is there a difference between SHOW PROCESSLIST and community edition MySQL?

    A: If you query using the primary endpoint, there is no difference. However, if you query using the cluster endpoint, there is a slight difference. Multiple records with the same thread ID may appear, corresponding to each node in the PolarDB for MySQL cluster.

  • Q: Is there a difference in the metadata lock (MDL) mechanism between PolarDB for MySQL and community edition MySQL?

    A: The MDL mechanism of PolarDB for MySQL is the same as that of community edition MySQL. However, PolarDB for MySQL uses a shared storage architecture. This can cause data inconsistency if a read-only node queries intermediate data while the primary node is performing a DDL operation. To prevent this, PolarDB for MySQL synchronizes the exclusive MDLs involved in the DDL operation to the read-only nodes through Redo logs. This blocks other user threads on the read-only nodes from accessing the table data during the DDL operation. In some scenarios, this may block the DDL operation. You can run the show processlist command to view the execution status of the DDL operation. If the status is Wait for syncing with replicas, this indicates that the issue has occurred. For solutions, see View the execution status of DDL operations and the status of MDLs.

  • Q: Is there a difference between the binary logging format and the native MySQL format?

    A: There are no differences.

  • Q: Are performance schema and sys schema supported?

    A: Yes.

  • Q: Is there a difference in table statistics collection compared to community edition MySQL?

    A: The table statistics on the primary node of PolarDB for MySQL are the same as in community edition MySQL. To ensure consistent execution plans between the primary and read-only nodes, the primary node synchronizes statistics to the read-only nodes whenever they are updated. In addition, read-only nodes can use the ANALYZE TABLE operation to actively load the latest statistics from the disk.

  • Q: Does PolarDB support XA transactions, and is there a difference from official MySQL?

    A: Yes, it is supported. There is no difference.

  • Q: Does PolarDB support full-text indexes?

    A: Yes.

    Note

    Currently, when you use full-text indexes, you may experience data latency in the index cache on read-only nodes. Use the primary endpoint for both read and write operations on full-text indexes to ensure you read the latest data.

  • Q: Are Percona tools supported?

    A: Yes. However, we recommend that you use the native online DDL operation instead.

  • Q: Is gh-ost supported?

    A: Yes. However, we recommend that you use the native online DDL operation instead.

Billing

  • Q: What are the billable items for PolarDB?

    A: Billable items include storage space, compute nodes, backups (with a free quota), and SQL Explorer (optional). For more information, see Billable items.

  • Q: What does the charged storage space include?

    A: It includes database table files, index files, undo log files, Redo log files, binary logging files, slow log files, and a small number of system files. For more information, see Overview.

  • Q: If I add a read-only node, how is the price calculated?

    A: The price of a read-only node is the same as the price of a primary node. For more information, see Pricing details for compute nodes.

  • Q: If I add a read-only node, will the storage capacity double?

    A: PolarDB uses a compute-storage decoupled architecture. The read-only nodes you purchase are compute resources, so the storage capacity does not increase.

    Storage space is serverless. You do not need to select a capacity when you make a purchase. The storage space automatically scales out online as your data grows, and you are charged only for the actual amount of data you use. Each cluster specification has a corresponding maximum storage capacity. To increase the storage capacity limit, upgrade the cluster specifications.

  • Q: For a pay-as-you-go cluster, how can I stop incurring charges?

    A: If you are sure you no longer need the cluster, you can release the cluster. No more fees are generated after the cluster is released.

  • Q: Can I change the specifications of a cluster during a temporary upgrade?

    A: During a temporary upgrade (when the cluster is in the running state), you can manually upgrade the specifications. However, you cannot manually downgrade the specifications, automatically change specifications, or add or remove nodes.

  • Q: What is the public bandwidth of PolarDB? Are there any fees?

    A: PolarDB itself has no public bandwidth limit. The limit mainly depends on the bandwidth of the SLB service you use. PolarDB does not charge for public network connections.

  • Q: Why are there still daily charges for a subscription cluster?

    A: The main billable items for PolarDB include compute nodes (primary and read-only nodes), storage space, data backups (charged only when the free quota is exceeded), SQL Explorer (optional), and Global Database Network (GDN) (optional). For more information, see Billable items. The subscription billing method means you prepay for the cluster's compute nodes when you create the database cluster. However, fees for storage space, data backups, and SQL Explorer are not included. While the database is in use, your account is charged hourly based on the storage space that the cluster occupies. Therefore, pay-as-you-go bills are still generated for subscription clusters.

  • Q: Is there an extra charge for one-click migration from RDS to PolarDB?

    A: The one-click migration process is free. You are only charged for the RDS instance and the PolarDB cluster itself.

  • Q: Why am I still charged for storage space after deleting data from a PolarDB table using delete?

    A: The delete operation only marks the data for deletion and does not release the tablespace. To reclaim the space, rebuild the table, for example by running OPTIMIZE TABLE during off-peak hours.

Cluster access (read/write splitting)

  • Q: How do I implement read/write splitting for PolarDB?

    A: Use the cluster endpoint in your application to implement read/write splitting based on the configured read/write mode. For more information, see Configure a database proxy.

  • Q: What is the maximum number of read-only nodes that a PolarDB cluster can support?

    A: PolarDB uses a distributed cluster architecture. A cluster contains one primary node and up to 15 read-only nodes (at least one is required to ensure high availability).

  • Q: What causes an unbalanced load among multiple read-only nodes?

    A: An unbalanced load among read-only nodes can be caused by a small number of connections to the read-only nodes or by a custom cluster endpoint that does not include a specific read-only node in its allocation.

  • Q: What causes a high or low load on the primary node?

    A: A high load on the primary node can be caused by direct connections to the primary endpoint, the primary node accepting read requests, many transaction requests, high replication delay causing requests to be routed to the primary node, or a read-only node failure causing read requests to be routed to the primary node.

    A low load on the primary node may be because the option to offload reads from the primary node is enabled.

  • Q: How can I reduce the load on the primary node?

    A: You can use the following methods to reduce the load on the primary node:

    • Use the cluster endpoint to connect to the PolarDB cluster. For more information, see Configure a database proxy.

    • If the primary node is under pressure due to many transactions, you can enable transaction splitting in the console to route some queries within transactions to read-only nodes. For more information, see Transaction splitting.

    • If requests are routed to the primary database due to replication delay, you can consider lowering the consistency level (for example, using eventual consistency). For more information, see Consistency level.

    • If the primary database accepts read requests, it may also lead to a high load. You can enable the feature to offload reads from the primary node in the console to reduce the number of read requests routed to the primary database. For more information, see Offload reads from primary node.

  • Q: Why can't I read data that was just inserted?

    A: This issue may be caused by the consistency level configuration. The cluster endpoint of PolarDB supports the following consistency levels:

    • Eventual consistency: Eventual consistency does not guarantee that you can immediately read newly inserted data, either in the same session or in different sessions.

    • Session consistency: Guarantees that you can read data inserted within the same session.

    • Global consistency: Guarantees that you can read the latest data in both the same and different sessions.

    Note

    The higher the consistency level, the lower the performance and the greater the pressure on the primary database. Choose carefully. For most application scenarios, session consistency can ensure that services work properly. For a few statements that require strong consistency, you can use the hint /* FORCE_MASTER */. For more information, see Consistency level.

  • Q: How can I force an SQL statement to be executed on the primary node?

    A: When you use a cluster endpoint, add /* FORCE_MASTER */ or /* FORCE_SLAVE */ before the SQL statement to force the routing direction for that statement. For more information, see HINT syntax.

    • /* FORCE_MASTER */ forces the request to be routed to the primary database. This can be used for a small number of read requests that have high consistency requirements.

    • /* FORCE_SLAVE */ forces the request to be routed to a secondary database. This can be used in scenarios where the PolarProxy requires special syntax to be routed to a secondary database to ensure correctness (for example, calls to stored procedures and the use of multistatement are routed to the primary database by default).

    Note
    • Hints have the highest routing priority and are not constrained by consistency levels or transaction splitting. Evaluate the impact before using them.

    • Do not include statements that modify GUC parameters in hint statements, such as /*FORCE_SLAVE*/ set enable_hashjoin = off; . Such statements may cause unexpected query results.
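    As a sketch of how an application might apply these hints, the following Python helper (a hypothetical example, not part of any PolarDB SDK) prepends the routing hint to a statement string before it is sent through the cluster endpoint:

```python
# Sketch: prepend a PolarDB routing hint to a SQL statement string.
# The helper and the sample statements are hypothetical, not a PolarDB API.

def with_hint(sql: str, force_master: bool) -> str:
    """Prefix a statement with a routing hint for the cluster endpoint."""
    hint = "/* FORCE_MASTER */" if force_master else "/* FORCE_SLAVE */"
    return f"{hint} {sql}"

# A freshly written row that must be read back immediately: route to the primary.
strong_read = with_hint("SELECT balance FROM accounts WHERE id = 42", force_master=True)
print(strong_read)

# A read that is safe on a replica can be forced there instead.
replica_read = with_hint("SELECT COUNT(*) FROM orders", force_master=False)
print(replica_read)
```

    Because hints have the highest routing priority, a helper like this should be applied only to the small set of statements that truly need it.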

  • Q: Can I assign different endpoints to different services? Can these different endpoints achieve isolation?

    A: You can create multiple custom endpoints for different services. If the underlying nodes are different, the custom endpoints can also be isolated from each other and will not interfere with each other. For information about how to create a custom endpoint, see Add a custom cluster endpoint.

  • Q: If there are multiple read-only nodes, how can I create a separate single-node endpoint for one of them?

    A: You can create a single-node endpoint only when the cluster endpoint's read/write mode is read-only and the cluster has three or more nodes. For detailed steps, see Set a cluster endpoint.

    Warning

    After you create a single-node endpoint, if this node fails, the endpoint may be unavailable for up to 1 hour. Do not use it in a production environment.

  • Q: What is the maximum number of single-node endpoints that can be created in a cluster?

    A: If your cluster has 3 nodes, you can create a single-node endpoint for only 1 of the read-only nodes. If the cluster has 4 nodes, you can create separate single-node endpoints for 2 of the read-only nodes, and so on.
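    One way to read this rule: with one primary node, and at least one read-only node that must remain outside any single-node endpoint, the maximum number of single-node endpoints is the total node count minus 2. A minimal sketch of that interpretation:

```python
# Sketch of the rule above: a cluster has one primary node, and at least one
# read-only node must remain outside any single-node endpoint, so the maximum
# number of single-node endpoints is (total nodes - 2).
def max_single_node_endpoints(total_nodes: int) -> int:
    if total_nodes < 3:
        return 0  # single-node endpoints require three or more nodes
    return total_nodes - 2

for n in (3, 4, 5):
    print(n, "nodes ->", max_single_node_endpoints(n), "single-node endpoint(s)")
```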

  • Q: I only use the primary endpoint, but I find that the read-only nodes also have a load. Does the primary endpoint also support read/write splitting?

    A: The primary endpoint does not support read/write splitting. It always connects only to the primary node. It is normal for read-only nodes to have a small number of queries per second (QPS), which is unrelated to the primary endpoint.

Management and maintenance

  • Q: How can I add fields and indexes online?

    A: You can use native online DDL, pt-osc, or gh-ost tools. We recommend that you use the native online DDL operation.

    Note

    When using the pt-osc tool, do not use parameters related to master-slave detection, such as the recursion-method parameter. This is because the pt-osc tool performs master-slave detection based on binary logging replication, but PolarDB uses physical replication internally and does not have replication information based on binary logging.

  • Q: Is the bulk insert feature supported?

    A: Yes.

  • Q: Do read/write nodes support bulk inserts? What is the maximum number of values for a single insert operation?

    A: Yes. The maximum number of values supported at one time is determined by the value of the max_allowed_packet parameter. For more information, see Replication and max_allowed_packet.
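    As a rough sketch, an application can check the size of a multi-row INSERT against max_allowed_packet before sending it. The 64 MB limit below is an illustrative value; read the actual parameter from your cluster:

```python
# Sketch: estimate whether a bulk INSERT statement fits within
# max_allowed_packet before sending it. The 64 MB limit is an illustrative
# value, not the value configured on your cluster.
MAX_ALLOWED_PACKET = 64 * 1024 * 1024  # bytes

def build_bulk_insert(table, rows):
    """Build a multi-row INSERT for (id, name) tuples. Illustrative only:
    a real application should use parameterized queries, not string building."""
    values = ", ".join(f"({rid}, '{name}')" for rid, name in rows)
    return f"INSERT INTO {table} (id, name) VALUES {values}"

stmt = build_bulk_insert("t", [(i, f"name_{i}") for i in range(1000)])
size = len(stmt.encode("utf-8"))
assert size < MAX_ALLOWED_PACKET  # the statement fits in a single packet
print(size, "bytes")
```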

  • Q: Can I perform a bulk insert operation using the cluster endpoint?

    A: Yes.

  • Q: Is there a replication delay between the primary node and the read-only nodes?

    A: Yes, there is a millisecond-level delay between them.

  • Q: What causes the replication delay to increase?

    A: The replication delay increases in the following situations:

    • A high write load on the primary node generates Redo logs faster than the read-only nodes can apply them.

    • A high load on the read-only nodes consumes resources that are required to apply Redo logs.

    • An I/O bottleneck slows down the reading and writing of Redo logs.

  • Q: How can I ensure query consistency when there is a replication delay?

    A: You can use a cluster endpoint and select an appropriate consistency level for it. Currently, the consistency levels from highest to lowest are global consistency (strong consistency), session consistency, and eventual consistency. For more information, see Consistency level.

  • Q: Can a Recovery Point Objective (RPO) of 0 be guaranteed in the event of a single node failure?

    A: Yes.

  • Q: How is a specification upgrade (for example, from 2 cores and 8 GB to 4 cores and 16 GB) implemented on the backend? What is the impact on services?

    A: Both the proxy and database nodes of PolarDB are upgraded to the new specifications. A rolling upgrade is performed on the nodes to minimize the impact on services. The upgrade takes about 10 to 15 minutes, with a service impact of no more than 30 seconds. During this time, one to three transient disconnections may occur. For more information, see Manually change specifications.

  • Q: How long does it take to add a node? Will it affect services?

    A: Adding a node takes 5 minutes and does not affect your services. For more information about how to add a node, see Add a node.

    Note

    After a read-only node is added, new read/write splitting connections forward requests to the new node. Existing read/write splitting connections do not forward requests to the new node. You must re-establish these connections, for example, by restarting your application.

  • Q: How long does it take to upgrade to the latest revision? Will it affect services?

    A: PolarDB uses a rolling upgrade of multiple nodes to minimize the impact on services. A version upgrade typically takes no more than 30 minutes. During the upgrade, the database proxy or the DB kernel engine is restarted, which may cause transient database disconnections. Perform the upgrade during off-peak hours and ensure your application has an automatic reconnection mechanism. For more information, see Minor version management.

  • Q: How does automatic failover work?

    A: PolarDB uses an active-active high availability architecture. It performs automatic failover between the primary read-write node and the read-only nodes. During this process, the system automatically elects a new primary node. PolarDB assigns a failover priority to each node, which determines its probability of being elected as the new primary node. If multiple nodes have the same priority, they have an equal probability of being elected as the primary node. For more information, see Automatic or manual primary/standby node switchover.

  • Q: What permissions are required to terminate a connection in PolarDB for MySQL?

    A: In MySQL, terminating a connection using the KILL command requires specific permissions. Specifically, you need the PROCESS permission to terminate the connections of other regular users.

    Note
    • Terminate your own connection: Any user can terminate their own connection without any additional permissions.

    • Terminate other sessions of the same user: You need the PROCESS permission.

    • Terminate connections of other regular users: You need the PROCESS permission. High-privilege accounts in PolarDB for MySQL should therefore use the KILL command with caution.

  • Q: The operational log shows a [ERROR] InnoDB: fil_space_extend space_name:xxx error. Does this affect my current services?

    A: No, this does not affect your services. This log indicates that after the read/write node of a PolarDB cluster extends a file, the read-only nodes synchronize this file size information in their memory. In MySQL 5.7 clusters, this operation is logged at the ERROR level. Therefore, on a read-only node, you can consider it an INFO level message that does not affect your services.

  • Q: What is the architecture of the database proxy? Does it have a failover mechanism? How is its high availability ensured?

    A: The database proxy uses a dual-node high availability architecture and distributes traffic evenly between the two proxy nodes. The system continuously monitors the health of the proxy nodes. If a node fails, the system proactively disconnects its connections, and the remaining healthy node automatically takes over all traffic to ensure uninterrupted service. At the same time, the system automatically rebuilds and recovers the failed proxy node. This process is typically completed in about 2 minutes, during which the database cluster remains accessible.

    In rare cases, connections to a failed node may not be disconnected promptly and may become unresponsive. To handle this, configure a timeout policy on the client, such as the JDBC socketTimeout and connectTimeout parameters. This allows the application layer to promptly detect and terminate suspended connections, improving the system's fault tolerance and response efficiency.
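    The timeout policy described above can be sketched in Python. The parameter names below follow the PyMySQL client library (connect_timeout, read_timeout, write_timeout); they play the same role as the JDBC socketTimeout and connectTimeout settings, and the endpoint and credentials are placeholders:

```python
# Sketch of a client-side timeout policy for handling suspended connections
# after a proxy-node failure. Parameter names follow the PyMySQL library;
# the endpoint, user, and password are placeholders.
conn_kwargs = {
    "host": "your-cluster-endpoint.example.com",  # placeholder endpoint
    "user": "app",
    "password": "***",
    "connect_timeout": 5,   # fail fast if a proxy node is unreachable
    "read_timeout": 30,     # detect hung reads instead of waiting forever
    "write_timeout": 30,
}

# With a reachable cluster, the connection would be opened like this:
# import pymysql
# conn = pymysql.connect(**conn_kwargs)
print(conn_kwargs["connect_timeout"], conn_kwargs["read_timeout"])
```

    Choose timeout values that exceed your slowest legitimate query, so that the policy terminates only genuinely suspended connections.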

  • Q: How can I view the error log of a PolarDB for MySQL cluster?

    A: Go to the PolarDB console. On the details page of the target cluster, choose Diagnostics and Optimization > Log Management from the navigation pane on the left. On the Operational Log tab, view the corresponding error log.

  • Q: Does PolarDB for MySQL automatically create an implicit primary key for a table without a primary key?

    A: By default, PolarDB for MySQL creates an implicit primary key for a table without a primary key.

    View the implicit primary key

    Log on to the cluster, run the SET show_ipk_info = 1 command, and then run the SHOW CREATE TABLE command.

    -- Set the parameter to show the implicit primary key
    SET show_ipk_info = 1;
    
    -- View the table schema
    SHOW CREATE TABLE t;

    In the returned table schema, the __#alibaba_rds_row_id#__ column is the implicit primary key.

    +-------+------------------------------------------------------------------------------------------------------------+
    | Table | Create Table                                                                                               |
    +-------+------------------------------------------------------------------------------------------------------------+
    | t     | CREATE TABLE `t` (
      `id` int(11) DEFAULT NULL,
      `__#alibaba_rds_row_id#__` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'Implicit Primary Key by RDS',
      KEY `__#alibaba_rds_row_id#__` (`__#alibaba_rds_row_id#__`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
    +-------+------------------------------------------------------------------------------------------------------------+
  • Q: Why does my service report a Lock wait timeout exceeded error and show a transaction with a trx_mysql_thread_id of 0?

    A: When your application interacts with PolarDB for MySQL, a service interruption caused by a lock wait, together with a transaction whose trx_mysql_thread_id is 0 in the database, usually indicates that an incomplete XA (distributed) transaction is holding a lock. The following sections describe how to resolve this issue.

    Symptoms

    • The application or client receives an ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction error.

    • After you log on to the database and run the SELECT * FROM information_schema.innodb_trx; command, you find a transaction in the output where the trx_mysql_thread_id field has a value of 0. This transaction has been running for a long time and is blocking other transactions.


    Cause

    In the InnoDB storage engine, a trx_mysql_thread_id of 0 indicates an XA transaction. The issue usually occurs during the two-phase commit protocol of an XA transaction. After the transaction successfully executes XA PREPARE and enters the prepared state, the transaction remains in the prepared state if the external Transaction Manager fails to issue an XA COMMIT or XA ROLLBACK instruction. This failure can occur because of network issues, program exceptions, or other reasons. A transaction in this state continues to hold its acquired lock resources. This blocks other transactions that need the same resources and eventually leads to a lock wait timeout.

    Solution

    You must manually commit or roll back the XA transaction that is in the prepared state, as needed.

    1. Find the uncommitted XA transaction: Run the XA RECOVER; command to query for current uncommitted XA transactions. Record the values of the formatID, gtrid_length, bqual_length, and data fields for the target transaction. This information is crucial for the next step.

    2. Manually commit or roll back the XA transaction: After you find the XA transaction, you can choose to roll it back or commit it, as needed.

      1. Obtain the unique identifier (xid) of the XA transaction: The xid consists of three parts: gtrid, bqual, and formatID. You need to construct the xid based on the information that you queried in the previous step.

        • gtrid: A string with a length of gtrid_length, extracted from the beginning of the data field.

        • bqual: A string with a length of bqual_length, extracted from the end of the data field.

        • formatID: The value of the formatID field.

        Based on the example in the previous step, you can construct the three parts of the xid. You can use the substring function to split the data field.

        SELECT substring('192.168.1.2_app_name_test',1,11) AS gtrid, substring('192.168.1.2_app_name_test',-14) AS bqual;
        +-------------+----------------+
        | gtrid       | bqual          | 
        +-------------+----------------+ 
        | 192.168.1.2 | _app_name_test | 
        +-------------+----------------+
        • gtrid: '192.168.1.2'

        • bqual: '_app_name_test'

        • formatID: 10000

      2. Commit or roll back the XA transaction: Manually committing or rolling back an XA transaction can cause its final state to be inconsistent with the original intent of the transaction coordinator. This poses a risk of data inconsistency. Only execute the following commands after you fully understand the business context of the transaction and confirm that it is safe to do so.

        1. Commit: If you determine that the transaction must be committed, run the following command:

          XA COMMIT '192.168.1.2', '_app_name_test', 10000;
        2. Rollback: If you determine that the transaction must be rolled back, run the following command:

          XA ROLLBACK '192.168.1.2', '_app_name_test', 10000;
    3. After the command is successfully executed, the locks held by the uncommitted XA transaction are released, and the database service returns to normal.

    For more information about XA transaction syntax, see MySQL official documentation: XA Transaction Statements.
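    The gtrid/bqual split shown in the SQL substring example above can also be sketched in Python, which makes it easy to script against the XA RECOVER output:

```python
# Sketch: split the XA RECOVER `data` field into gtrid and bqual using the
# gtrid_length and bqual_length values, mirroring the SQL substring example.
def split_xid(data: str, gtrid_length: int, bqual_length: int):
    gtrid = data[:gtrid_length]
    bqual = data[gtrid_length:gtrid_length + bqual_length]
    return gtrid, bqual

# Values from the example above: gtrid_length=11, bqual_length=14.
gtrid, bqual = split_xid("192.168.1.2_app_name_test", 11, 14)
print(gtrid)  # 192.168.1.2
print(bqual)  # _app_name_test
```

    The resulting gtrid, bqual, and the recorded formatID together form the xid that you pass to XA COMMIT or XA ROLLBACK.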

Backup and recovery

  • Q: What backup method does PolarDB use?

    A: PolarDB uses snapshot backups. For more information, see Backup method 1: Automatic backup and Backup method 2: Manual backup.

  • Q: How fast is database recovery?

    A: The recovery speed from a backup set (snapshot) is approximately 40 minutes per TB. For a point-in-time recovery, the total time also includes the time required to apply redo logs, which takes an additional 20 to 70 seconds per GB.
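    A back-of-envelope estimate based on the figures above (about 40 minutes per TB for the snapshot restore, plus 20 to 70 seconds per GB of redo for point-in-time recovery). These are rough planning numbers, not guarantees:

```python
# Back-of-envelope recovery-time estimate using the figures above:
# ~40 minutes per TB to restore a snapshot, plus 20-70 seconds per GB of
# redo logs for point-in-time recovery. Rough planning numbers only.
def estimate_recovery_minutes(data_tb: float, redo_gb: float,
                              redo_sec_per_gb: float = 70.0) -> float:
    snapshot_min = data_tb * 40
    redo_min = redo_gb * redo_sec_per_gb / 60
    return snapshot_min + redo_min

# A 2 TB cluster with 30 GB of redo to apply, worst case:
print(estimate_recovery_minutes(2, 30), "minutes")  # 80 + 35 = 115.0 minutes
```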

Performance and capacity

  • Q: Why is the performance improvement of PolarDB for MySQL not significant compared to ApsaraDB RDS for MySQL?

    A: To ensure an accurate and fair performance comparison between PolarDB for MySQL and ApsaraDB RDS for MySQL, consider the following:

    • Use instances of PolarDB for MySQL and ApsaraDB RDS for MySQL that have the same specifications.

    • Use the same version of PolarDB for MySQL and ApsaraDB RDS for MySQL for the performance comparison.

      Different versions have different implementation mechanisms. For example, MySQL 8.0 is optimized for multi-core CPUs and has separate threads, such as Log_writer, log_flusher, log_checkpoint, and log_write_notifier. However, its performance is not as good as that of MySQL 5.6 or 5.7 on CPUs with fewer cores. Do not compare PolarDB for MySQL 5.6 with ApsaraDB RDS for MySQL 5.7 or 8.0 because the optimizer in MySQL 5.6 is older and less effective than the optimizers in newer versions.

    • Simulate an online pressure scenario or use sysbench for the performance comparison. This provides data that is more representative of an actual online scenario.

    • When you compare read performance, do not use a single SQL statement.

      Because PolarDB has a compute-storage decoupled architecture, the performance of a single statement is affected by network latency. This can result in lower read performance than RDS. The cache hit ratio of an online database is typically above 99%. Only the first read operation requires an I/O call, which can reduce read performance. Subsequent data is read from the buffer pool without requiring I/O calls, so the performance is the same.

    • When you compare write performance, also do not use a single SQL statement. Instead, simulate an online environment for a stress test.

      To compare the performance with RDS, use a PolarDB cluster that consists of a primary node and a read-only node, and an RDS instance that consists of a primary instance and a semi-synchronous read-only instance. This is because the PolarDB architecture uses the Quorum mechanism by default for write operations. This means that data is written to three replicas by default, and the write operation is considered successful if it succeeds on two or more of the three replicas. PolarDB provides data redundancy at the storage layer and ensures high reliability with three-replica strong synchronization. Therefore, it is more reasonable to compare it with an ApsaraDB RDS for MySQL instance that uses semi-synchronous replication instead of asynchronous replication.

    For more information about the performance comparison results between PolarDB for MySQL and ApsaraDB RDS for MySQL, see Performance comparison between PolarDB for MySQL and ApsaraDB RDS for MySQL.

  • Q: What is the maximum number of tables? At what number of tables might performance degrade?

    A: The maximum number of tables is limited by the number of files. For more information, see Limits.

  • Q: Can table partitioning improve the query performance of PolarDB?

    A: Yes. If a query's filter conditions allow partition pruning, so that only one or a few partitions need to be scanned, performance can be improved.

  • Q: Does PolarDB support creating 10,000 databases? What is the maximum number of databases?

    A: PolarDB supports creating 10,000 databases. The maximum number of databases is limited by the number of files. For more information, see Limits.

  • Q: Is the number of read-only nodes related to the maximum number of connections? Can I increase the maximum number of connections by adding read-only nodes?

    A: No, the number of read-only nodes is not related to the maximum number of connections. The maximum number of connections for a PolarDB cluster is determined by the node specifications. For more information, see Limits. If you need more connections, upgrade the specifications.

  • Q: How are IOPS limited and isolated? Will there be I/O contention among multiple PolarDB cluster nodes?

    A: The IOPS for each node in a PolarDB cluster are determined by its specifications. The IOPS of each node are isolated and do not affect each other.

  • Q: Will slow performance of a read-only node affect the primary node?

    A: A high load or an increased replication delay on a read-only node may slightly increase the memory consumption of the primary node.

  • Q: What is the performance impact of enabling binary logging?

    A: Enabling binary logging does not affect query (SELECT) performance. It affects only write and update (INSERT, UPDATE, and DELETE) performance. In a database with a balanced read/write load, enabling binary logging can decrease performance by up to 10%.

  • Q: What is the performance impact of enabling SQL Explorer (full SQL log auditing)?

    A: There is no performance impact.

  • Q: What high-speed network protocol does PolarDB use?

    A: PolarDB uses dual 25 Gbps RDMA technology between its database compute nodes and storage nodes, and between its storage data replicas. This provides strong I/O performance with low latency and high throughput.

  • Q: What is the bandwidth limit for an external connection to PolarDB?

    A: The bandwidth limit for an external connection to PolarDB is 10 Gbit/s.

Large table issues

  • Q: What are the advantages of storing large tables in PolarDB for MySQL compared to a traditional database with local disks?

    A: In PolarDB for MySQL, a single table is physically split and stored across multiple storage servers. Therefore, the I/O for a single table is distributed across multiple storage disks. The overall I/O read throughput (not I/O latency) is far superior to that of a centralized database with local disks.

  • Q: How can I optimize large tables?

    A: Use partitioned tables.

  • Q: In which scenarios is it appropriate to use partitioned tables?

    A: Partitioned tables are ideal for scenarios where you need to limit the amount of data that queries against a large table must scan. This pruning is transparent to the application and does not require code changes. Partitioned tables are also useful for regularly purging historical data, such as deleting the oldest monthly partition and creating a new one to maintain a rolling six-month window of data.
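    As a sketch, the rolling-window maintenance described above can be scripted. The table name (orders), partition naming scheme (p202401, ...), and retention window are hypothetical examples; the generated statements follow standard MySQL RANGE-partition DDL:

```python
from datetime import date

# Sketch: generate the DDL for a rolling six-month partition window.
# Table and partition names are hypothetical examples.
RETENTION_MONTHS = 6

def month_partition(d: date) -> str:
    """Partition name for a month, e.g. p202401 for January 2024."""
    return f"p{d.year}{d.month:02d}"

def add_months(d: date, n: int) -> date:
    """Return the first day of the month n months away from d."""
    total = d.year * 12 + (d.month - 1) + n
    return date(total // 12, total % 12 + 1, 1)

def rolling_window_ddl(today: date, table: str = "orders") -> list:
    """DDL to drop the expired month and pre-create next month."""
    expired = month_partition(add_months(today, -RETENTION_MONTHS))
    upcoming = add_months(today, 1)
    bound = add_months(today, 2)  # upper bound of the new partition
    return [
        f"ALTER TABLE {table} DROP PARTITION {expired};",
        f"ALTER TABLE {table} ADD PARTITION (PARTITION "
        f"{month_partition(upcoming)} VALUES LESS THAN "
        f"(TO_DAYS('{bound.isoformat()}')));",
    ]

for stmt in rolling_window_ddl(date(2024, 7, 1)):
    print(stmt)
```

    Running such a script on a monthly schedule keeps the window rolling without any application changes.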

  • Q: What is the recommended way to copy a very large table, such as from table A to table B, within the same PolarDB for MySQL database?

    A: You can copy the table with a single statement:

    CREATE TABLE B AS SELECT * FROM A;

    Note that CREATE TABLE ... AS SELECT copies the column definitions and data but not the secondary indexes. To preserve the full table structure, run CREATE TABLE B LIKE A; first and then INSERT INTO B SELECT * FROM A;.

Stability

  • Q: Can PHP short-lived connections under high concurrency be optimized?

    A: Yes. You can optimize them by enabling the session-level connection pool in the cluster endpoint. For more information, see Set a cluster endpoint.
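    The benefit of a connection pool for short-lived clients can be shown with a minimal sketch. The Connection class below is a stand-in for a real driver connection, and the pool size is an arbitrary example; the point is that backend connections are created once and reused, so each short request skips the TCP and authentication handshake:

```python
import queue

# Minimal sketch of why a connection pool helps short-lived clients.
# Connection is a stand-in for a real backend connection.
class Connection:
    created = 0  # counts how many real backend connections were opened

    def __init__(self):
        Connection.created += 1

class Pool:
    def __init__(self, size: int):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(Connection())

    def acquire(self) -> Connection:
        return self._idle.get()

    def release(self, conn: Connection) -> None:
        self._idle.put(conn)

pool = Pool(size=4)
# Simulate 100 short-lived requests: only 4 backend connections exist.
for _ in range(100):
    conn = pool.acquire()
    pool.release(conn)
print(Connection.created)  # 4
```

    The session-level pool in the cluster endpoint applies the same idea on the server side, so PHP-style short connections do not pay the full connection-setup cost on every request.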

  • Q: How can I prevent a few inefficient SQL statements from slowing down the entire database?

    A: If your PolarDB for MySQL cluster is version 5.6 or 8.0, you can use the Concurrency Control feature to throttle the specified statements.

  • Q: Does PolarDB support idle session timeout?

    A: Yes. You can customize the timeout period for idle sessions by modifying the wait_timeout parameter. For more information, see Set cluster and node parameters.

  • Q: How can I find slow SQL statements?

    A: You can find slow SQL statements in the following two ways:

    • Search for slow SQL statements directly in the console. For more information, see Slow SQL statements.

    • After connecting to the database cluster, run show processlist; to find SQL statements that are taking too long to execute. For information about how to connect to a database cluster, see Connect to a database cluster.

  • Q: How can I terminate a slow SQL statement?

    A: After you find a slow SQL statement, you can view its ID and then execute the KILL <Id> statement to terminate it.
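    The find-and-kill flow can be sketched offline. The sample rows below mimic the columns that SHOW PROCESSLIST returns (Id, Command, Time, Info), and the 30-second threshold is an arbitrary example:

```python
# Sketch: pick out long-running queries from a SHOW PROCESSLIST
# snapshot and build the KILL statements for them. The sample rows
# and the 30-second threshold are illustrative.
SLOW_SECONDS = 30

processlist = [  # (Id, Command, Time in seconds, Info)
    (101, "Query", 2,   "SELECT 1"),
    (102, "Query", 95,  "SELECT * FROM big_table ORDER BY c"),
    (103, "Sleep", 600, None),  # idle session, not a slow query
]

def kill_statements(rows, threshold=SLOW_SECONDS):
    """Build a KILL statement for each query running past the threshold."""
    return [
        f"KILL {pid};"
        for pid, command, elapsed, info in rows
        if command == "Query" and elapsed >= threshold
    ]

print(kill_statements(processlist))  # ['KILL 102;']
```

    Note that idle sessions show Command = Sleep and are handled by the wait_timeout parameter rather than by KILL.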

Data lifecycle

  • Q: How does PolarDB for MySQL archive hot and warm data to cold storage?

    A: PolarDB for MySQL can archive hot data from the InnoDB engine in PolarStore and warm data from the X-Engine. The data is archived to an OSS cold storage medium in CSV or ORC format based on a specified DDL policy. This process releases storage space in PolarStore and reduces the overall database storage cost. For more information, see Manually archive cold data.

  • Q: Does PolarDB for MySQL support automatic tiered storage for hot, warm, and cold data? How is this implemented?

    A: Yes. PolarDB for MySQL supports automatic tiered storage for hot, warm, and cold data. You can implement this by specifying a DLM policy. This policy automatically archives data from PolarStore to a low-cost OSS storage medium, which reduces your database storage costs. For more information, see Automatically archive cold data.