This topic provides answers to commonly asked questions about PolarDB for MySQL.

Basic information

  • Q: What is PolarDB?

    A: PolarDB is a cloud-based relational database service that has been deployed in data centers in more than 10 regions around the world. PolarDB provides out-of-the-box online database services and supports three database engines: one fully compatible with MySQL, one fully compatible with PostgreSQL, and one compatible with the Oracle syntax. Each PolarDB cluster supports a maximum storage space of 100 TB. You can purchase the PolarDB service based on your business requirements by using the pay-as-you-go billing method. The unit price of the PolarDB service is as low as USD 0.2 per hour. After you purchase the service, you can use all the features of the service. For more information, see Overview of Apsara PolarDB.

  • Q: What are the advantages of PolarDB over traditional databases?

    A: Compared with traditional databases, PolarDB can store hundreds of terabytes of data. PolarDB also provides a wide array of features, such as high availability, high reliability, fast elastic scaling, and lock-free backups. For more information, see Benefits.

  • Q: When was PolarDB released? When was PolarDB available for commercial use?

    A: PolarDB was released for public preview in September 2017 and became available for commercial use in March 2018.

  • Q: What are clusters and nodes in PolarDB?

    A: PolarDB uses a cluster-based architecture. Each cluster consists of one primary node (read/write node) and multiple read-only nodes. A single PolarDB cluster can be deployed across zones but not across regions. The PolarDB service is managed based on clusters, and you are billed for the service based on clusters. For more information, see Glossary.

  • Q: What are the development methods that are supported by PolarDB?

    A: PolarDB supports development in Java, Python, PHP, Go, C, C++, .NET, and Node.js. PolarDB for MySQL supports all the development methods that the native MySQL system supports. For more information, visit the MySQL official website.

  • Q: Which storage engines are supported by PolarDB for MySQL?

    A: PolarDB for MySQL supports only the InnoDB storage engine, and all tables in PolarDB for MySQL are stored in this engine. If you migrate tables that use other storage engines, such as MyISAM, MEMORY, or CSV, to PolarDB for MySQL, the other storage engines are automatically replaced with InnoDB. This ensures that the tables can be migrated to PolarDB for MySQL as expected.

  • Q: Can I synchronize data from a PolarDB for MySQL database instance to a user-created database instance? How do I implement a primary/secondary architecture?

    A: Yes, you can synchronize data from a PolarDB for MySQL database instance to a user-created database instance. To implement a primary/secondary architecture, enable the binary log feature and synchronize data from the PolarDB for MySQL database instance, which functions as the primary instance, to a user-created MySQL database instance, which functions as the secondary instance. To facilitate subsequent maintenance, we recommend that you use Data Transmission Service (DTS) to synchronize the data. For more information, see Synchronize data from an Apsara PolarDB for MySQL cluster to an ApsaraDB RDS for MySQL instance.

Compatibility

  • Q: Is PolarDB for MySQL compatible with MySQL Community Edition?

    A: Yes, PolarDB for MySQL is fully compatible with MySQL Community Edition.

  • Q: Which transaction isolation levels are supported by PolarDB for MySQL?

    A: PolarDB for MySQL supports three isolation levels: READ_UNCOMMITTED, READ_COMMITTED, and REPEATABLE_READ. The default isolation level is READ_COMMITTED. PolarDB for MySQL does not support the SERIALIZABLE isolation level.
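The supported levels above can be sketched as a small client-side guard. This is a hypothetical helper, not part of any PolarDB SDK; it only builds the standard MySQL `SET SESSION` statement and rejects the unsupported SERIALIZABLE level before the statement reaches the server.

```python
# Levels supported by PolarDB for MySQL, per the answer above.
SUPPORTED_LEVELS = {"READ UNCOMMITTED", "READ COMMITTED", "REPEATABLE READ"}

def isolation_statement(level: str) -> str:
    """Build a SET SESSION statement, raising for unsupported levels."""
    normalized = level.upper().replace("_", " ")
    if normalized not in SUPPORTED_LEVELS:
        raise ValueError(f"PolarDB for MySQL does not support {normalized}")
    return f"SET SESSION TRANSACTION ISOLATION LEVEL {normalized}"

print(isolation_statement("READ_COMMITTED"))
# -> SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
```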

  • Q: Are the query results of the SHOW PROCESSLIST statement in PolarDB for MySQL the same as those in MySQL Community Edition?

    A: If you use a primary endpoint to execute the SHOW PROCESSLIST statement, the query results are the same. If you use a cluster endpoint to execute the SHOW PROCESSLIST statement, the query results are different between PolarDB for MySQL and MySQL Community Edition. In the query results of the statement in PolarDB for MySQL, you can find multiple records that have the same thread ID. Each of these records corresponds to a node that is included in the PolarDB for MySQL cluster.

  • Q: Is the lock mechanism of PolarDB for MySQL different from that of MySQL Community Edition?

    A: Yes, the lock mechanism is different between PolarDB for MySQL and MySQL Community Edition. The two systems differ in data storage: in PolarDB for MySQL, the primary node and the read-only nodes share the stored data. If a read-only node accessed a table while the primary node performed a data definition language (DDL) operation on it, the read-only node might retrieve intermediate data generated by the operation, which would result in exceptions. To prevent this, PolarDB for MySQL uses redo log files to synchronize the exclusive metadata locks (MDLs) that are involved in DDL operations to the read-only nodes. The read-only nodes hold the locks until the DDL operations are complete. This prevents other user threads on the read-only nodes from accessing the tables while the DDL operations are in progress.

  • Q: Is the binary log format of PolarDB for MySQL the same as the native binary log format of MySQL?

    A: Yes, the binary log format of PolarDB for MySQL is the same as the native binary log format of MySQL.

  • Q: Does PolarDB support the Performance Schema feature and the sys schema feature?

    A: Yes, PolarDB supports the Performance Schema feature and the sys schema feature.

  • Q: Are the table statistics in PolarDB for MySQL consistent with those in MySQL Community Edition?

    A: Yes, the table statistics for the primary node in PolarDB for MySQL are consistent with those in MySQL Community Edition. Each update of table statistics in the primary node of a PolarDB for MySQL cluster is synchronized to the read-only nodes of the cluster. This ensures that the SQL execution plans are consistent between the primary node and the read-only nodes. You can also execute the ANALYZE TABLE statement on the read-only nodes to obtain the latest table statistics from disks.

  • Q: Does PolarDB for MySQL support extended architecture (XA) transactions? Does PolarDB for MySQL support XA transactions in the same way as the native MySQL system?

    A: Yes, PolarDB for MySQL supports XA transactions in the same way as the native MySQL system.

  • Q: Does PolarDB support full-text indexing?
    A: Yes, PolarDB supports full-text indexing.
    Note When you query data based on full-text indexes on read-only nodes, index caches are used. As a result, you may not retrieve the latest data based on the indexes. We recommend that you use the primary endpoint to read and write data based on full-text indexes. This ensures that you can retrieve the latest data.
  • Q: Does PolarDB support Percona Toolkit?

    A: Yes, PolarDB supports Percona Toolkit. However, we recommend that you use the online DDL tool.

  • Q: Does PolarDB support the gh-ost tool?

    A: Yes, PolarDB supports the gh-ost tool. However, we recommend that you use the online DDL tool.

Billing

  • Q: What are the billing items for a PolarDB cluster?

    A: The billing items for a PolarDB cluster include the storage space, primary node, read-only nodes, data backup feature, and SQL Explorer feature. The data backup feature is available for free, and you are billed only for the storage space that is occupied by the data backup files. A certain amount of storage space is provided for free to store the data backup files. Note that the SQL Explorer feature is optional. For more information, see Specifications and pricing.

  • Q: Which files are stored in the storage space that incurs fees?

    A: The storage space that incurs fees stores database table files, index files, undo log files, redo log files, binary log files, slow query log files, and a few system files. For more information, see Specifications and pricing.

  • Q: How do I use storage packages for PolarDB?

    A: You can purchase storage packages to deduct the storage fees of your PolarDB clusters. The PolarDB clusters can use the subscription or the pay-as-you-go billing method. For example, if you have three PolarDB clusters and each cluster has a storage capacity of 40 GB, the total storage capacity is 120 GB. You can purchase a 100 GB storage package for the three clusters. You are billed for the excess 20 GB storage space based on the pay-as-you-go billing method. For more information, see Use storage packages.
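The deduction example above can be worked through in a few lines. This is a simplified model for illustration only: it assumes the package capacity offsets total usage across clusters and that any excess is billed pay-as-you-go, as described above.

```python
# Worked example of the storage-package deduction described above.
def billable_excess_gb(cluster_usage_gb, package_gb):
    """Return the storage (GB) billed pay-as-you-go after package deduction."""
    total = sum(cluster_usage_gb)
    return max(total - package_gb, 0)

# Three clusters of 40 GB each against a 100 GB package -> 20 GB billed.
print(billable_excess_gb([40, 40, 40], 100))  # -> 20
```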

Cluster access policies based on read/write splitting

  • Q: How do I implement read/write splitting in PolarDB?

    A: To implement read/write splitting, use a custom cluster endpoint in your applications to connect to database services. You can create a custom cluster endpoint in the PolarDB console. When you create the custom cluster endpoint, set the Read/write Mode parameter to Read and Write (Automatic Read-write Splitting), and specify the reader nodes for processing read requests. For more information, see Create a custom cluster endpoint.

  • Q: How many read-only nodes are supported in a PolarDB cluster?

    A: A PolarDB cluster supports a maximum of 15 read-only nodes. PolarDB uses a distributed cluster-based architecture. Each PolarDB cluster consists of a primary node and a maximum of 15 read-only nodes. At least one read-only node must be used to implement failovers to ensure high availability.

  • Q: Why are loads unbalanced among read-only nodes in a PolarDB cluster?

    A: One possible reason is that connections are unevenly distributed: only a small number of connections are established to some read-only nodes, while a large number of connections are established to the other read-only nodes. Another possible reason is that a read-only node was not specified as a reader node when the custom cluster endpoint was created.

  • Q: What are the causes of heavy or light loads on the primary node of my PolarDB cluster?

    A: Heavy loads on the primary node may occur due to the following causes:
    • The primary endpoint is used to connect your applications to the PolarDB cluster.
    • The offload reads from the primary node feature is disabled.
    • The primary node receives a large number of transaction requests.
    • Requests are routed to the primary node because of a high replication delay when data is replicated from the primary node to the read-only nodes.
    • Read requests are routed to the primary node due to read-only node failures.

    The possible cause of light loads on the primary node is that the offload reads from the primary node feature is enabled.

  • Q: How do I reduce the loads on the primary node of my PolarDB cluster?
    A: You can reduce the loads on the primary node by using the following methods:
    • You can use a cluster endpoint to connect your application to a PolarDB cluster. For more information, see Modify and delete a cluster endpoint.
    • If a large number of transactions cause heavy loads on the primary node, you can enable the transaction splitting feature in the PolarDB console. After you enable this feature, PolarDB distributes the read requests in the transactions to the read-only nodes for load balancing. For more information, see Configure transaction splitting.
    • If requests are routed to the primary node because of a high replication delay, you can decrease the consistency level to reduce the loads on the primary node. For example, you can use the eventual consistency level. For more information, see Data consistency levels.
    • If you disable the offload reads from the primary node feature, the loads on the primary node may become heavy. In this scenario, you can enable the offload reads from the primary node feature in the PolarDB console. This helps you reduce the number of read requests that are routed to the primary node of your PolarDB cluster. For more information, see Offload reads from the primary node.
  • Q: Why am I unable to immediately retrieve the newly inserted data?
    A: The possible cause is that the specified consistency level does not allow you to immediately retrieve the newly inserted data. The cluster endpoints of PolarDB support the following consistency levels:
    • Eventual consistency: This consistency level does not ensure that you can immediately retrieve the newly inserted data, whether in the same session or connection or in different sessions.
    • Session consistency: This consistency level ensures that you can immediately retrieve the newly inserted data based on the same session.
    • Global consistency: This consistency level ensures that you can immediately retrieve the latest data based on the same session or different sessions.
    Note A high consistency level results in heavy loads on the primary node of your PolarDB cluster. This compromises the performance of the primary node. Use caution when you specify the consistency level. In most scenarios, the session consistency level can ensure service availability. For a few SQL statements that require strong consistency, you can add the /* FORCE_MASTER */ hint syntax to the SQL statements to meet the consistency requirements. For more information, see Data consistency levels.
  • Q: How do I force an SQL statement to be executed on the primary node of my PolarDB cluster?
    A: If you use a cluster endpoint, add /* FORCE_MASTER */ before an SQL statement to execute the SQL statement on the primary node. You can add /* FORCE_SLAVE */ before an SQL statement to execute the SQL statement on a read-only node. For more information, see Hint syntax.
    • Add /* FORCE_MASTER */ to an SQL statement to forcibly route the statement to the primary node. This method applies to the few scenarios in which read requests require strong consistency.
    • Add /* FORCE_SLAVE */ to an SQL statement to forcibly route the statement to a read-only node. In a few scenarios, such as calling stored procedures and using multi-statements, SQL statements that use specific syntax patterns are routed to the primary node to ensure that they are executed as expected. In these scenarios, if the SQL statements do not change the environment variables of sessions, you can use this method to forcibly route the SQL statements to read-only nodes.
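The hint placement above can be sketched as a small helper. This is an illustrative function, not part of any PolarDB client library; the only requirement it encodes is that the hint comment precedes the statement text.

```python
# Sketch: prepend routing hints to SQL statements sent through a cluster
# endpoint. The hint must appear before the statement itself.
def with_hint(sql: str, to_primary: bool) -> str:
    hint = "/* FORCE_MASTER */" if to_primary else "/* FORCE_SLAVE */"
    return f"{hint} {sql}"

print(with_hint("SELECT balance FROM accounts WHERE id = 1", to_primary=True))
# -> /* FORCE_MASTER */ SELECT balance FROM accounts WHERE id = 1
```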
  • Q: Can I assign different cluster endpoints to different services? Can I use cluster endpoints to isolate my services?

    A: Yes, you can create custom cluster endpoints and assign them to different services. If the custom endpoints are associated with different nodes, the custom cluster endpoints can be used to isolate the services. For more information about how to create a custom cluster endpoint, see Create a custom cluster endpoint.

  • Q: How do I create an endpoint for a single read-only node if I have multiple read-only nodes?
    A: You can create a single-node endpoint only if the Read/write Mode parameter for the cluster endpoint is set to Read Only and the cluster has at least three nodes. For more information, see Create a custom cluster endpoint.
    Warning If you create a single-node endpoint for a read-only node and the read-only node becomes faulty, the endpoint may be unavailable for up to 1 hour. We recommend that you do not create or use single-node endpoints in production environments.
  • Q: What is the maximum number of single-node endpoints that I can create in a cluster?

    A: The maximum number of single-node endpoints that are allowed in a cluster varies based on the number of nodes in the cluster. For example, if your cluster has three nodes, you can create a single-node endpoint for only one of the read-only nodes. If your cluster has four nodes, you can create a single-node endpoint for each of the two read-only nodes. Similar rules apply if your cluster has five or more nodes.
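The pattern in the examples above can be sketched as follows. This assumes the rule extends linearly beyond the cases stated (three nodes allow one single-node endpoint, four allow two, and so on), with a cluster of n nodes consisting of one primary node and n - 1 read-only nodes.

```python
# Sketch of the rule illustrated above: with n nodes, single-node endpoints
# can be created for n - 2 of the read-only nodes; a cluster needs at least
# three nodes before any single-node endpoint is allowed.
def max_single_node_endpoints(total_nodes: int) -> int:
    if total_nodes < 3:
        return 0
    return total_nodes - 2

print([max_single_node_endpoints(n) for n in (2, 3, 4, 5)])  # -> [0, 1, 2, 3]
```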

  • Q: Read-only nodes have loads when I use only the primary endpoint. Does the primary endpoint support read/write splitting?

    A: No, the primary endpoint does not support read/write splitting. The primary endpoint is always connected to the primary node. In certain scenarios, a small number of queries per second (QPS) run on read-only nodes even if you use only the primary endpoint. This does not indicate that service errors occur. The QPS on read-only nodes is not affected by whether you use the primary endpoint.

Management and maintenance

  • Q: How do I add fields online?

    A: You can use tools such as the native online DDL tool of MySQL, pt-online-schema-change (pt-osc), and gh-ost to add fields online. We recommend that you use the native online DDL tool of MySQL.

  • Q: How do I add indexes online?

    A: You can use tools such as the native online DDL tool of MySQL, pt-osc, and gh-ost to add indexes online. We recommend that you use the native online DDL tool of MySQL.

  • Q: Does a replication delay occur when I replicate data from the primary node to the read-only nodes of my PolarDB cluster?

    A: Yes, a replication delay of a few milliseconds exists.

  • Q: When does a replication delay increase?
    A: A replication delay increases in the following scenarios:
    • The primary node of your PolarDB cluster processes a large number of write requests and generates large amounts of redo log files. As a result, these redo log files cannot be replayed on the read-only nodes in time.
    • To process heavy loads, the read-only nodes occupy a large number of resources that are used to replay redo log files.
    • The system reads and writes redo log files at a low rate due to I/O bottlenecks.
  • Q: How do I ensure the consistency of query results if a replication delay occurs?

    A: You can create a cluster endpoint and specify an appropriate consistency level for the cluster endpoint. The following consistency levels are listed in descending order: global consistency, session consistency, and eventual consistency. For more information, see Create a custom cluster endpoint.

  • Q: Can PolarDB ensure that the recovery point objective (RPO) is zero if a single node fails?

    A: Yes, PolarDB can ensure that the RPO is zero if a single node fails.

  • Q: How are node specifications upgraded in the backend, for example, upgrading node specifications from 2 cores and 8 GB memory to 4 cores and 16 GB memory? What are the impacts of the upgrade on my services?

    A: Both PolarProxy and the database nodes are upgraded to the new specifications. PolarDB uses a rolling upgrade method to minimize the impacts on your services. Each upgrade takes about 10 to 15 minutes. The impacts on your services last for no more than 30 seconds. During this period, 1 to 3 intermittent disconnections may occur. For more information, see Change specifications.

  • Q: How much time does it take to add a node? Are my services affected when the node is added?
    A: It takes about 5 minutes to add a node. Your services are not affected when the node is added. For more information about how to add a node, see Add or remove a node.
    Note After you add a read-only node, a read/write splitter forwards the subsequent read requests to the read-only node. The read requests that are sent before you add the read-only node cannot be forwarded to the read-only node. To enable the read-only node to process these read requests, you must close the current connection and establish the connection again. For example, you can restart the application to establish the connection.
  • Q: How much time does it take to upgrade a minor kernel version for a PolarDB cluster to the latest version? Does the version upgrade affect my services?

    A: PolarDB uses a rolling upgrade method to minimize the impacts on your services. It takes about 10 to 15 minutes for each upgrade. The impacts on your services last for no more than 30 seconds. During this period, 1 to 3 intermittent disconnections may occur. For more information, see Upgrade the minor version.

  • Q: How is an automated failover implemented?

    A: A PolarDB cluster uses an active-active architecture to ensure high availability. This architecture allows for automated failovers between the primary node and the read-only nodes. Each node in a PolarDB cluster has a failover priority, which determines the probability that the node becomes the new primary node during a failover. If multiple nodes have the same failover priority, they have the same probability of being selected as the primary node. For more information, see Switch over services between primary and read-only nodes.

Backup and restoration

  • Q: How does PolarDB back up data?

    A: PolarDB uses snapshots to back up data. For more information, see Back up data.

  • Q: How fast can a database be restored?

    A: It takes about 40 minutes to restore or clone 1 TB of data based on backup sets or snapshots. If you want to restore data to a specific point in time, you must also include the time required to replay the redo log files. It takes about 20 to 70 seconds to replay 1 GB of redo log data. The total restoration time is the sum of the time required to restore data from the backup sets and the time required to replay the redo log files.
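The estimate above can be put into a quick calculation. The figures (about 40 minutes per TB restored, 20 to 70 seconds per GB of redo log replayed) come from the answer above; the midpoint replay rate of 45 seconds per GB is an assumption for illustration.

```python
# Rough estimate of total point-in-time restore duration, in minutes.
def restore_minutes(data_tb, redo_gb, replay_sec_per_gb=45):
    restore = data_tb * 40                     # backup-set restore
    replay = redo_gb * replay_sec_per_gb / 60  # redo log replay, in minutes
    return restore + replay

# 1 TB of data plus 20 GB of redo logs -> 40 + 15 = 55.0 minutes.
print(restore_minutes(1, 20))  # -> 55.0
```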

Performance and capacity

  • Q: Why does PolarDB for MySQL fail to show significant performance improvements when I compare PolarDB for MySQL with ApsaraDB RDS for MySQL?
    A: Before you compare the performance of PolarDB for MySQL with that of ApsaraDB RDS for MySQL, pay attention to the following considerations to obtain accurate comparison results:
    • Make sure that PolarDB for MySQL and ApsaraDB RDS for MySQL have the same specifications.
    • Make sure that PolarDB for MySQL and ApsaraDB RDS for MySQL use the same engine version.

      Implementation mechanisms vary based on versions. For example, MySQL 8.0 incorporates optimizations for multi-core environments based on threads such as log_writer, log_flusher, log_checkpointer, and log_write_notifier. However, if only a few CPU cores are used, the performance of MySQL 8.0 is lower than that of MySQL 5.6 or MySQL 5.7. We recommend that you do not compare PolarDB for MySQL 5.6 with ApsaraDB RDS for MySQL 5.7 or 8.0, because the optimizer of PolarDB for MySQL 5.6 is not as advanced as that of the later versions of PolarDB for MySQL.

    • We recommend that you simulate the loads in actual online environments or use the sysbench benchmark suite to compare the performance. This improves the accuracy of the obtained performance data.
    • We recommend that you do not use a single SQL statement to compare the read performance between PolarDB for MySQL and ApsaraDB RDS for MySQL.

      PolarDB for MySQL decouples computing from storage, so network latency affects the response time of a single SQL statement. As a result, a single uncached read in PolarDB for MySQL can be slower than in ApsaraDB RDS for MySQL. However, the cache hit ratio for an online database is greater than 99% in most cases. Only the first read request consumes I/O resources and is therefore slower. Subsequent read requests do not consume I/O resources because the data is stored in a buffer pool. For these requests, PolarDB for MySQL and ApsaraDB RDS for MySQL offer the same read performance.

    • We recommend that you do not use a single SQL statement to compare the write performance. Instead, we recommend that you simulate a production environment and perform stress testing.

      We recommend that you compare the primary nodes and read-only nodes in PolarDB for MySQL with the primary instances and read-only instances in ApsaraDB RDS for MySQL for performance comparison. Note that semi-synchronous replication is implemented for the read-only instances in ApsaraDB RDS for MySQL. By default, PolarDB for MySQL uses the quorum mechanism for data writes. If the data is written to two of the three replicas or all of the three replicas, the system determines that the write operation is successful. PolarDB for MySQL implements data redundancy at the storage layer, and ensures strong consistency and high reliability for the three replicas. Therefore, an appropriate comparison method is to compare the PolarDB for MySQL service with the ApsaraDB RDS for MySQL service where semi-synchronous replication instead of asynchronous replication is implemented for the read-only instances.

    For more information about the performance comparison results, see Comparison with ApsaraDB RDS for MySQL.

  • Q: Why does a deleted database occupy a large amount of storage space?

    A: This is because the redo log files of the deleted database occupy storage space. In most cases, the redo log files occupy 2 GB to 11 GB of storage space. For example, if a total of 11 GB is occupied, 8 GB is occupied by the eight redo log files in the buffer pool. The remaining 3 GB is evenly occupied by the redo log file that is being written, the default redo log file, and the latest redo log file.

    The loose_innodb_polar_log_file_max_reuse parameter specifies the number of redo log files in the buffer pool. The default value of this parameter is 8. You can change the value of this parameter to reduce the storage space that is occupied by log files. In this case, periodic performance fluctuations may occur if heavy loads need to be processed.

    Note After you change the value of the loose_innodb_polar_log_file_max_reuse parameter, the system does not immediately clear the data in the buffer pool. The amount of the data in the buffer pool decreases only if data manipulation language (DML) operations are performed. If you need to immediately clear the data in the buffer pool, submit a ticket to contact Customer Services.
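The space breakdown above can be expressed as a quick estimate. This assumes each reserved redo log file is about 1 GB and that roughly 3 GB is used by the file being written, the default file, and the latest file, as described above; actual sizes may vary.

```python
# Estimated redo log storage: one ~1 GB file per slot reserved in the buffer
# pool (loose_innodb_polar_log_file_max_reuse, default 8) plus ~3 GB for the
# in-progress, default, and latest redo log files.
def redo_log_space_gb(max_reuse: int = 8) -> int:
    return max_reuse * 1 + 3

print(redo_log_space_gb())   # default reuse of 8 -> 11 GB
print(redo_log_space_gb(2))  # lowering the parameter -> 5 GB
```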
  • Q: What is the maximum number of tables for a single PolarDB cluster? What is the upper limit for the number of tables if I need to ensure that the performance is not compromised?

    A: The maximum number of tables depends on the number of files. For more information, see Limits.

  • Q: How are the input/output operations per second (IOPS) limited and isolated? Do the nodes of PolarDB clusters compete for I/O resources?

    A: The IOPS is specified for each node of a PolarDB cluster based on the node specifications. The IOPS of each node is isolated from that of the other nodes. Therefore, the nodes of PolarDB clusters do not compete for I/O resources.

  • Q: Is the performance of the primary node of my PolarDB cluster affected if the performance of the read-only nodes is compromised?

    A: If the loads on the read-only nodes are excessively heavy and the replication delay increases, you may find a slight increase in the memory consumed by the primary node.

  • Q: How is the database performance affected if I enable the binary log feature?

    A: The performance of SELECT queries is not affected. The performance of write operations such as INSERT, UPDATE, and DELETE decreases by 10% to 30%. In most cases, SELECT queries account for more than 90% of queries in a database. Therefore, the overall performance decreases by less than 5%.
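The overall estimate above follows from simple blending. The sketch below assumes reads are unaffected and applies the write penalty only to the write fraction of the workload, matching the reasoning in the answer.

```python
# Blended slowdown for a mixed workload: reads unaffected, writes slowed by
# write_penalty (e.g. 0.10 to 0.30 with the binary log feature enabled).
def overall_slowdown(read_ratio, write_penalty):
    return (1 - read_ratio) * write_penalty

# 90% reads with the worst-case 30% write penalty -> 3% overall.
print(f"{overall_slowdown(0.9, 0.30):.0%}")  # -> 3%
```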

  • Q: How is the database performance affected if I enable the SQL Explorer feature that allows you to analyze the performance based on SQL audit logs?

    A: The database performance is not affected.

  • Q: Which high-speed network protocol does PolarDB use?

    A: PolarDB uses the dual-port Remote Direct Memory Access (RDMA) technology to ensure high I/O throughputs between compute nodes and storage nodes, and between data replicas. Each port provides a data rate of up to 25 Gbit/s at a low latency.

  • Q: What is the maximum bandwidth that I can use if I access PolarDB from the Internet?

    A: If you access PolarDB from the Internet, the maximum bandwidth is 10 Gbit/s.

Large tables

  • Q: What are the advantages of the large tables in PolarDB for MySQL over the local disks of traditional databases?

    A: A large table in a PolarDB for MySQL database is split and stored across physical storage servers. The I/O operations for the large table are allocated to multiple disks. Therefore, the overall throughput of I/O read operations in the PolarDB for MySQL database is higher than that of the database where all I/O operations are scheduled to the local disk. However, the response time for the I/O operations in the PolarDB for MySQL database is not shorter than that of the database where all I/O operations are scheduled to the local disk.

  • Q: How do I optimize large tables?

    A: We recommend that you use partitioned tables.

  • Q: What are the application scenarios of partitioned tables?

    A: You can use partitioned tables when you need to limit the amount of data scanned by queries on large tables without modifying the service code. For example, you can use partitioned tables to clear the data records of your services at regular intervals: delete the partition for the earliest month, create a partition for the next month, and retain only the data of the latest six months.
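The monthly rotation pattern above can be sketched by generating the two maintenance statements. The partition naming convention (p202401 and so on) and the helper itself are assumptions for illustration; the partition boundary value is omitted because it depends on the table's partitioning expression.

```python
from datetime import date

# Sketch: drop the oldest monthly partition and add one for the next month,
# keeping keep_months months of data.
def rotation_statements(table: str, today: date, keep_months: int = 6):
    def month_name(offset):
        m = today.month - 1 + offset  # months since year 0 of `today`'s year
        return f"p{today.year + m // 12}{m % 12 + 1:02d}"
    drop = f"ALTER TABLE {table} DROP PARTITION {month_name(-keep_months)}"
    # Boundary value omitted in this sketch.
    add = (f"ALTER TABLE {table} ADD PARTITION "
           f"(PARTITION {month_name(1)} VALUES LESS THAN (...))")
    return drop, add

drop_sql, add_sql = rotation_statements("orders", date(2024, 7, 15))
print(drop_sql)  # -> ALTER TABLE orders DROP PARTITION p202401
print(add_sql)
```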

Stability

  • Q: Can I optimize PHP short-lived connections in high concurrency scenarios?

    A: Yes, you can optimize PHP short-lived connections in high concurrency scenarios. To optimize PHP short-lived connections, enable the session-level connection pool in the settings of cluster endpoints. For more information, see Modify and delete a cluster endpoint.

  • Q: How do I prevent slow SQL queries from decreasing the performance of the entire database?

    A: If you use PolarDB for MySQL 5.6 or 8.0, you can use the Statement Concurrency Control feature to implement rate limiting and throttling on the specified SQL statements.

  • Q: Does PolarDB support the idle session time-out feature?

    A: Yes, PolarDB supports the idle session time-out feature. You can change the value of the wait_timeout parameter to specify a time-out period for idle sessions. For more information, see Set cluster parameters.

  • Q: How do I identify slow SQL queries?
    A: You can identify slow SQL queries by using the following methods:
    • View the slow query logs of your PolarDB cluster in the console.
    • Execute the SHOW PROCESSLIST statement to view the queries that are being executed and how long they have been running.
  • Q: How do I terminate slow SQL queries?
    A: After you identify a slow SQL query, find the ID of the slow SQL query and execute the KILL <Id> statement to terminate the query. For more information, see Terminate a slow SQL query.
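The identify-then-kill flow above can be sketched as follows. The row layout here (id, command, time in seconds, info) is a simplified stand-in for the real SHOW PROCESSLIST result set, and the 60-second threshold is an arbitrary example.

```python
# Sketch: pick long-running queries out of SHOW PROCESSLIST-style rows and
# build the KILL statements that would terminate them.
def kill_statements(processlist, threshold_sec=60):
    return [
        f"KILL {row['id']}"
        for row in processlist
        if row["command"] == "Query" and row["time"] >= threshold_sec
    ]

rows = [
    {"id": 11, "command": "Sleep", "time": 300, "info": None},
    {"id": 12, "command": "Query", "time": 95, "info": "SELECT ..."},
]
print(kill_statements(rows))  # -> ['KILL 12']
```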