Database Autonomy Service: SQL Explorer

Last Updated: Jun 03, 2024

Database Autonomy Service (DAS) provides the SQL Explorer feature. You can use SQL Explorer to check the health status of SQL statements and troubleshoot performance issues. This topic describes how to use the SQL Explorer feature in the SQL Explorer and Audit module.

Prerequisites

  • The database instance that you want to manage is connected to DAS and is in the Normal Access state.

  • The SQL Explorer and Audit module is enabled for the database instance. For more information, see the Enable SQL Explorer and Audit section of the "Overview" topic.

Feature description

The SQL Explorer feature records information about all executed data query language (DQL), data manipulation language (DML), and data definition language (DDL) statements. DAS obtains this information from database kernels, which consumes only a small amount of CPU resources.

Supported databases and regions

You can use the SQL Explorer and Audit module only after DAS Enterprise Edition is enabled. The supported databases and regions vary based on the version of DAS Enterprise Edition. For more information, see the Supported database types and regions section of the "Editions" topic.

Usage notes

  • After SQL Explorer is enabled for a database instance, the analytical and statistical data (excluding SQL details) generated by SQL Explorer can be stored for 30 days.

  • The storage duration of SQL details generated by SQL Explorer is the same as that specified when you enable DAS Enterprise Edition for the database instance.

  • Disabling the SQL Explorer and Audit module does not affect your business. However, all data generated by the module is cleared. We recommend that you export and save the data to your computer before you disable this feature. For more information, see the Disable SQL Explorer and Audit section of the "Overview" topic.

  • When an SQL statement is executed on an ApsaraDB RDS for MySQL instance that is attached to PolarDB-X 1.0, multiple SQL logs are generated on the ApsaraDB RDS for MySQL instance due to sharding.

  • Transient connections may occur during the data migration of a database instance. In this case, SQL Explorer data may be lost. This is expected behavior.

  • If the load on a database instance is high, data loss may occur. As a result, the statistics that SQL Explorer collects on incremental data may be inaccurate.

  • An SQL statement that is recorded in the SQL logs can be up to 8,192 bytes in length. For ApsaraDB RDS for MySQL instances and PolarDB for MySQL clusters, you can configure parameters to specify the maximum length of a recorded SQL statement, as shown in the query sketch after this list.

    • If you set the maximum length to a value that is less than or equal to 8,192 bytes, that value is used as the upper limit, and the part of an SQL statement that exceeds the limit is not recorded. A prefix is added to the SQL statement during data collection and processing. As a result, the maximum length of an SQL statement in an SQL log is slightly less than the specified value.

    • If you set the maximum length to a value that is greater than 8,192 bytes, the upper limit remains 8,192 bytes. If the actual length of an SQL statement exceeds the upper limit, the excess part is not recorded. A prefix is added to the SQL statement during data collection and processing. As a result, the maximum length of an SQL statement in an SQL log is slightly less than 8,192 bytes.

    Note
    • For ApsaraDB RDS for MySQL instances and PolarDB for MySQL clusters that run MySQL 5.6 or 5.7, the maximum length of an SQL statement is specified by the loose_rds_audit_max_sql_size parameter.

    • For ApsaraDB RDS for MySQL instances and PolarDB for MySQL clusters that run MySQL 8.0, the maximum length of an SQL statement is specified by the loose_rds_audit_log_event_buffer_size parameter.

  • If PgBouncer is enabled for an ApsaraDB RDS for PostgreSQL instance, SQL Explorer does not record the SQL statements that are executed by using PgBouncer.
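
The following query sketch relates to the maximum SQL statement length described in the preceding note. Whether the rds_audit parameters are exposed as system variables on your instance is an assumption in this sketch and varies by engine version; as a rough check, you can list audit-related variables:

    -- List audit-related system variables. Whether the rds_audit parameters
    -- appear in the result depends on the engine version. If they are not
    -- listed, modify them in the console parameter settings instead.
    SHOW VARIABLES LIKE '%audit%';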

Procedure

  1. Log on to the DAS console.

  2. In the left-side navigation pane, click Instance Monitoring.

  3. On the page that appears, find the database instance that you want to manage and click the instance ID. The instance details page appears.

  4. In the left-side navigation pane, choose Request Analysis > SQL Explorer and Audit. On the page that appears, click the SQL Explorer tab.

  5. On the SQL Explorer tab, use the following features based on your business requirements.

    Note

    When you select a time range, make sure that the end time is later than the start time and that the interval between the start time and the end time does not exceed 24 hours. The time range must start after the time when DAS Enterprise Edition was enabled and must fall within the data storage duration of SQL Explorer.

    • Display by Time Range: Select the time range of the executed SQL statements whose SQL Explorer results you want to query. You can view the Execution Duration Distribution, Execution Duration, and Executions information about all SQL statements over the time range. You can view the details of all SQL statements over the time range and export the details in the Full Request Statistics section.

      Note

      You can export up to 1,000 SQL logs. If you want to obtain a larger number of SQL logs within a larger time range, you can use the search (audit) feature.

    • Display by Comparison: Select the date and time range of the executed SQL statements whose SQL Explorer results you want to compare. You can view the Execution Duration Distribution, Execution Duration, and Executions comparison results of all SQL statements over the time range. You can view the details of the comparison results in the Requests by Comparison section.

    • Source Statistics: Select the time range of the executed SQL statements whose access sources you want to collect. Then, you can view all request sources over the time range.

    • SQL Review: The SQL review feature performs workload analysis on database instances within the specified time range and the baseline time range, and performs in-depth analysis on running SQL queries in database instances. This feature displays index optimization suggestions, SQL rewrite suggestions, top resource-consuming SQL statements, new SQL statements, failed SQL statements, SQL feature analysis, SQL statements with high execution variation, SQL statements with deteriorated performance, and top tables that generate the most traffic for database instances. For more information, see SQL Review.

    • Related SQL Identification: Select the metrics that you want to view and click Analysis. It takes 1 to 5 minutes to identify the SQL statements that are most relevant to the performance trends of the specified metrics.

    Important
    • If the SQL Explorer and Audit module uses a combination of hot storage and cold storage, data that was generated more than seven days ago is stored in cold storage. When you analyze SQL details that were generated more than seven days ago, DAS creates a task for the calculation and analysis. You can click Task list to view the task progress and historical tasks.

    • If you query data generated by the SQL Explorer and Audit module more than seven days ago, you are charged for the query on a pay-as-you-go basis. For more information, see Billing.

Description

  • Execution Duration Distribution: On the Execution Duration Distribution tab, you can view the distribution of execution durations of SQL queries based on the time range that you specify. The statistical data is collected every minute. The execution durations are divided into seven ranges:

    • [0,1] ms: indicates that the execution duration ranges from 0 ms to 1 ms. The chart shows the percentage of SQL queries whose execution durations fall within this range.

    • (1,2] ms: indicates that the execution duration is greater than 1 ms and less than or equal to 2 ms. The chart shows the percentage of SQL queries whose execution durations fall within this range.

    • (2,3] ms: indicates that the execution duration is greater than 2 ms and less than or equal to 3 ms. The chart shows the percentage of SQL queries whose execution durations fall within this range.

    • (3,10] ms: indicates that the execution duration is greater than 3 ms and less than or equal to 10 ms. The chart shows the percentage of SQL queries whose execution durations fall within this range.

    • (10,100] ms: indicates that the execution duration is greater than 10 ms and less than or equal to 100 ms. The chart shows the percentage of SQL queries whose execution durations fall within this range.

    • (0.1,1]s: indicates that the execution duration is greater than 0.1s and less than or equal to 1s. The chart shows the percentage of SQL queries whose execution durations fall within this range.

    • > 1s: indicates that the execution duration is greater than 1s. The chart shows the percentage of SQL queries whose execution durations fall within this range.

    Note

    The chart on the Execution Duration Distribution tab shows how the execution durations of SQL statements on the instance are distributed over time. A larger blue area indicates that the instance is healthier, whereas larger orange and red areas indicate that the instance is less healthy when the SQL statements are executed. A minimal sketch of how durations map to these ranges appears after this list.

  • Execution Duration: On the Execution Duration tab, you can specify a time range to view the execution durations of SQL queries.

  • Full Request Statistics: You can view the details of SQL statements based on the time range that you specify. The details include the SQL text, execution duration percentage, average execution duration, and execution trend for each SQL statement.

    Note

    You can calculate the execution duration percentage for the SQL statements that use a specific SQL template based on the following formula: Execution duration percentage = (Average execution duration of the SQL statements that use the SQL template × Number of executions of these statements)/Total execution duration of all SQL statements × 100%. A higher execution duration percentage indicates that the database instance uses more MySQL resources to execute the corresponding SQL statements. A worked example appears after this list.

  • SQL ID: You can click an SQL ID to view the performance trend and sample data of the SQL statements that use the corresponding SQL template.

  • SQL Sample: On the SQL Sample tab, you can view the client that initiated each sample SQL request.

    Note

    The UTF-8 character set is used to encode SQL samples.
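
The following sketch illustrates how execution durations map to the seven ranges on the Execution Duration Distribution tab. The sample durations in the derived table are hypothetical and are used only to demonstrate the boundaries; DAS performs this classification internally.

    -- Map an execution duration in milliseconds to the seven ranges shown above.
    -- The sample durations (0.5, 25, and 1500 ms) are illustrative values.
    SELECT
      duration_ms,
      CASE
        WHEN duration_ms <= 1    THEN '[0,1] ms'
        WHEN duration_ms <= 2    THEN '(1,2] ms'
        WHEN duration_ms <= 3    THEN '(2,3] ms'
        WHEN duration_ms <= 10   THEN '(3,10] ms'
        WHEN duration_ms <= 100  THEN '(10,100] ms'
        WHEN duration_ms <= 1000 THEN '(0.1,1] s'
        ELSE '> 1 s'
      END AS duration_range
    FROM (SELECT 0.5 AS duration_ms UNION ALL
          SELECT 25 UNION ALL
          SELECT 1500) AS sample_queries;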
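
The following sketch illustrates the execution duration percentage formula described in the Full Request Statistics section. The sql_template_stats table, its columns, and the sample values are hypothetical and are used only to demonstrate the calculation; DAS computes these statistics internally from SQL Explorer data.

    -- Hypothetical statistics table: one row per SQL template.
    CREATE TABLE sql_template_stats (
      sql_id          VARCHAR(32),
      avg_duration_ms DOUBLE,   -- average execution duration of the SQL template
      executions      BIGINT    -- number of executions of the SQL template
    );

    INSERT INTO sql_template_stats VALUES
      ('sql_a', 2.0, 30000),    -- 60,000 ms in total
      ('sql_b', 50.0, 800);     -- 40,000 ms in total

    -- Execution duration percentage =
    --   (Average execution duration x Number of executions) /
    --   Total execution duration of all SQL statements x 100%
    SELECT
      sql_id,
      avg_duration_ms * executions * 100 /
        (SELECT SUM(avg_duration_ms * executions) FROM sql_template_stats) AS duration_pct
    FROM sql_template_stats
    ORDER BY duration_pct DESC;
    -- Expected result: sql_a = 60 (%), sql_b = 40 (%).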

FAQ

Q: What does the logout! statement in the Full Request Statistics section on the SQL Explorer tab indicate?

A: The logout! statement indicates a disconnection. The execution duration of the logout! statement is the difference between the time of the last interaction and the time when the disconnection occurs. During this period, the connection remains idle. The 1158 code displayed in the Status column indicates a network disconnection, which may be caused by the following reasons:

  • The client connection times out.

  • The server closes the connection.

  • The server resets the connection because the idle time of the connection exceeds the value of the interactive_timeout or wait_timeout parameter.
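
To check the timeout values that apply to your instance, you can query the corresponding system variables, as shown in the following sketch:

    -- Check the idle-connection timeouts that can trigger the logout! record.
    SHOW VARIABLES LIKE 'interactive_timeout';
    SHOW VARIABLES LIKE 'wait_timeout';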

Q: Why does a percent sign (%) appear in the Access Source column on the Source Statistics tab of the SQL Explorer tab?

A: When you use a stored procedure, a percent sign (%) may be displayed in the Access Source column on the Source Statistics tab of the SQL Explorer tab. This occurs because the statements in a stored procedure are executed by the definer account of the procedure. If the host in the definer account is the wildcard %, the access source is recorded as %. You can perform the following operations to reproduce this situation.

Note

In this example, the database instance is an ApsaraDB RDS for MySQL instance, the test account is test_user, and the test database is testdb.

  1. In the ApsaraDB RDS console, create a database and a standard account and grant permissions on the database to the standard account. For more information, see Create accounts and databases.

  2. Use the test_user account to connect to the database instance by using the CLI. For more information, see Use a database client or the CLI to connect to an ApsaraDB RDS for MySQL instance.

  3. Switch to the testdb database and execute the following statements to create a stored procedure:

    -- Switch to the testdb database.
    USE testdb;
    
    -- Create a stored procedure.
    DELIMITER $$
    DROP PROCEDURE IF EXISTS `das` $$
    CREATE DEFINER=`test_user`@`%` PROCEDURE `das`()
    BEGIN
    SELECT * FROM information_schema.processlist WHERE Id = CONNECTION_ID();
    END $$
    DELIMITER ;
  4. Use a privileged account to connect to the database instance. For more information, see Use a database client or the CLI to connect to an ApsaraDB RDS for MySQL instance.

  5. Call the stored procedure that you created.

    -- Switch to the testdb database.
    USE testdb;
    
    -- Call the stored procedure.
    CALL das();
    
    +--------+-----------+--------+--------+---------+------+-----------+-------------------------------------------------------------------------+
    | ID     | USER      | HOST   | DB     | COMMAND | TIME | STATE     | INFO                                                                    |
    +--------+-----------+--------+--------+---------+------+-----------+-------------------------------------------------------------------------+
    | 487818 | test_user | %:2065 | testdb | Query   |    0 | executing | SELECT * FROM information_schema.processlist WHERE Id = CONNECTION_ID() |
    +--------+-----------+--------+--------+---------+------+-----------+-------------------------------------------------------------------------+

Q: Why is the database name displayed in the Logs section inconsistent with that in SQL statements?

A: The database name displayed in the Logs section is obtained from the session, whereas the database name in an SQL statement is specified by the user and depends on the user's input or query design, such as cross-database queries and dynamic SQL. As a result, the database name in an SQL statement may be inconsistent with the database name displayed in the Logs section.

Related API operations

  • GetErrorRequestSample: Asynchronously queries information about failed SQL queries in the SQL Explorer data of a database instance. You can query up to 20 failed SQL queries within a specific time range.

  • GetAsyncErrorRequestStatResult: Asynchronously queries the number of failed executions of SQL templates in the SQL Explorer data of a database instance.

  • GetAsyncErrorRequestListByCode: Asynchronously queries the IDs of SQL statements that generate a MySQL error code in the SQL Explorer data of a database instance.

  • GetAsyncErrorRequestStatByCode: Asynchronously queries the MySQL error codes in the SQL Explorer data of a database instance and the number of SQL queries corresponding to each error code.

  • GetFullRequestOriginStatByInstanceId: Queries the full request statistics in the SQL Explorer data of a database instance by access source.

  • GetFullRequestStatResultByInstanceId: Asynchronously queries the full request statistics in the SQL Explorer data of a database instance by SQL ID.

  • GetFullRequestSampleByInstanceId: Queries sample SQL statements in the SQL Explorer data of a database instance by SQL ID. You can query up to 20 sample SQL statements.

  • GetDasSQLLogHotData: Queries the details of the hot storage data that the SQL Explorer and Audit module generates for a database instance within the previous seven days.

Best practice

Troubleshoot slow queries