
Database Autonomy Service: SQL Explorer

Last Updated: Mar 28, 2026

SQL Explorer gives you deep visibility into SQL activity on your database instances — capturing every DQL, DML, and DDL statement so you can pinpoint performance bottlenecks, audit access patterns, and compare workloads across time periods.

With SQL Explorer, you can:

  • Identify which SQL statements consume the most database resources

  • Visualize execution duration distributions to assess instance health at a glance

  • Compare SQL workloads between two time periods to detect regressions

  • Trace the access source of each request for auditing and troubleshooting

Prerequisites

Before you begin, ensure that:

  • The database instance is connected to DAS and in the Normal Access state

  • The SQL Explorer and Audit module is enabled for the instance. For details, see the Enable SQL Explorer and Audit section in "Overview"

How it works

SQL Explorer captures execution data for all SQL statements directly from the database kernel. This approach adds minimal overhead — CPU consumption is negligible and there is no impact on network, disk, or storage space on the instance. Audit data is stored in DAS, not on the instance itself.

Data is split into two tiers:

  • Analytical and statistical data (excluding SQL details): retained for 30 days.

  • SQL details: retained for the duration you specified when enabling DAS Enterprise Edition.

Data older than seven days is moved to cold storage. When you query SQL details from cold storage, DAS creates an asynchronous calculation task and charges the query on a pay-as-you-go basis. Click Task list to monitor progress.
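The tiering above can be sketched as a tiny classifier over a query's target date (illustrative only; the tier names and the function are this sketch's, not a DAS API):

```python
from datetime import date, timedelta

HOT_DAYS = 7  # data newer than this stays in hot storage

def storage_tier(query_date: date, today: date, sql_detail_retention: int) -> str:
    """Classify which storage tier a query for query_date will hit."""
    age = (today - query_date).days
    if age < 0:
        raise ValueError("query date is in the future")
    if age <= HOT_DAYS:
        return "hot"        # served directly from hot storage
    if age <= sql_detail_retention:
        return "cold"       # asynchronous task, billed pay-as-you-go
    return "expired"        # beyond the configured retention

# Example: SQL details retained for 30 days (as configured when enabling
# DAS Enterprise Edition).
today = date(2026, 3, 28)
print(storage_tier(today - timedelta(days=3), today, 30))   # hot
print(storage_tier(today - timedelta(days=20), today, 30))  # cold
print(storage_tier(today - timedelta(days=40), today, 30))  # expired
```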

Important

Disabling the SQL Explorer and Audit module immediately clears all audit data; disabling it does not affect your business. If you need the data, export it before you disable the module. See the Disable SQL Explorer and Audit section in "Overview".

For supported databases and regions, see Editions and supported features.

Analyze SQL with SQL Explorer

Open SQL Explorer

  1. Log on to the DAS console.

  2. In the left-side navigation pane, choose Intelligent O&M Center > Instance Monitoring.

  3. Find the target database instance and click its instance ID.

  4. Choose Request Analysis > SQL Explorer and Audit, then click the SQL Explorer tab.

Choose an analysis mode

Pick the mode that matches your goal:

| Goal | Mode to use |
| --- | --- |
| Understand overall SQL patterns in a time window | Display by Time Range |
| Detect regressions between two time periods | Display by Comparison |
| Audit which clients are sending requests | Source Statistics |
| Get optimization recommendations for your workload | SQL Review |
| Correlate SQL with a specific performance metric | Related SQL Identification |

When you select a time range, the end time must be later than the start time, the interval cannot exceed 24 hours, and the range must fall within the data storage duration and be later than the time when DAS Enterprise Edition was enabled.

Diagnose performance issues

A typical workflow for diagnosing SQL performance problems:

  1. Open Display by Time Range and review the Execution Duration Distribution chart. A large blue area indicates healthy execution times. Growing orange or red areas signal degraded performance.

  2. If you spot a problem period, switch to Display by Comparison to confirm whether the degradation is new by comparing it against a healthy baseline window.

  3. In the Full Request Statistics section, sort by execution duration percentage to find the SQL templates consuming the most resources.

  4. Click an SQL ID to view the performance trend and sample data for that SQL template.

  5. On the SQL Sample tab, identify the client that initiated each request.

Note the following:

  • SQL samples are encoded using the UTF-8 character set.

  • SQL Explorer collects statistical data every minute.

  • The execution duration percentage for an SQL template is calculated as: (Execution duration of the template × Number of executions) / (Total execution duration × Total executions) × 100%. A higher percentage indicates greater MySQL resource consumption.
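As a sanity check on that formula, here is a minimal Python sketch (illustrative only; DAS computes this server-side from its per-minute statistics, and all numbers below are made up):

```python
def duration_percentage(template_duration_ms: float, template_count: int,
                        total_duration_ms: float, total_count: int) -> float:
    """Execution duration percentage of one SQL template, per the formula above."""
    return (template_duration_ms * template_count) / (total_duration_ms * total_count) * 100

# Example: a template ran 500 times at 120 ms each, against an instance-wide
# total of 4,000 ms of execution time over 10,000 executions.
print(duration_percentage(120.0, 500, 4000.0, 10000))  # about 0.15 (percent)
```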

Export SQL logs

In the Full Request Statistics section, you can export up to 1,000 SQL logs at a time. For larger exports or longer time ranges, use the search (audit) feature instead.

Feature reference

Execution duration distribution

The Execution Duration Distribution tab shows how SQL query execution times are distributed within the selected time window. Statistical data is collected every minute and grouped into seven ranges:

| Range | Meaning |
| --- | --- |
| [0,1] ms | 0 ms to 1 ms |
| (1,2] ms | Greater than 1 ms, up to 2 ms |
| (2,3] ms | Greater than 2 ms, up to 3 ms |
| (3,10] ms | Greater than 3 ms, up to 10 ms |
| (10,100] ms | Greater than 10 ms, up to 100 ms |
| (0.1,1] s | Greater than 0.1 s, up to 1 s |
| > 1 s | Greater than 1 s |

The chart uses color to indicate instance health: a large blue area means most queries are completing quickly. Expanding orange and red areas indicate a growing proportion of slow queries.
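The seven ranges can be reproduced with a small bucketing helper, for example when post-processing exported SQL logs (a sketch, not part of DAS):

```python
import bisect
from collections import Counter

# Upper bounds in milliseconds of the first six ranges; the last is open-ended.
BOUNDS_MS = [1, 2, 3, 10, 100, 1000]
LABELS = ["[0,1] ms", "(1,2] ms", "(2,3] ms", "(3,10] ms",
          "(10,100] ms", "(0.1,1] s", "> 1 s"]

def bucket(duration_ms: float) -> str:
    """Map an execution duration onto one of the seven ranges.
    bisect_left keeps each upper bound inclusive, matching the (a,b] notation."""
    return LABELS[bisect.bisect_left(BOUNDS_MS, duration_ms)]

# Distribution of a handful of made-up sample durations.
samples = [0.4, 1.2, 5.0, 40.0, 1500.0]
print(Counter(bucket(d) for d in samples))
```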

SQL Review

SQL Review performs workload analysis on the database instance within a specified time range and a baseline time range. It surfaces:

  • Index optimization suggestions

  • SQL rewrite suggestions

  • Top resource-consuming SQL statements

  • New SQL statements

  • Failed SQL statements

  • SQL feature analysis

  • SQL statements with high execution variation

  • SQL statements with deteriorated performance

  • Top tables by traffic

For details, see SQL Review.

Related SQL Identification

Select the metrics you want to analyze and click Analysis. DAS identifies the SQL statements that best correlate with the selected metric performance. This typically takes 1 to 5 minutes.
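Conceptually, "best correlates" can be pictured as ranking SQL templates by how strongly their per-minute execution counts track the chosen metric. A toy sketch with a hand-rolled Pearson correlation (DAS's actual algorithm is not documented here, and all data below is made up):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

cpu_util = [55, 60, 78, 90, 88, 62]        # metric samples, one per minute
sql_counts = {
    "sql_a": [10, 11, 10, 12, 11, 10],     # steady background workload
    "sql_b": [1, 3, 20, 35, 33, 4],        # spikes together with the metric
}

# Rank templates by correlation with the metric, strongest first.
ranked = sorted(sql_counts, key=lambda s: pearson(cpu_util, sql_counts[s]), reverse=True)
print(ranked[0])  # sql_b tracks the CPU spike most closely
```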

Important

Queries on data older than seven days are charged on a pay-as-you-go basis. See Billing details.

Usage notes

  • Sharding behavior: When an SQL statement runs on an ApsaraDB RDS for MySQL instance attached to PolarDB-X 1.0, multiple SQL logs are generated on the RDS instance due to sharding.

  • Data migration: Transient connections during data migration can cause data loss in SQL Explorer. This is expected behavior.

  • High-load instances: If the instance load is high, incremental data statistics may be inaccurate.

  • PgBouncer: SQL statements executed through PgBouncer on an ApsaraDB RDS for PostgreSQL instance are not recorded.

SQL statement length limits

SQL statements in SQL logs are capped at 8,192 bytes. A prefix is added during data collection, so the actual recorded length is slightly less than the configured limit.

For ApsaraDB RDS for MySQL instances and PolarDB for MySQL clusters, configure the maximum SQL statement length using the following parameters:

| MySQL version | Parameter |
| --- | --- |
| 5.6 or 5.7 | loose_rds_audit_max_sql_size |
| 8.0 | loose_rds_audit_log_event_buffer_size |

  • If you set the parameter to 8,192 bytes or less, the configured value is the upper limit: the part of an SQL statement beyond it is not recorded.

  • If you set the parameter to more than 8,192 bytes, the limit still defaults to 8,192 bytes: the part of an SQL statement beyond 8,192 bytes is not recorded.

In both cases, because a prefix is added during data collection, the actual recorded length is slightly less than the limit.
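The two rules collapse to a single expression: the effective cap is the smaller of the configured value and 8,192 bytes, minus the prefix. A sketch (the prefix size is not documented; 32 bytes is a hypothetical placeholder):

```python
AUDIT_CAP = 8192  # hard upper limit in bytes on recorded SQL text

def recorded_length(sql_len: int, configured_limit: int, prefix_len: int = 32) -> int:
    """Bytes of an SQL statement that end up in the log.
    prefix_len stands in for the prefix added during collection
    (its actual size is not documented; 32 is a made-up value)."""
    effective = min(configured_limit, AUDIT_CAP) - prefix_len
    return min(sql_len, effective)

print(recorded_length(10000, 16384))  # capped at 8,192 minus the prefix
print(recorded_length(100, 4096))     # short statements are recorded in full
```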

FAQ

What does the `logout!` statement in Full Request Statistics indicate?

`logout!` marks a disconnection event. The execution duration shown is the idle time between the last interaction and the moment the connection dropped. Status code 1158 indicates a network disconnection, which can happen because the client connection timed out, the server disconnected, or the connection exceeded the `interactive_timeout` or `wait_timeout` threshold.

Why does a percent sign (%) appear in the Access Source column on the Source Statistics tab?

This happens with stored procedures. When a stored procedure is defined with DEFINER=`user`@`%`, the % wildcard from the definer appears in the Access Source column. The example below reproduces this behavior using an ApsaraDB RDS for MySQL instance with test account test_user and database testdb.

  1. In the ApsaraDB RDS console, create a database and a standard account, and grant the account permissions on the database. See Create accounts and databases.

  2. Connect to the instance as test_user using the CLI. See Connect to an ApsaraDB RDS for MySQL instance.

  3. Switch to testdb and create the stored procedure:

    -- Switch to the testdb database.
    USE testdb;
    
    -- Create a stored procedure.
    DELIMITER $$
    DROP PROCEDURE IF EXISTS `das` $$
    CREATE DEFINER=`test_user`@`%` PROCEDURE `das`()
    BEGIN
    SELECT * FROM information_schema.processlist WHERE Id = CONNECTION_ID();
    END $$
    DELIMITER ;
  4. Connect to the instance with a privileged account. See Connect to an ApsaraDB RDS for MySQL instance.

  5. Call the stored procedure:

    -- Switch to the testdb database.
    USE testdb;
    
    -- Call the stored procedure.
    CALL das();

    The result shows the host as %:2065, which is the value that appears in the Access Source column.

    +--------+-----------+--------+--------+---------+------+-----------+-------------------------------------------------------------------------+
    | ID     | USER      | HOST   | DB     | COMMAND | TIME | STATE     | INFO                                                                    |
    +--------+-----------+--------+--------+---------+------+-----------+-------------------------------------------------------------------------+
    | 487818 | test_user | %:2065 | testdb | Query   |    0 | executing | SELECT * FROM information_schema.processlist WHERE Id = CONNECTION_ID() |
    +--------+-----------+--------+--------+---------+------+-----------+-------------------------------------------------------------------------+

Why is the database name in the Logs section different from the one in SQL statements?

The Logs section reads the database name from the session. The database name in SQL statements comes from user input — for example, cross-database queries or dynamic SQL can reference a different database than the one the session is connected to.

Does SQL Explorer and Audit affect database performance?

The impact is negligible: CPU and memory consumption is minimal, and there is no impact on network, disk performance, or instance storage space because audit data is stored in DAS, not on the instance.

Why is a captured SQL statement incomplete? How do I view the full statement?

The statement exceeds the length limit. In SQL Explorer, search by the thread ID or transaction ID to view the full details.

API reference

| Operation | Description |
| --- | --- |
| GetErrorRequestSample | Asynchronously queries up to 20 failed SQL queries within a time range from SQL Explorer data |
| GetAsyncErrorRequestStatResult | Queries the number of failed executions per SQL template |
| GetAsyncErrorRequestListByCode | Queries IDs of SQL statements that generated a specific MySQL error code |
| GetAsyncErrorRequestStatByCode | Queries MySQL error codes and the number of SQL queries per code |
| GetFullRequestOriginStatByInstanceId | Queries full request statistics by access source |
| GetFullRequestStatResultByInstanceId | Queries full request statistics by SQL ID |
| GetFullRequestSampleByInstanceId | Queries up to 20 sample SQL statements by SQL ID |
| GetDasSQLLogHotData | Queries hot storage data generated within the previous seven days |
