SQL Explorer gives you deep visibility into SQL activity on your database instances — capturing every DQL, DML, and DDL statement so you can pinpoint performance bottlenecks, audit access patterns, and compare workloads across time periods.
With SQL Explorer, you can:
- Identify which SQL statements consume the most database resources
- Visualize execution duration distributions to assess instance health at a glance
- Compare SQL workloads between two time periods to detect regressions
- Trace the access source of each request for auditing and troubleshooting
Prerequisites
Before you begin, ensure that:
- The database instance is connected to DAS and in the Normal Access state
- The SQL Explorer and Audit module is enabled for the instance. For details, see the Enable SQL Explorer and Audit section in "Overview"
How it works
SQL Explorer captures execution data for all SQL statements directly from the database kernel. This approach adds minimal overhead — CPU consumption is negligible and there is no impact on network, disk, or storage space on the instance. Audit data is stored in DAS, not on the instance itself.
Data is split into two tiers:
- Analytical and statistical data (excluding SQL details): retained for 30 days.
- SQL details: retained for the duration you specified when enabling DAS Enterprise Edition.
Data older than seven days is moved to cold storage. When you query SQL details from cold storage, DAS creates an asynchronous calculation task and charges the query on a pay-as-you-go basis. Click Task list to monitor progress.
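The hot/cold split described above can be sketched as a simple age check. This is an illustration of the retention rule only; the tier names and function are my own, not a DAS API:

```python
from datetime import datetime, timedelta, timezone

HOT_WINDOW = timedelta(days=7)  # data newer than this stays in hot storage

def storage_tier(generated_at, now=None):
    """Return which storage tier a SQL detail record falls into."""
    now = now or datetime.now(timezone.utc)
    return "hot" if now - generated_at <= HOT_WINDOW else "cold"
```

Queries that resolve to the cold tier are the ones that trigger an asynchronous calculation task and pay-as-you-go charges.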
Disabling the SQL Explorer and Audit module immediately clears all audit data but does not interrupt your workloads. If you need the data, export it before you disable the module. See the Disable SQL Explorer and Audit section in "Overview".
For supported databases and regions, see Editions and supported features.
Analyze SQL with SQL Explorer
Open SQL Explorer
1. Log on to the DAS console.
2. In the left-side navigation pane, choose Intelligent O&M Center > Instance Monitoring.
3. Find the target database instance and click its instance ID.
4. Choose Request Analysis > SQL Explorer and Audit, then click the SQL Explorer tab.
Choose an analysis mode
Pick the mode that matches your goal:
| Goal | Mode to use |
|---|---|
| Understand overall SQL patterns in a time window | Display by Time Range |
| Detect regressions between two time periods | Display by Comparison |
| Audit which clients are sending requests | Source Statistics |
| Get optimization recommendations for your workload | SQL Review |
| Correlate SQL with a specific performance metric | Related SQL Identification |
When selecting a time range, the end time must be later than the start time, the interval cannot exceed 24 hours, and the range must fall within the data storage duration and begin after DAS Enterprise Edition was enabled.
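The time range constraints above can be summarized in a small validation sketch (function name, argument names, and message strings are my own):

```python
from datetime import datetime, timedelta

def validate_time_range(start, end, retention_days, enabled_at, now=None):
    """Check a query window against the SQL Explorer time range rules."""
    now = now or datetime.now()
    if end <= start:
        return "end time must be later than start time"
    if end - start > timedelta(hours=24):
        return "interval must not exceed 24 hours"
    if start < now - timedelta(days=retention_days):
        return "range falls outside the data storage duration"
    if start < enabled_at:
        return "range begins before DAS Enterprise Edition was enabled"
    return "ok"
```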
Diagnose performance issues
A typical workflow for diagnosing SQL performance problems:
1. Open Display by Time Range and review the Execution Duration Distribution chart. A large blue area indicates healthy execution times. Growing orange or red areas signal degraded performance.
2. If you spot a problem period, switch to Display by Comparison to confirm whether the degradation is new by comparing it against a healthy baseline window.
3. In the Full Request Statistics section, sort by execution duration percentage to find the SQL templates consuming the most resources.
4. Click an SQL ID to view the performance trend and sample data for that SQL template.
5. On the SQL Sample tab, identify the client that initiated each request.
SQL samples are encoded in UTF-8. SQL Explorer collects statistical data every minute. The execution duration percentage of an SQL template is calculated as: (Execution duration of the template × Number of executions) / (Total execution duration × Total number of executions) × 100%. A higher percentage indicates greater MySQL resource consumption.

Export SQL logs

In the Full Request Statistics section, you can export up to 1,000 SQL logs at a time. For larger exports or longer time ranges, use the search (audit) feature instead.
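As a quick sketch, the execution duration percentage formula given in the notes above can be transcribed directly (function and argument names are my own; this mirrors the documented formula rather than any DAS internals):

```python
def duration_percentage(tpl_duration_ms, tpl_count, total_duration_ms, total_count):
    """Execution duration percentage of one SQL template:
    (template duration x executions) / (total duration x total executions) x 100."""
    return (tpl_duration_ms * tpl_count) / (total_duration_ms * total_count) * 100
```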
Feature reference
Execution duration distribution
The Execution Duration Distribution tab shows how SQL query execution times are distributed within the selected time window. Statistical data is collected every minute and grouped into seven ranges:
| Range | Meaning |
|---|---|
| [0,1] ms | 0 ms to 1 ms |
| (1,2] ms | Greater than 1 ms, up to 2 ms |
| (2,3] ms | Greater than 2 ms, up to 3 ms |
| (3,10] ms | Greater than 3 ms, up to 10 ms |
| (10,100] ms | Greater than 10 ms, up to 100 ms |
| (0.1,1] s | Greater than 0.1 s, up to 1 s |
| > 1 s | Greater than 1 s |
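Mapping a measured duration into the seven ranges in the table is straightforward; a minimal sketch (names are my own, not part of SQL Explorer):

```python
# Upper bounds (ms) and labels matching the seven ranges in the table above.
BUCKETS = [
    (1, "[0,1] ms"),
    (2, "(1,2] ms"),
    (3, "(2,3] ms"),
    (10, "(3,10] ms"),
    (100, "(10,100] ms"),
    (1000, "(0.1,1] s"),
]

def duration_bucket(ms):
    """Return the distribution range label for an execution duration in ms."""
    for upper, label in BUCKETS:
        if ms <= upper:
            return label
    return "> 1 s"
```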
The chart uses color to indicate instance health: a large blue area means most queries are completing quickly. Expanding orange and red areas indicate a growing proportion of slow queries.
SQL Review
SQL Review performs workload analysis on the database instance within a specified time range and a baseline time range. It surfaces:
- Index optimization suggestions
- SQL rewrite suggestions
- Top resource-consuming SQL statements
- New SQL statements
- Failed SQL statements
- SQL feature analysis
- SQL statements with high execution variation
- SQL statements with deteriorated performance
- Top tables by traffic
For details, see SQL Review.
Related SQL Identification
Select the metrics you want to analyze and click Analysis. DAS identifies the SQL statements that correlate most strongly with the selected metric. The analysis typically takes 1 to 5 minutes.
Queries on data older than seven days are charged on a pay-as-you-go basis. See Billing details.
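DAS does not document its correlation method here; purely as an illustration of the idea, one could rank SQL templates by the Pearson correlation between each template's per-minute execution counts and the metric series (all names below are my own):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def best_correlated(metric, per_template_counts):
    """Return the template ID whose count series best tracks the metric."""
    return max(per_template_counts,
               key=lambda t: abs(pearson(metric, per_template_counts[t])))
```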
Usage notes
- Sharding behavior: When an SQL statement runs on an ApsaraDB RDS for MySQL instance attached to PolarDB-X 1.0, multiple SQL logs are generated on the RDS instance due to sharding.
- Data migration: Transient connections during data migration can cause data loss in SQL Explorer. This is expected behavior.
- High-load instances: If the instance load is high, incremental data statistics may be inaccurate.
- PgBouncer: SQL statements executed through PgBouncer on an ApsaraDB RDS for PostgreSQL instance are not recorded.
SQL statement length limits
SQL statements in SQL logs are capped at 8,192 bytes. A prefix is added during data collection, so the actual recorded length is slightly less than the configured limit.
For ApsaraDB RDS for MySQL instances and PolarDB for MySQL clusters, configure the maximum SQL statement length using the following parameters:
| MySQL version | Parameter |
|---|---|
| 5.6 or 5.7 | loose_rds_audit_max_sql_size |
| 8.0 | loose_rds_audit_log_event_buffer_size |
If you set the parameter to 8,192 bytes or less, the configured value is the upper limit; any part of an SQL statement beyond it is not recorded. If you set the parameter to more than 8,192 bytes, the limit still defaults to 8,192 bytes, and any excess is not recorded. In both cases, the prefix added during data collection means the actual recorded length is slightly less than the limit.
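The truncation rules above reduce to clamping the configured value at 8,192 bytes and spending part of that budget on the collection prefix. A minimal sketch (the prefix content and function names are assumptions for illustration):

```python
AUDIT_CAP = 8192  # hard cap, in bytes, on recorded SQL text

def effective_limit(configured):
    """Upper limit actually applied: the configured value, capped at 8,192."""
    return min(configured, AUDIT_CAP)

def recorded_sql(sql, configured, prefix=b""):
    """Truncate a statement the way the rules above describe: the prefix
    consumes part of the budget, so slightly less of the SQL survives."""
    budget = effective_limit(configured) - len(prefix)
    return prefix + sql.encode("utf-8")[:budget]
```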
FAQ
API reference
| Operation | Description |
|---|---|
| GetErrorRequestSample | Asynchronously queries up to 20 failed SQL queries within a time range from SQL Explorer data |
| GetAsyncErrorRequestStatResult | Queries the number of failed executions per SQL template |
| GetAsyncErrorRequestListByCode | Queries IDs of SQL statements that generated a specific MySQL error code |
| GetAsyncErrorRequestStatByCode | Queries MySQL error codes and the number of SQL queries per code |
| GetFullRequestOriginStatByInstanceId | Queries full request statistics by access source |
| GetFullRequestStatResultByInstanceId | Queries full request statistics by SQL ID |
| GetFullRequestSampleByInstanceId | Queries up to 20 sample SQL statements by SQL ID |
| GetDasSQLLogHotData | Queries hot storage data generated within the previous seven days |