
AnalyticDB:DescribeSparkSQLDiagnosisList

Last Updated:Nov 10, 2025

Queries the diagnostic information about Spark SQL queries.

Debugging

You can call this operation directly in OpenAPI Explorer, which saves you the trouble of calculating signatures. After the call succeeds, OpenAPI Explorer can automatically generate SDK code samples.

Authorization information

The following table shows the authorization information corresponding to this API operation. The authorization information can be used in the Action policy element to grant a RAM user or RAM role the permissions to call this operation. Description of the table columns:

  • Operation: the value that you can use in the Action element to specify the operation on a resource.
  • Access level: the access level of each operation. The levels are read, write, and list.
  • Resource type: the type of the resource on which you can authorize the RAM user or the RAM role to perform the operation. Take note of the following items:
    • Required resource types are marked with an asterisk (*).
    • If the permissions cannot be granted at the resource level, All Resources is used in the Resource type column of the operation.
  • Condition Key: the condition key that is defined by the cloud service.
  • Associated operation: other operations that the RAM user or the RAM role must have permissions to perform before this operation can be called.
Operation | Access level | Resource type | Condition key | Associated operation
adb:DescribeSparkSQLDiagnosisList | get | *DBClusterLakeVersion (acs:adb:{#regionId}:{#accountId}:dbcluster/{#DBClusterId}) | none | none
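For reference, a minimal RAM policy granting this single operation on one cluster might look like the following sketch. The region ID and cluster ID are the example values from this page, and the account ID is a placeholder:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "adb:DescribeSparkSQLDiagnosisList",
      "Resource": "acs:adb:cn-hangzhou:1234567890123456:dbcluster/amv-2zez35ww415xjwk5"
    }
  ]
}
```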

Request parameters

Parameter | Type | Required | Description | Example
DBClusterId | string | Yes

The cluster ID.

Note: You can call the DescribeDBClusters operation to query the information about all AnalyticDB for MySQL Data Lakehouse Edition clusters within a region, including cluster IDs.
amv-2zez35ww415xjwk5
RegionId | string | Yes

The region ID.

Note: You can call the DescribeRegions operation to query the most recent region list.
cn-hangzhou
StatementId | long | No

The unique ID of the code block in the Spark job.

123
PageNumber | integer | No

The page number.

1
PageSize | integer | No

The number of entries per page.

30
Order | string | No

The order in which to sort the query results. Specify the value as a JSON string. Example: [{"Field":"MaxExclusiveTime","Type":"Asc"}].

  • Field specifies the field by which to sort the query results. Valid values:

    • MaxExclusiveTime: the maximum operator execution duration.
    • PeakMemory: the peak memory.
    • QueryStartTime: the start time of the query.
    • QueryWallclockTime: the execution duration of the query.
  • Type specifies the sorting order. Valid values:

    • Asc: ascending order.
    • Desc: descending order.
Note: If you do not specify this parameter, query results are sorted by MaxExclusiveTime in ascending order.
[{\"Field\":\"QueryStartTime\",\"Type\":\"Desc\"}]
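Because Order is passed as a JSON string rather than a structured object, it is easiest to build it programmatically and serialize it, as in this minimal sketch (pure Python, no SDK dependency):

```python
import json

# Sort by query start time, newest first. Field and Type values must
# match the documented enums exactly (case-sensitive).
order = [{"Field": "QueryStartTime", "Type": "Desc"}]

# The serialized string is what you pass as the Order request parameter.
order_param = json.dumps(order)
print(order_param)  # [{"Field": "QueryStartTime", "Type": "Desc"}]
```

Serializing with json.dumps avoids hand-escaping the quotation marks shown in the example value above.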
MinStartTime | string | No

The earliest start time.

2024-11-28 22:00:00
MaxStartTime | string | No

The latest start time.

2024-11-28 23:00:00
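Putting the parameters together, a request could be assembled as a plain dictionary before being handed to an SDK client. This is an illustrative sketch only; the keys are the documented parameter names, and the time format mirrors the example values above:

```python
import json
from datetime import datetime, timedelta

# One-hour diagnosis window, formatted like the example values above.
end = datetime(2024, 11, 28, 23, 0, 0)
start = end - timedelta(hours=1)

params = {
    "DBClusterId": "amv-2zez35ww415xjwk5",  # example cluster ID from this page
    "RegionId": "cn-hangzhou",
    "PageNumber": 1,
    "PageSize": 30,
    "Order": json.dumps([{"Field": "MaxExclusiveTime", "Type": "Desc"}]),
    "MinStartTime": start.strftime("%Y-%m-%d %H:%M:%S"),
    "MaxStartTime": end.strftime("%Y-%m-%d %H:%M:%S"),
}
print(params["MinStartTime"])  # 2024-11-28 22:00:00
```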

Response parameters

Parameter | Type | Description | Example
object

The response parameters.

TotalCount | integer

The total number of entries returned.

1343
RequestId | string

The request ID.

A91C9D07-7462-5F35-BB47-83629CE6CCAC
PageNumber | integer

The page number.

1
PageSize | integer

The number of entries per page.

30
SQLDiagnosisList | array&lt;object&gt;

The queried diagnostic information.

SQLDiagnosis | object

The diagnostic information about a single query.

State | string

The execution status of the query. Valid values:

  • COMPLETED
  • CANCELED
  • ABORTED
  • FAILED
COMPLETED
SQL | string

The SQL statement.

select * from device where name = '105506012112790031'
AppId | string

The application ID.

Note: You can call the ListSparkApps operation to query a list of Spark application IDs.
s202404291020bjd448ad40002122
StatementId | long

The unique ID of the code block in the Spark job.

1
InnerQueryId | long

The ID of the query executed within the Spark application.

1
StartTime | string

The start time of the query. The time follows the ISO 8601 standard in the yyyy-MM-ddTHH:mmZ format. The time is displayed in UTC.

2024-11-20 09:09:00
ElapsedTime | long

The execution duration of the query. Unit: milliseconds.

100
MaxExclusiveTime | long

The maximum operator execution duration. Unit: milliseconds.

90
PeakMemory | long

The maximum operator memory used. Unit: bytes.

1024
ScanRowCount | string

The number of entries scanned.

100
AccessDeniedDetail | string

The information about the request denial.

{}

Examples

Sample success responses

JSON format

{
  "TotalCount": 1343,
  "RequestId": "A91C9D07-7462-5F35-BB47-83629CE6CCAC",
  "PageNumber": 1,
  "PageSize": 30,
  "SQLDiagnosisList": [
    {
      "State": "COMPLETED",
      "SQL": "select * from device where name = '105506012112790031'",
      "AppId": "s202404291020bjd448ad40002122",
      "StatementId": 1,
      "InnerQueryId": 1,
      "StartTime": "2024-11-20 09:09:00",
      "ElapsedTime": 100,
      "MaxExclusiveTime": 90,
      "PeakMemory": 1024,
      "ScanRowCount": 100
    }
  ],
  "AccessDeniedDetail": {}
}
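As a sketch of how a caller might consume such a response, the snippet below (pure Python, no SDK dependency; the response dictionary is trimmed to the fields it uses) computes the number of result pages and picks the query with the largest MaxExclusiveTime, mirroring the sample response above:

```python
import math

# Response shaped like the sample above, trimmed to the fields used here.
response = {
    "TotalCount": 1343,
    "PageSize": 30,
    "SQLDiagnosisList": [
        {"SQL": "select * from device where name = '105506012112790031'",
         "State": "COMPLETED", "ElapsedTime": 100, "MaxExclusiveTime": 90},
    ],
}

# Total pages to fetch when paginating with PageNumber/PageSize.
total_pages = math.ceil(response["TotalCount"] / response["PageSize"])
print(total_pages)  # 45

# The query with the largest maximum operator execution duration;
# useful when Order was not set on the request.
slowest = max(response["SQLDiagnosisList"], key=lambda q: q["MaxExclusiveTime"])
print(slowest["SQL"])
```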

Error codes

HTTP status code | Error code | Error message | Description
404 | InvalidDBCluster.NotFound | The DBClusterId provided does not exist in our records. | The specified DBClusterId does not exist. Make sure that the DBClusterId value is valid.

For a list of error codes, see Service error codes.