
AnalyticDB:ExecuteSparkReplStatement

Last Updated:Nov 10, 2025

Executes a block of code in a Spark job.

Debugging

You can call this operation directly in OpenAPI Explorer, which eliminates the need to calculate signatures. After the call succeeds, OpenAPI Explorer can automatically generate SDK code samples.

Authorization information

The following table describes the authorization information for this API operation. You can use this information in the Action element of a RAM policy to grant a RAM user or RAM role the permissions to call this operation. Description:

  • Operation: the value that you can use in the Action element to specify the operation on a resource.
  • Access level: the access level of each operation. The levels are read, write, and list.
  • Resource type: the type of the resource on which you can authorize the RAM user or RAM role to perform the operation. Take note of the following items:
    • Required resource types are marked with an asterisk (*).
    • If the permissions cannot be granted at the resource level, All Resources is used in the Resource type column of the operation.
  • Condition key: the condition key that is defined by the cloud service.
  • Associated operation: other operations that the RAM user or RAM role must be authorized to perform so that this operation can be completed.
Operation: adb:ExecuteSparkReplStatement
Access level: none
Resource type: *DBClusterLakeVersion
  acs:adb:{#regionId}:{#accountId}:dbcluster/{#dbClusterId}/resourcegroup/{#resourceGroupName}/sparkapp/{#sparkAppId}
Condition key: none
Associated operation: none
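For reference, a RAM policy statement that grants this operation on a specific Spark application might look like the following. All IDs (region, account, cluster, resource group, and Spark application) are placeholders; substitute your own values.

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "adb:ExecuteSparkReplStatement",
      "Resource": "acs:adb:cn-hangzhou:123456789012****:dbcluster/amv-test****/resourcegroup/spark_group/sparkapp/s202411071444hzdvk486d9d2001****"
    }
  ]
}
```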

Request parameters

AppId (string, optional)
  The application ID.
  Note: You can call the ListSparkApps operation to query Spark application IDs.
  Example: s202411071444hzdvk486d9d2001****

Code (string, required)
  The code that you want to execute.
  Example: print(1+1)

CodeType (string, required)
  The language of the code. Valid values:
    • SCALA
    • PYTHON
  Example: PYTHON

SessionId (long, required)
  The ID of the session that you want to use to execute the code.
  Example: 123
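As a sketch, the required parameters above can be validated client-side before a request is sent. This is plain Python, independent of any SDK; request signing and transport are assumed to be handled by OpenAPI Explorer or a generated SDK, and the helper name build_request is hypothetical:

```python
# Client-side sanity checks for ExecuteSparkReplStatement parameters.
# Code, CodeType, and SessionId are required; AppId is optional.

VALID_CODE_TYPES = {"SCALA", "PYTHON"}

def build_request(code, code_type, session_id, app_id=None):
    """Return a parameter dict for ExecuteSparkReplStatement, or raise ValueError."""
    if not code or not code.strip():
        raise ValueError("Code must be a non-empty string")
    if code_type not in VALID_CODE_TYPES:
        raise ValueError(f"CodeType must be one of {sorted(VALID_CODE_TYPES)}")
    if not isinstance(session_id, int):
        raise ValueError("SessionId must be a long integer")
    params = {"Code": code, "CodeType": code_type, "SessionId": session_id}
    if app_id is not None:
        params["AppId"] = app_id  # optional; obtainable via ListSparkApps
    return params

print(build_request("print(1+1)", "PYTHON", 123))
# → {'Code': 'print(1+1)', 'CodeType': 'PYTHON', 'SessionId': 123}
```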

Response parameters

The response is an object that contains the following parameters.

RequestId (string)
  The request ID.
  Example: 1AD222E9-E606-4A42-BF6D-8A4442913CEF

Data (object)
  The returned data, with the following fields:

  StatementId (long)
    The unique ID of the code block in the Spark job.
    Example: 123

  Code (string)
    The code that was executed.
    Example: print(1+1)

  CodeType (string)
    The code type. Valid values:
      • SCALA
      • PYTHON
    Example: PYTHON

  CodeState (string)
    The execution state of the code. Valid values:
      • CANCELLED
      • RUNNING
      • SUCCEEDED
      • ERROR
    Example: RUNNING

  AliyunUid (long)
    The ID of the Alibaba Cloud account that owns the cluster.
    Example: 17108278707****

  OutputType (string)
    The type of the execution result. Valid values:
      • TEXT: text content that conforms to Apache Livy.
      • TABLE: table content that conforms to Apache Livy.
    Example: TEXT

  Output (string)
    The code execution result, a JSON string that conforms to Apache Livy.
    Example: {"text/plain": 2}

  Columns (array of strings)
    The column names. Each element is one column name.
    Example: col1

  Error (string)
    The error message.
    Example: StackOverflow Exception

  StartTime (long)
    The start time of the execution, a UNIX timestamp in milliseconds since January 1, 1970, 00:00:00 UTC.
    Example: 1730968125000

  EndTime (long)
    The end time of the execution, a UNIX timestamp in milliseconds since January 1, 1970, 00:00:00 UTC.
    Example: 1730968125000
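StartTime and EndTime are millisecond UNIX timestamps; for example, the value 1730968125000 can be converted to a readable UTC time with standard-library Python (no SDK required):

```python
from datetime import datetime, timezone

def to_utc(ms):
    """Convert a millisecond UNIX timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(to_utc(1730968125000).isoformat())
# → 2024-11-07T08:28:45+00:00
```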

Examples

Sample success responses

JSON format

{
  "RequestId": "1AD222E9-E606-4A42-BF6D-8A4442913CEF",
  "Data": {
    "StatementId": 123,
    "Code": "print(1+1)",
    "CodeType": "PYTHON",
    "CodeState": "RUNNING",
    "AliyunUid": 0,
    "OutputType": "TEXT",
    "Output": {
      "text/plain": 2
    },
    "Columns": [
      "col1"
    ],
    "Error": "StackOverflow Exception",
    "StartTime": 1730968125000,
    "EndTime": 1730968125000
  }
}
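The Output field is declared as a string, so a client would typically parse it as Livy-style JSON keyed by MIME type. A minimal sketch, assuming the response Data object has already been deserialized into a dict (the function name extract_text_output is hypothetical):

```python
import json

def extract_text_output(data):
    """Return the text/plain result from a Data object, or None.

    `data` is the Data object of an ExecuteSparkReplStatement response;
    Output is assumed to be a JSON string that conforms to Apache Livy.
    """
    if data.get("OutputType") != "TEXT":
        return None
    output = json.loads(data["Output"])
    return output.get("text/plain")

data = {
    "StatementId": 123,
    "CodeState": "SUCCEEDED",
    "OutputType": "TEXT",
    "Output": "{\"text/plain\": 2}",
}
print(extract_text_output(data))
# → 2
```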

Error codes

Each entry lists the HTTP status code and error code, followed by the error message and, where available, a description.

400 Spark.App.ExceedQuotaLimitation
  The requested resource exceeds the maximum limit: %s

400 Spark.App.InvalidAppTypeWhenSubmit
  The specified AppType is INVALID or NULL. Please refer to the documentation to correct the parameters for %s.
  Description: The specified AppType is invalid. Refer to the documentation to specify an AppType that is suitable for your business.

400 Spark.App.InvalidResourceSpec
  The requested resource type is not supported:\n %s

400 Spark.App.ParameterConflict
  Conflicting parameters submitted:\n %s

400 Spark.App.ResourceNegativeError
  The requested resource MUST be positive: %s

400 Spark.Config.InvalidAppFilePath
  The main driver file MUST be a valid OSS path. Current is %s.

400 Spark.Config.invalidConnectors
  The spark.adb.connectors configuration is invalid: %s

400 Spark.Config.InvalidDiskConfig
  The requested disk mount parameter is invalid: %s

400 Spark.Config.InvalidHostAliasConfig
  The requested host alias parameter %s is invalid. Example: 127.0.0.1 localhost local

400 Spark.Config.InvalidLogOssPath
  The OSS address for log storage is illegal: %s

400 Spark.Config.InvalidRoleArnFormat
  Configure RoleARN %s invalid format. It should match acs:ram::uid_number:role/role_name

400 Spark.Config.InvalidTimeZone
  Unable to parse time zone configuration %s : %s

400 Spark.Config.MainAppFileNotExists
  The main driver file is missing, [file] or [sqls] must be configured.

400 Spark.Config.OSSPathAccessError
  The configured OSS address %s cannot be accessed.

400 Spark.Config.OSSPathNotExists
  The OSS address %s requested does not exist or the permissions are insufficient.

400 Spark.Config.RoleArnVerifyFailed
  RoleARN parameter verification failed. Error msg: %s when verify RoleArn %s

400 Spark.Config.SecurityGroupNotFound
  The security group in the configuration does not exist or cannot be accessed. %s.
  Description: The configured security group does not exist or cannot be accessed due to insufficient permissions. %s.

400 Spark.Config.VswitchNotFound
  The vswitch in the configuration does not exist or cannot be accessed. %s.

400 Spark.InvalidParameter
  Invalid parameter value: %s
  Description: Incorrect input parameter: %s.

400 Spark.InvalidState
  The object of the operation is in an invalid state: %s
  Description: The operation object is in an invalid state.

400 Spark.RoleArn.Invalid
  %s is not found, or the RAM role has not been authorized.

400 Spark.SQL.BlankError
  Input sql can not be blank string.

400 Spark.SQL.MultipleSQLError
  Element in field [sqls] can not contain more than one sql statement: %s.

400 Spark.SQL.NotFoundExecutableSQLError
  No executable statements are submitted. Please check the input SQL.

400 Spark.SQL.NotFoundExecutableSQLError
  The execution part is not included in the current submitted SQL, please check the input SQL.

400 Spark.SQL.ParserError
  Submit spark app failed when parser SQL %s. Error message: %s.

403 Spark.Forbidden
  No permissions to access the resources: %s
  Description: Insufficient permissions to access the related resources. Information that you want to access: %s.

404 Spark.App.ContentNotFound
  The requested content %s of the Spark application is not found.

404 Spark.App.NotFound
  The Spark application %s is not found.

404 Spark.ObjectNotFound
  The object is not found. More information: %s

500 Spark.ServerError
  The Spark control component system encountered an error, please create a ticket to solve the problem or concat the supported engineer on duty. Error message: %s
  Description: An error occurred on the Spark control component system. Submit a ticket or contact technical support.

For a complete list of error codes, see Service error codes.