
AnalyticDB:Spark editor

Last Updated: Mar 28, 2026

The Spark editor is a browser-based development environment in the AnalyticDB for MySQL console where you create, configure, and run Spark batch, streaming, and SQL engine applications. You can view driver logs, submission details, and SQL execution logs from the same interface.

Prerequisites

Before you begin, configure the Spark log storage path:

Log on to the AnalyticDB for MySQL console. Find the cluster and click its cluster ID. In the left-side navigation pane, choose Job Development > Spark JAR Development, and then click Log Settings. Select the default path or enter a custom path. The custom path cannot be the root directory of an OSS bucket; it must include at least one folder level.
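For example, assuming a hypothetical bucket named my-bucket, the folder-level rule works as follows:

```
oss://my-bucket/              # invalid: the root directory of the bucket
oss://my-bucket/spark-logs/   # valid: includes at least one folder level
```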

Create and run a Spark application

  1. Log on to the AnalyticDB for MySQL console. In the upper-left corner, select a region. In the left-side navigation pane, click Clusters. On the Data Lakehouse Edition tab, find the cluster and click its cluster ID.

  2. In the left-side navigation pane, choose Job Development > Spark JAR Development.

  3. On the Spark JAR Development page, click the + icon to the right of Applications.

  4. In the Create Application panel, configure the following parameters.

    • Name: the name of the application or directory. File names are case-insensitive.
    • Type: select Application to create a file-based Spark template, or Directory to create a folder that organizes applications.
    • Parent Level: the parent directory for the file or folder.
    • Job Type: the type of Spark job. Select Batch for batch processing, Streaming for streaming applications, or SQL Engine for Spark distributed SQL engine workloads.
  5. Click OK.

  6. In the Spark editor, configure the application. See Overview for configuration details.

  7. Before running the application, select a job resource group and an application type in the editor. Then choose one of the following actions:

    • Click Save to save the application for later use.

    • Click Run Now to run the application immediately. The status updates in real time on the Applications tab.

    Note

    By default, no retry is performed after a failure. To configure retry behavior, set the spark.adb.maxAttempts and spark.adb.attemptFailuresValidityInterval parameters before running. See Spark application configuration parameters for details.
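The steps above can be illustrated with a minimal batch application template. This is a sketch only: the OSS paths, class name, arguments, and resource values are hypothetical placeholders, and the retry parameters from the note are shown in the conf section. See Spark application configuration parameters for the supported keys and values.

```json
{
  "file": "oss://my-bucket/jars/my-spark-app.jar",
  "className": "com.example.MySparkApp",
  "args": ["oss://my-bucket/input/", "oss://my-bucket/output/"],
  "conf": {
    "spark.executor.instances": 2,
    "spark.adb.maxAttempts": 3,
    "spark.adb.attemptFailuresValidityInterval": "1h"
  }
}
```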

Monitor a Spark application

On the Applications tab, search for an application by application ID. Use the following actions in the Actions column based on what you need to investigate.

• Logs: driver logs for the current application, or the execution log of SQL statements. Useful for debugging runtime errors.
• UI: the Spark UI for performance analysis and task-level diagnostics. Access has a validity period; if it expires, open the UI again.
• Details: submission details, including the log path, web UI URL, cluster ID, and resource group name. Useful for verifying how the application was submitted.
• More > Stop: stops the running application.
• More > History: the retry history for the current application.

To view retry history across all applications, click the Execution History tab.