
Realtime Compute for Apache Flink:Start a job

Last Updated: Mar 26, 2026

After deploying a job, start it on the Job O&M page. You must also start a job to resume a stopped job or to apply updated parameter settings that do not take effect dynamically.

Prerequisites

Before you begin, make sure that a job has been deployed.

Limitations

Only stream jobs support start options.

Usage notes

If you start a job from the latest state or a specified state, the system runs a state compatibility check. A job with state incompatibilities may fail to start or produce unexpected results. See Flink state compatibility reference.

Start a job

  1. Log on to the Flink development console as a member with the owner role.

  2. At the top of the page, select the name of the target project.


  3. Go to Operation Center > Job O&M, then select Stream Job or Batch Job from the drop-down list.


  4. In the Actions column for the target job, click Start.

  5. (Optional) For a stream job, configure the start options.

    • Stateless Start — Use for a new job or when you cannot reuse a state.

      • Specify Source Table Start Time

        Select Specify Source Table Start Time and specify a time. The read time configured here takes priority over the startTime set in the job's Data Definition Language (DDL) code. Supported connectors: Kafka, Simple Log Service (SLS), DataHub, ApsaraMQ for RocketMQ, Hologres, Paimon data lakehouse for streaming, and MySQL.

        Note:
        • startTime takes effect only when a job is started from scratch. It does not apply when the job is started from a system checkpoint or a snapshot.
        • Not all connectors support startTime. Check whether the connector's WITH parameters include startTime. See Simple Log Service (SLS) WITH parameters for an example.
        • Kafka versions earlier than 0.11 may not be supported due to connector compatibility issues.

      • Configure Automatic Tuning

        Turn on this switch and select a tuning mode:
        • Intelligent tuning automatically adjusts resource allocation based on usage: it scales down when usage is low and scales up when usage reaches a threshold. See Enable and configure intelligent tuning.
        • Scheduled tuning applies resource configurations on a time-based schedule. A schedule can contain multiple resource-to-time mappings. See Configure and apply a scheduled tuning plan.
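      The startTime behavior described above corresponds to the startTime WITH parameter in the source table DDL. The following is a minimal sketch, assuming an SLS source table; the table schema, project, and logstore names are placeholders, and the exact parameter names should be verified against the connector's WITH parameters documentation:

      ```sql
      -- Hypothetical SLS source table; project and logstore values are placeholders.
      -- When the job is started from scratch with Specify Source Table Start Time set,
      -- the time chosen in the console overrides the startTime value below.
      CREATE TEMPORARY TABLE sls_source (
        `order_id` VARCHAR,
        `amount`   DOUBLE,
        `ts`       TIMESTAMP(3)
      ) WITH (
        'connector' = 'sls',
        'project'   = 'my-project',            -- placeholder
        'logstore'  = 'my-logstore',           -- placeholder
        'startTime' = '2024-01-01 00:00:00'    -- read position when starting from scratch
      );
      ```

      Starting the job from a system checkpoint or a snapshot ignores both this DDL value and the console override, because the read position is recovered from state.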

    • Stateful Start — Select a recovery policy and optionally enable automatic tuning.

      • Recover from the Latest State

        Recovers the job from the latest snapshot or system checkpoint. If changes to the SQL code, Flink runtime parameters, or database engine version are detected, click Check next to State Compatibility Check before proceeding. See Compatibility for the meaning of compatibility results and recommended actions.

      • Recover from a Specified State

        Select a specific snapshot to restore from. To create a snapshot before stopping a job, see Manage job state sets.

      • Recover from Another Job

        Specify the target job and its snapshot. Snapshots can be shared between jobs, but the job states must be compatible. See Manage job state sets.

      • Allow Non-Restored State (JAR jobs only)

        By default, the Flink system tries to match the entire snapshot with the job being submitted. If modifications to the job change the operator state, the job may not be recoverable. In this case, turn on this switch: the system skips states that cannot be matched, allowing the job to start. See Allow Non-Restored State.

      • Configure Automatic Tuning

        Same options as in Stateless Start: intelligent tuning or scheduled tuning.
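      For reference, the Allow Non-Restored State switch matches the allowNonRestoredState behavior in open-source Apache Flink, which can be set in the job configuration. The following is a sketch, assuming a self-managed Flink deployment (Flink 1.13 or later); the savepoint path is a placeholder:

      ```yaml
      # Open-source Flink configuration equivalent (assumption, not the console's
      # internal mechanism). Skips savepoint state that cannot be mapped to any
      # operator in the new job graph instead of failing the start.
      execution.savepoint.ignore-unrestored-state: true
      execution.savepoint.path: /savepoints/savepoint-abc123   # placeholder path
      ```

      Use this option with care: skipped state is silently dropped, so the job starts but may lose the data that was held in the unmatched operators.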

  6. Click Start.

To check the current status of the job, go to Operation Center > Job O&M. See View the running status of a job.

What's next