Realtime Compute for Apache Flink: Quick start with Flink SQL jobs

Last Updated: Mar 26, 2026

This guide walks through the end-to-end workflow for a Flink SQL job: create, write SQL, deploy, start, and verify results.

In this guide, you complete the following steps:

  1. Create a job

  2. Write the SQL job and view configuration information

  3. (Optional) Perform a deep check and debug the job

  4. Deploy the job

  5. Start the job and view the results

  6. (Optional) Stop the job

Prerequisites

Before you begin, ensure that you have:

Step 1: Create a job

  1. Navigate to the SQL job creation page.

    1. Log on to the Realtime Compute for Apache Flink console.

    2. Click Console in the Actions column for the target workspace.

    3. In the navigation pane on the left, choose Data Development > ETL.

  2. Click the new job icon, select New Stream Job, enter a File Name, and select an Engine Version.

    Realtime Compute for Apache Flink provides code and data synchronization templates. Each template covers a specific scenario with code examples and instructions, making it easier to learn Flink features and syntax. For more information, see Code Template and Data synchronization templates.
    File Name: The name of the job. Must be unique within the current project. Example: flink-test
    Engine Version: The Flink engine version for the job. Select a version tagged Recommended or Stable; these versions offer higher reliability and performance. For details, see Release notes and Engine versions. Example: vvr-8.0.8-flink-1.17


  3. Click Create.

Step 2: Write the SQL job and view configuration information

  1. Write the SQL job. Copy the following SQL into the SQL editor. This example uses the Datagen connector to generate a random data stream and the Print connector to print results to the development console. For a full list of supported connectors, see Supported connectors.

    INSERT INTO supports writing to a single sink or multiple sinks. For more information, see INSERT INTO statement.
    For production jobs, minimize the use of temporary tables. Instead, use tables that are registered in Data Management. For more information, see Data Management.
    -- Create a temporary source table named datagen_source.
    CREATE TEMPORARY TABLE datagen_source(
      randstr VARCHAR
    ) WITH (
      'connector' = 'datagen' -- Datagen connector
    );
    
    -- Create a temporary sink table named print_table.
    CREATE TEMPORARY TABLE print_table(
      randstr  VARCHAR
    ) WITH (
      'connector' = 'print',   -- Print connector
      'logger' = 'true'        -- Display the results in the console.
    );
    
    -- Truncate the randstr field and print the result.
    INSERT INTO print_table
    SELECT SUBSTRING(randstr, 0, 8) FROM datagen_source;
  2. Review the configuration tabs. On the tabs to the right of the SQL editor, view or upload additional configurations.

    More Configurations: Configure the Engine Version, Additional Dependencies (such as temporary functions), and Kerberos authentication. Engine version tags: Recommended (latest minor version of the latest major version), Stable (latest minor version of a major version still in service), Normal (other versions still in service), Deprecated (out of service). For details on version management, see Engine versions and Lifecycle policy. To enable Kerberos authentication, configure the registered Kerberos cluster and principal. See Register a Hive Kerberos cluster if needed.
    Code Structure: View the Data Flow or Tree Structure of your SQL statements.
    Version Information: View job version history and manage versions. See Manage job versions.
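The note in Step 2 mentions that INSERT INTO can write to multiple sinks. In Flink SQL, this is done by wrapping several INSERT statements in a STATEMENT SET so that they run as a single job. The following sketch reuses the datagen_source and print_table tables from the example above; the blackhole_table sink is hypothetical and uses the Blackhole connector, which simply discards rows.

    -- Hypothetical second sink: the Blackhole connector discards all rows.
    CREATE TEMPORARY TABLE blackhole_table(
      randstr VARCHAR
    ) WITH (
      'connector' = 'blackhole'
    );

    BEGIN STATEMENT SET;

    -- Both INSERT statements below run as one job.
    INSERT INTO print_table
    SELECT SUBSTRING(randstr, 0, 8) FROM datagen_source;

    INSERT INTO blackhole_table
    SELECT randstr FROM datagen_source;

    END;

A STATEMENT SET lets the planner share the common source scan across sinks, so datagen_source is read once rather than once per INSERT.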

Step 3 (Optional): Perform a deep check and debug the job

Run a deep check

A deep check verifies SQL semantics, network connectivity, and the metadata of tables referenced by the job. After the check, click SQL Optimization in the results area to review risk alerts and optimization suggestions.

  1. In the upper-right corner of the SQL editor, click Deep Check.

  2. In the Deep Check dialog box, click Confirm.

If the deep check times out, you may see the following error:

    The RPC times out maybe because the SQL parsing is too complicated. Please consider enlarging the `flink.sqlserver.rpc.execution.timeout` option in flink-configuration, which by default is `120 s`.

To resolve this, add the following line at the top of your job:

    SET 'flink.sqlserver.rpc.execution.timeout' = '600s';

Debug the job

The debug feature simulates a job run so you can check the output and verify the logic of your SELECT or INSERT statements, without writing data to any downstream sink table.

  1. In the upper-right corner of the SQL editor, click Debug.

  2. In the Debug dialog box, select a debug cluster and click Next. The debug cluster must be a session cluster using the same engine version as the SQL job and in the Running state. If no cluster is available, create one first. See Step 1: Create a session cluster.

  3. Configure the debugging data and click OK. For configuration details, see Step 2: Debug a job.

Step 4: Deploy the job

In the upper-right corner of the SQL editor, click Deploy. In the Deploy New Version dialog box, configure the parameters and click OK.

Set Deployment Target based on your environment:

Resource Queue: For production environments. Provides exclusive resources (not preempted) and resource isolation; suitable for long-running or high-priority jobs.
Session Cluster: For development and testing. Jobs share a JobManager (JM) for faster startup; suitable for development, testing, and lightweight jobs.
Important

Logs are not available for jobs running on a session cluster.

Step 5: Start the job and view the results

  1. In the navigation pane on the left, choose Operation Center > Job O&M.

  2. Click Start in the Actions column for the target job.

  3. Select Stateless Start and click Start. The job is running when its status shows Running. For details on startup parameters, see Start a job.

  4. View the computation results.

    1. Go to Operation Center > Job O&M and click the target job.

    2. On the Job Logs tab, click the Running Task Managers tab, then click a task in the Path, ID column.

    3. Click Logs and search for PrintSinkOutputWriter.

Step 6 (Optional): Stop the job

Stop and restart a job when:

  • The SQL code changes

  • WITH parameters are added or removed

  • The job version changes

  • The job cannot reuse its state

  • You want to start a new job

  • Parameters that do not take effect dynamically are updated

To stop the job:

  1. On the Operation Center > Job O&M page, click Stop in the Actions column for the target job.

  2. Click OK.

What's next