
Realtime Compute for Apache Flink: Develop an SQL draft

Last Updated: May 18, 2023

This topic describes how to develop an SQL draft in the console of fully managed Flink and the limits on draft development.

Description

When you write code for an SQL draft, you can use catalogs, variables, user-defined functions (UDFs), and custom connectors. For more information about the usage scenarios and methods, see the related topics.
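
For example, tables that are registered in a catalog can be referenced by their fully qualified names without declaring temporary tables in the draft. The following statement is only an illustrative sketch; my_catalog, my_db, and the table names are placeholders.

    -- Illustrative sketch: my_catalog, my_db, and the table names are placeholders. 
    -- Tables that are registered in a catalog can be referenced by their fully 
    -- qualified names without CREATE TEMPORARY TABLE statements in the draft. 
    INSERT INTO my_catalog.my_db.result_table
    SELECT id, name
    FROM my_catalog.my_db.source_table;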

Limits

  • SQL drafts that are published by using the SQL editor support only Flink 1.11, Flink 1.12, and Flink 1.13.

  • For more information about the upstream and downstream storage that is supported by Flink SQL, see Upstream and downstream storage.

Procedure

  1. Log on to the console of fully managed Flink and create a draft.

    1. Log on to the Realtime Compute for Apache Flink console.

    2. On the Fully Managed Flink tab, find the workspace that you want to manage and click Console in the Actions column.

    3. In the left-side navigation pane, click SQL Editor. In the upper-left corner of the SQL Editor page, click New.

    4. On the SQL Scripts tab of the New Draft dialog box, click Blank Stream Draft.

      Fully managed Flink provides various code templates and supports data synchronization. Each code template covers a specific scenario and provides sample code and instructions. You can click a template to quickly learn about the related features and syntax of Flink and to implement your business logic. For more information, see Code templates and Data synchronization templates.

    5. Click Next.

    6. In the New Draft dialog box, configure the parameters of the draft. The following list describes the parameters.

      • Name: the name of the draft that you want to create.

        Note: The draft name must be unique in the current project.

      • Location: the folder in which the code file of the draft is saved. You can also click the New Folder icon to the right of an existing folder to create a subfolder.

      • Engine Version: the engine version of Flink that is used by the draft. For more information about engine versions, version mappings, and important time points in the lifecycle of each version, see Engine version.

    7. Click Create.

  2. Write DDL and DML statements.

    Sample statements:

    -- Create a source table named datagen_source. 
    CREATE TEMPORARY TABLE datagen_source(
      name VARCHAR
    ) WITH (
      'connector' = 'datagen'
    );
    
    -- Create a result table named blackhole_sink. 
    CREATE TEMPORARY TABLE blackhole_sink(
      name VARCHAR
    ) WITH (
      'connector' = 'blackhole'
    );
    
    -- Insert data from the source table datagen_source into the result table blackhole_sink. 
    INSERT INTO blackhole_sink
    SELECT
      name
    FROM datagen_source;
  3. On the right side of the SQL Editor page, click a tab and view or enter the configuration information based on your business requirements. The following list describes the tabs.

    Configurations

    • Engine Version: the version of the Flink engine that you selected when you created the draft. We recommend that you use the latest version. For more information about engine versions, see Engine versions and Lifecycle policies.

      Important In VVR 3.0.3 and later versions, Ververica Platform (VVP) allows you to run SQL jobs that use different engine versions at the same time. VVR 3.0.3 uses Flink 1.12. If the engine version of your job is Flink 1.12 or earlier, update the engine version based on the engine version that your job uses:

      • Flink 1.12: Stop and then restart your job. The system then automatically updates the engine version of your job to vvr-3.0.3-flink-1.12.

      • Flink 1.11 or Flink 1.10: Manually update the engine version of your job to vvr-3.0.3-flink-1.12 or vvr-4.0.8-flink-1.13, and then restart the job. Otherwise, a timeout error occurs when you start the job.

    • Additional Dependencies: the additional dependencies that are used in the draft, such as JAR files that contain temporary functions. A sketch of how a temporary function might be registered is provided after this list.

    Structure

    • Flow Diagram: allows you to view the directions in which data flows.

    • Tree Diagram: allows you to view the sources from which data is processed.

    Versions

    You can view the engine version of the deployment. For more information about the operations that you can perform in the Actions column in the Draft Versions panel, see Manage job versions.
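
    The following statements are only a sketch of how a temporary function that is provided by an additional dependency might be registered and used in an SQL draft. The function name MyScalarUdf and the class name com.example.udf.MyScalarUdf are placeholders, and the JAR file that contains the class is assumed to be added as an additional dependency.

    -- Sketch only: MyScalarUdf and com.example.udf.MyScalarUdf are placeholders. 
    -- The JAR file that contains the implementation class is assumed to be added 
    -- as an additional dependency of the draft. 
    CREATE TEMPORARY FUNCTION MyScalarUdf AS 'com.example.udf.MyScalarUdf';
    
    -- Use the temporary function in a query. 
    INSERT INTO blackhole_sink
    SELECT MyScalarUdf(name)
    FROM datagen_source;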

  4. Click Save.

  5. Click Validate.

    The system checks the SQL semantics of the draft and the metadata of the tables that are used in the draft.

  6. Optional: Click Debug.

    You can enable the debugging feature to simulate the running of a deployment, check the output, and verify the business logic of SELECT and INSERT statements. This feature improves development efficiency and reduces the risk of poor data quality. For more information, see Debug a deployment.
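
    Independently of the debugging feature, a quick way to inspect the output of a query while you develop a draft is to write the results to a print sink, which logs each row. The following statements are only a sketch and reuse the datagen_source table from the preceding sample.

    -- Sketch: write results to a print sink to inspect the output rows in the logs. 
    CREATE TEMPORARY TABLE print_sink (
      name VARCHAR
    ) WITH (
      'connector' = 'print'
    );
    
    INSERT INTO print_sink
    SELECT name
    FROM datagen_source;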

  7. Click Deploy.

    After draft development and the syntax check are complete, you can deploy the draft to publish it to the production environment.