
Realtime Compute for Apache Flink: Configure a deployment

Last Updated: Nov 02, 2023

You must configure a deployment before you start it. This topic describes how to configure a deployment.

Prerequisites

Procedure

  1. Log on to the Realtime Compute for Apache Flink console.

  2. On the Fully Managed Flink tab, find the workspace that you want to manage and click Console in the Actions column.

  3. In the left-side navigation pane, click Deployments. On the Deployments page, click the name of the desired deployment.

  4. In the upper-right corner of the desired section on the Configuration tab, click Edit.

    Note

    If you modify the basic configuration of the deployment, you must go back to the SQL Editor page to edit and deploy the deployment. After you click Edit in the upper-right corner of the Basic section, a message appears. If you want to edit the deployment, click OK in the message.

  5. Modify the configuration of the deployment.

    You can modify the deployment configuration in the Basic, Parameters, and Logging sections. These sections are described later in this topic.

  6. In the upper-right corner of the desired section, click Save.

Basic section

The items that you can configure vary based on the deployment type.

SQL deployment

You can write SQL code and configure the Engine Version, Additional Dependencies, Description, and Label parameters. For more information about the parameters, see Develop an SQL draft.

Note

After you click Edit in the upper-right corner of the Basic section, a message appears. If you want to modify the deployment configuration, click OK in the message. Then, you are redirected to the SQL Editor page to edit and deploy the deployment.

JAR deployment

You can configure the Engine Version, JAR Uri, Entry Point Class, Entry Point Main Arguments, Additional Dependencies, Description, Kerberos Name, and Label parameters. For more information about the parameters, see Create a JAR deployment.

Python deployment

You can configure the Engine Version, Python Uri, Entry Module, Entry Point Main Arguments, Python Libraries, Python Archives, Additional Dependencies, Description, Kerberos Name, and Label parameters. For more information about the parameters, see Create a Python deployment.
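
For reference, the following sketch shows what the basic configuration of a JAR deployment might look like. All values are hypothetical and only illustrate the expected format of each field; for example, JAR Uri is assumed to point to a JAR file that you uploaded or stored in Object Storage Service (OSS).

  Engine Version: vvr-8.0.1-flink-1.17                          # hypothetical engine version
  JAR Uri: oss://my-bucket/artifacts/my-job.jar                 # hypothetical location of the deployment JAR
  Entry Point Class: com.example.MyFlinkJob                     # hypothetical main class in the JAR
  Entry Point Main Arguments: --source kafka --sink hologres    # hypothetical arguments passed to the main class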

Parameters section

You can configure the following parameters in this section.

Checkpointing Interval

The interval at which a checkpoint is generated. If you do not configure this parameter, the checkpointing feature is disabled.

Checkpointing Timeout time

The timeout period of a checkpoint. Default value: 10. Unit: minutes. If a checkpoint is not completed within the time that is specified by this parameter, the checkpoint fails.

Min Interval Between Checkpoints

The minimum interval between two checkpoints. If the maximum number of concurrent checkpoints is 1, this parameter specifies the minimum amount of time that must elapse between two consecutive checkpoints.

State Expiration Time

If state data is retained for longer than the time that is specified by this parameter, the system automatically removes the expired state data. This way, disk space is released.
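
For reference, these checkpointing and state parameters correspond conceptually to standard Apache Flink configuration options. The mapping and the sample values below are assumptions for illustration only; in the console, you normally set these values in the fields described above rather than as raw configuration keys.

  execution.checkpointing.interval: 3min     # Checkpointing Interval (assumed equivalent key)
  execution.checkpointing.timeout: 10min     # Checkpointing Timeout, default of 10 minutes
  execution.checkpointing.min-pause: 1min    # Min Interval Between Checkpoints (assumed equivalent key)
  table.exec.state.ttl: 36h                  # State Expiration Time for SQL state (assumed equivalent key)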

Flink Restart Policy

If a task fails and the checkpointing feature is disabled, the JobManager is not restarted. If the checkpointing feature is enabled, the JobManager is restarted. Valid values (see the configuration sketch after this list):

  • Failure Rate: The JobManager is restarted if the number of failures within the specified interval exceeds the upper limit.

    If you select Failure Rate from the Flink Restart Policy drop-down list, you must set the Failure Rate Interval, Max Failures per Interval, and Delay Between Restart Attempts parameters.

  • Fixed Delay: The JobManager is restarted at a fixed interval.

    If you select Fixed Delay from the Flink Restart Policy drop-down list, you must set the Number of Restart Attempts and Delay Between Restart Attempts parameters.

  • No Restarts: The JobManager is not restarted. This is the default value.
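
For reference, the restart policies above correspond conceptually to the restart strategy settings in open source Apache Flink. The following sketch uses standard Flink configuration keys with sample values; treating them as equivalents of the console options is an assumption, so use the drop-down list and its related fields where they are available.

  # Failure Rate with sample values
  restart-strategy: failure-rate
  restart-strategy.failure-rate.failure-rate-interval: 5min    # Failure Rate Interval
  restart-strategy.failure-rate.max-failures-per-interval: 3   # Max Failures per Interval
  restart-strategy.failure-rate.delay: 10s                     # Delay Between Restart Attempts

  # Fixed Delay with sample values
  restart-strategy: fixed-delay
  restart-strategy.fixed-delay.attempts: 3                     # Number of Restart Attempts
  restart-strategy.fixed-delay.delay: 10s                      # Delay Between Restart Attempts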

Other Configuration

Other Flink settings, such as akka.ask.timeout: 10.
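
The value is entered as key: value pairs in the same style as the example. The following sketch assumes that the field accepts one pair per line; the keys and values are only samples, and support for a specific key depends on the engine version of the deployment.

  akka.ask.timeout: 10s
  taskmanager.numberOfTaskSlots: 2
  state.backend.incremental: true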

Logging section

You can configure the following parameters in this section.

Log Archiving

By default, Allow Log Archives is turned on. After you turn on Allow Log Archives in the Logging section, you can view the logs of a historical job on the Logs tab. For more information, see View the logs of a historical job.

Note
  • In Ververica Runtime (VVR) 3.X, only VVR 3.0.7 and later minor versions allow you to turn on Allow Log Archives for a deployment.

  • In VVR 4.X, only VVR 4.0.11 and later minor versions allow you to turn on Allow Log Archives for a deployment.

Log Archives Expires

By default, archived log files are retained for seven days.

Root Log Level

You can specify the following log levels. The levels are listed in ascending order of urgency.

  1. TRACE: records finer-grained information than DEBUG logs.

  2. DEBUG: records the status of the system.

  3. INFO: records important system information.

  4. WARN: records the information about potential issues.

  5. ERROR: records the information about errors and exceptions that occur.

Log Levels

Enter the name of the logger and the log level that you want to apply to it.
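
For example, you might raise the log level for your own packages while reducing noise from a chatty dependency. The logger names below are hypothetical, and the exact input format of the Log Levels setting is an assumption; conceptually, each entry maps a logger name to a level.

  com.example.myconnector: DEBUG   # hypothetical logger for your own code
  org.apache.kafka: WARN           # example of silencing a verbose dependency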

Logging Profile

You can set this parameter to default or Custom Template.