You can deploy and debug jobs in a session cluster. This improves JobManager resource utilization and speeds up job startup.
Fully managed Flink supports two cluster types: Per-Job Clusters and Session Clusters. Differences between the two cluster types:
- Per-Job Clusters: This is the default value. Each job runs its own JobManager, which isolates resources between jobs but leaves JobManager utilization low for jobs that process small amounts of data. Therefore, this type of cluster is suitable for jobs that consume a large number of resources or jobs that run in a continuous and stable manner.
- Session Clusters: Multiple jobs share the same JobManager, which increases the utilization of JobManager resources. Therefore, this type of cluster is suitable for jobs that consume few resources or jobs that start and stop frequently.
- You can configure multiple session clusters for each project. However, you can enable Use for SQL Editor previews for only one session cluster.
- You cannot enable the Autopilot feature for a session cluster.
- A session cluster consumes resources based on the resource configuration that you specify when you create it, regardless of whether any jobs run in the cluster.
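The per-job versus session distinction mirrors the deployment modes of open-source Apache Flink. As a rough analogy only (the commands below target a stock Flink distribution on YARN, not the fully managed console, which handles cluster lifecycle for you):

```shell
# Session mode: start one long-running cluster, then submit multiple jobs
# to it; all jobs share the same JobManager.
./bin/start-cluster.sh
./bin/flink run examples/streaming/TopSpeedWindowing.jar

# Per-job mode: each submission spins up a dedicated cluster with its own
# JobManager, giving resource isolation at the cost of utilization.
./bin/flink run -t yarn-per-job examples/streaming/TopSpeedWindowing.jar
```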
Create a session cluster
- Log on to the Realtime Compute for Apache Flink console.
- On the Fully Managed Flink tab, find the workspace that you want to manage and click Console in the Actions column.
- In the left-side navigation pane, choose .
- In the upper-right corner of the page, click Create Session Cluster.
- Configure the parameters. The following table describes the parameters.
| Section | Parameter | Description |
| --- | --- | --- |
| Standard | Name | The name of the cluster. |
| Standard | State | The desired state of the cluster. Valid values:<br>- STOPPED: The cluster is stopped after it is configured, and the jobs in the cluster are also terminated.<br>- RUNNING: The cluster keeps running after it is configured. |
| Standard | Use for SQL Editor previews | Specifies whether to use this session cluster for SQL previews.<br>Note: You can enable Use for SQL Editor previews for only one session cluster. If you turn on this switch for the current cluster, the setting is disabled on any other cluster for which this feature was previously enabled. |
| Standard | Label key | You can configure labels for jobs in the Labels section. This allows you to find a job on the Overview page in an efficient manner. |
| Standard | Label value | N/A. |
| Configuration | Engine Version | Valid values: 1.10, 1.11, 1.12, and 1.13.<br>Note: For Python API jobs, you must select 1.11 or later. |
| Configuration | Flink Restart Strategy Configuration | Valid values:<br>- No Restarts: Jobs are never restarted.<br>- Fixed Delay: Jobs are restarted at a fixed interval. If you select this option, you must also configure Number of Restart Attempts and Delay between Restart Attempts.<br>- Failure Rate: Jobs are restarted as long as a failure rate is not exceeded. If you select this option, you must also configure Failure Rate Interval, Max Failures per Interval, and Delay between Restart Attempts.<br>Note: If you leave this parameter empty, the default Apache Flink restart strategy is used. In this case, if a task fails and checkpointing is disabled, the job is not restarted. If checkpointing is enabled, the job is restarted. |
| Configuration | Additional Configuration | Other Flink settings. |
| Resources | Number of Task Managers | By default, the value is the same as the parallelism. |
| Resources | Job Manager CPUs | Default value: 1. |
| Resources | Job Manager Memory | Minimum value: 1 GiB. We recommend that you use GiB or MiB as the unit. For example, you can set this parameter to 1024 MiB or 1.5 GiB. |
| Resources | Task Manager CPUs | Default value: 1. |
| Resources | Task Manager Memory | Minimum value: 1 GiB. We recommend that you use GiB or MiB as the unit. For example, you can set this parameter to 1024 MiB or 1.5 GiB. |
| Logging | Root Log Level | Valid values: TRACE, DEBUG, INFO, WARN, and ERROR. |
| Logging | Logger Level | The log level of a specified logger. |
| Logging | Logging Profile | The log template. You can use the system template or configure a custom template. |
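For reference, the three restart strategies correspond to the `restart-strategy` options of open-source Apache Flink. The fragment below is an assumption about the underlying engine configuration (in the console, you set these values through the form fields rather than a configuration file):

```yaml
# Fixed Delay: retry a fixed number of times, waiting between attempts.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3   # Number of Restart Attempts
restart-strategy.fixed-delay.delay: 10 s   # Delay between Restart Attempts

# Failure Rate: restart as long as failures stay below a rate threshold.
# restart-strategy: failure-rate
# restart-strategy.failure-rate.failure-rate-interval: 5 min  # Failure Rate Interval
# restart-strategy.failure-rate.max-failures-per-interval: 3  # Max Failures per Interval
# restart-strategy.failure-rate.delay: 10 s                   # Delay between Restart Attempts

# No Restarts: never restart a failed job.
# restart-strategy: none
```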
- Click Create Session Cluster. After the session cluster is created, you can select the created session cluster from the Deployment Target drop-down list when you create a DataStream job or deploy an SQL job.