Deployment separates your development environment from production. A deployment doesn't affect running jobs and only takes effect after you start or restart the job. This topic covers how to deploy SQL, YAML, JAR, and Python jobs.
Prerequisites
Before you begin, ensure that you have:
Upload resources
Before you deploy a job, you must upload the required JAR packages, Python job files, or Python dependencies to the Flink development console.
1. Log on to the Realtime Compute for Apache Flink console.
2. Find the target workspace and click Console in the Actions column.
3. In the left navigation pane, click File Management.
4. Click Upload Resource and select the JAR package, Python job file, or Python dependency to upload.

   For Python API jobs, upload the official PyFlink JAR package. Download URLs: PyFlink V1.11 and PyFlink V1.12.
Deploy a job
The deployment steps vary by job type. Follow the steps for your job type.
1. Log on to the Realtime Compute for Apache Flink console.
2. Find the target workspace and click Console in the Actions column.
3. Follow the steps for your job type:
Deploy an SQL job
1. Go to Data Development > ETL and open your SQL job. For more information, see Job development map.
2. Click Deploy.
3. Configure the deployment parameters.

   Note: Session clusters don't support monitoring and alerting or auto-tuning. Use them only as a staging environment, not in production. For details, see Debug a job.

   | Parameter | Description |
   | --- | --- |
   | Description | (Optional) Enter a description. |
   | Job Tags | Tag the job so you can filter it by Tag Key and Tag Value on Operation Center > Job O&M. Maximum 3 tags per job. |
   | Deployment Target | Select a resource queue or a session cluster. For details, see Manage resource queues and Step 1: Create a session cluster. |
   | Skip deep check before deployment | Select this option to skip the pre-deployment deep check. |
4. Click OK.
On the Operation Center > Job O&M page, you can view the deployed SQL job and start it as needed.
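For reference, an SQL job of the kind deployed above can be as small as the following sketch. The table names are illustrative, and the datagen and print connectors stand in for your real source and sink:

```sql
-- Hypothetical source table that generates random rows.
CREATE TEMPORARY TABLE orders (
  order_id BIGINT,
  amount   DOUBLE
) WITH ('connector' = 'datagen');

-- Hypothetical sink table that prints rows to the TaskManager logs.
CREATE TEMPORARY TABLE order_sink (
  order_id BIGINT,
  amount   DOUBLE
) WITH ('connector' = 'print');

INSERT INTO order_sink SELECT order_id, amount FROM orders;
```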
Deploy a YAML job
YAML job deployment requires Ververica Runtime (VVR) 8.0.9 or later.
1. Go to Data Development > Data Ingestion and open your YAML job. For more information, see Develop a Flink CDC data ingestion job (public preview).
2. Click Deploy.
3. Configure the deployment parameters.

   | Parameter | Description |
   | --- | --- |
   | Description | (Optional) Enter a description. |
   | Job Tags | Tag the job so you can filter it by Tag Key and Tag Value on Operation Center > Job O&M. Maximum 3 tags per job. |
   | Deployment Target | Select a resource queue. For details, see Manage resource queues. |
   | Skip deep check before deployment | Select this option to skip the pre-deployment deep check. |
4. Click OK.
On the Operation Center > Job O&M page, you can view the deployed YAML job and start it as needed.
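A Flink CDC data ingestion job of the kind deployed above is defined as a YAML pipeline. A minimal sketch follows; the hostnames, credentials, table pattern, and the choice of MySQL source and Hologres sink are placeholders, not part of this procedure:

```yaml
source:
  type: mysql
  hostname: <mysql-host>
  port: 3306
  username: <user>
  password: <password>
  tables: app_db.\.*

sink:
  type: hologres
  endpoint: <hologres-endpoint>
  username: <user>
  password: <password>

pipeline:
  name: Sync app_db to Hologres
  parallelism: 2
```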
Deploy a JAR job
1. Go to Operation Center > Job O&M, then choose Deploy Job > JAR Job.
2. Configure the Basic settings.

   Note: Session clusters don't support monitoring and alerting or auto-tuning. Use them only as a staging environment, not in production. For details, see Debug a job.

   | Parameter | Description |
   | --- | --- |
   | Deployment Mode | Select Stream or Batch. |
   | Deployment Name | Enter a name for the job. |
   | Engine Version | Select a VVR version. Use a Recommended or Stable version for production. Version tags: Recommended (latest minor version of the latest major version), Stable (latest minor version of a major version still in service; includes bug fixes from previous versions), Normal (other minor versions still in service), EOS (end of service). For details, see Engine versions and Lifecycle policy. |
   | JAR URI | Select an uploaded file or upload a new one. VVR 8.0.6 and later can only access the OSS bucket bound to your Flink workspace. For Python API jobs, specify the official PyFlink JAR package. Download URLs: PyFlink V1.11 and PyFlink V1.12. |
   | Entry Point Class | The main class of your JAR program. Required if your JAR package doesn't specify a main class. For Python API jobs, set this to org.apache.flink.client.python.PythonDriver. |
   | Entry Point Main Arguments | Parameters passed to the main method. Maximum 1,024 characters. Avoid complex values with line breaks, spaces, or special characters; use additional dependency files for those instead. For Python API jobs, set this to -py /flink/usrlib/&lt;your-python-file&gt;.py. The /flink/usrlib/ prefix is required. For example, if your Python job file is word_count.py, enter -py /flink/usrlib/word_count.py. |
   | Additional Dependency Files | Extra files loaded into the /flink/usrlib/ directory of JobManager (JM) and TaskManager (TM) pods at runtime. Cannot be configured when Deployment Target is a session cluster. Specify files using any of these methods: (1) Select from previously uploaded files (recommended); upload via Resource Management in the left navigation pane. Files are stored at oss://&lt;bucket-name&gt;/artifacts/namespaces/&lt;namespace-name&gt;. (2) Enter the OSS path of the file, which must point to the OSS bucket associated with your Flink workspace. (3) Enter a URL to a file in an external storage system accessible to Realtime Compute for Apache Flink (public-read or no authentication required). Only URLs ending with a filename are supported, for example, http://xxxxxx/&lt;file&gt;. |
   | Deployment Target | Select a resource queue or a session cluster. For details, see Manage resource queues and Step 1: Create a session cluster. |
   | Description | (Optional) Enter a description. |
   | Job Tags | Tag the job for filtering on Job O&M. Maximum 3 tags per job. |
3. (Optional) Turn on More Settings to configure Kerberos authentication.

   | Parameter | Description |
   | --- | --- |
   | Kerberos Cluster | Select a Kerberos cluster. For details on creating one, see Create a Kerberos cluster. |
   | principal | The Kerberos principal (user or service) that uniquely identifies an identity in the Kerberos encryption system. |
4. Click Deploy.
On the Operation Center > Job O&M page, you can view the deployed JAR job and start it as needed.
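For a Python API job deployed through this JAR flow, the fields above combine as in the following sketch. The file name word_count.py is the hypothetical example from the parameter table; the job file itself is supplied as an additional dependency file so that it lands in /flink/usrlib/:

```
Engine Version:              a Recommended or Stable VVR version
JAR URI:                     the official PyFlink JAR uploaded via File Management
Entry Point Class:           org.apache.flink.client.python.PythonDriver
Entry Point Main Arguments:  -py /flink/usrlib/word_count.py
Additional Dependency Files: word_count.py
```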
Deploy a Python job
1. Go to Operation Center > Job O&M, then choose Deploy Job > Python Job.
2. Configure the Basic settings.

   Note: Session clusters don't support monitoring and alerting or auto-tuning. Use them only as a staging environment, not in production. For details, see Debug a job.

   Important: If your job uses JAR package dependencies, configure the pipeline.classpaths parameter after deployment to reference them. For details, see Use JAR package dependencies.

   | Parameter | Description |
   | --- | --- |
   | Deployment Mode | Select Stream or Batch. |
   | Deployment Name | Enter a name for the job. |
   | Engine Version | Select a VVR version. Use a Recommended or Stable version for production. Version tags: Recommended (latest minor version of the latest major version), Stable (latest minor version of a major version still in service), Normal (other minor versions still in service), EOS (end of service). For details, see Engine versions and Lifecycle policy. |
   | Python File Path | Select the Python job file. Accepts .py or .zip files. If Entry Module is blank, this must be a .py file. |
   | Entry Module | The entry module of the program, for example, example.word_count. Required when Python File Path is a .zip file. |
   | Entry Point Main Arguments | Job parameters. |
   | Python Libraries | Third-party Python packages added to the PYTHONPATH of the Python worker process, directly accessible in Python user-defined functions (UDFs). For details, see Use third-party Python packages. |
   | Python Archives | Archive files in ZIP format (.zip, .jar, .whl, .egg). Files are decompressed into the working directory of the Python worker process. For details, see Use a custom Python virtual environment and Use data files. If your archive file is mydata.zip, access its contents in a Python UDF as follows: open("mydata.zip/mydata/data.txt"). |
   | Additional Dependency Files | Python job files and dependent data files loaded into /flink/usrlib/ at runtime. Cannot be configured when Deployment Target is a session cluster. Specify files using any of these methods: (1) Select from previously uploaded files (recommended); upload via Resource Management in the left navigation pane. Files are stored at oss://&lt;bucket-name&gt;/artifacts/namespaces/&lt;namespace-name&gt;. (2) Enter the OSS path of the file, which must point to the OSS bucket associated with your Flink workspace. (3) Enter a URL to a file in an external storage system accessible to Realtime Compute for Apache Flink (public-read or no authentication required). Only URLs ending with a filename are supported, for example, http://xxxxxx/&lt;file&gt;. |
   | Deployment Target | Select a resource queue or a session cluster. For details, see Manage resource queues and Step 1: Create a session cluster. |
   | Description | (Optional) Enter a description. |
   | Job Tags | Tag the job for filtering on Job O&M. Maximum 3 tags per job. |
3. (Optional) Turn on More Settings to configure Kerberos authentication.

   | Parameter | Description |
   | --- | --- |
   | Kerberos Cluster | Select a Kerberos cluster. For details on creating one, see Create a Kerberos cluster. |
   | principal | The Kerberos principal (user or service) that uniquely identifies an identity in the Kerberos encryption system. |
4. Click Deploy.
On the Operation Center > Job O&M page, you can view the deployed Python job and start it as needed.
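The Python Archives lookup path can be sketched locally with the standard library. The mydata.zip layout mirrors the example in the parameter table; the explicit extraction step here stands in for what the Flink Python worker does automatically in its working directory:

```python
import os
import tempfile
import zipfile

# Build a sample archive shaped like the mydata.zip example above.
workdir = tempfile.mkdtemp()
archive_path = os.path.join(workdir, "mydata.zip")
with zipfile.ZipFile(archive_path, "w") as zf:
    zf.writestr("mydata/data.txt", "hello from the archive")

# Flink decompresses each archive into a directory named after the
# archive file inside the Python worker's working directory; this
# mimics that layout locally.
worker_dir = os.path.join(workdir, "worker-dir")
with zipfile.ZipFile(archive_path) as zf:
    zf.extractall(os.path.join(worker_dir, "mydata.zip"))

# Inside a UDF, the file is then reachable with the archive name as
# the leading path segment, relative to the working directory.
os.chdir(worker_dir)
with open("mydata.zip/mydata/data.txt") as f:
    content = f.read()
print(content)
```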