This guide walks you through deploying and running a Flink JAR job on Realtime Compute for Apache Flink. You'll complete the workflow for both stream and batch modes, from uploading your JAR to verifying results.
Prerequisites
Before you begin, ensure that you have:
- Flink console access: If you're using a Resource Access Management (RAM) user or RAM role, confirm that you have the required permissions. See Permission management.
- A Flink workspace: Create one if you haven't already. See Activate Realtime Compute for Apache Flink.
Step 1: Develop a JAR package
The Flink console does not include a development environment. Build and package your code locally, then upload the resulting JAR.
For guidance on configuring dependencies, using connectors, and reading additional files from Object Storage Service (OSS), see Develop a JAR job.
Match the Flink version used in local development with the engine version you select in Step 3. Also check the scope of your dependencies: packages that the Flink runtime already provides should use provided scope so they are not bundled into your JAR.
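For example, a dependency that the Flink runtime already supplies can be marked as provided in your Maven pom.xml so it is excluded from the packaged JAR (version shown is illustrative; match it to your selected engine version):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>1.17.2</version>
    <!-- Supplied by the Flink runtime; do not bundle into the fat JAR -->
    <scope>provided</scope>
</dependency>
```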
To follow along with this guide, download the test resources below. They contain a word-frequency counter that runs in both stream and batch mode.
- FlinkQuickStart-1.0-SNAPSHOT.jar — the test JAR package
- FlinkQuickStart.zip — the source code (optional, if you want to inspect or recompile it)
- Shakespeare — the input data file
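The core word-frequency logic of the test JAR can be sketched in plain Java as follows. This is an illustrative sketch only: the actual job uses Flink's streaming and batch APIs, and the class and method names here are hypothetical.

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of the word-frequency counting the test JAR applies
// to the Shakespeare file. The real job runs this logic through Flink
// operators; these names are hypothetical.
class WordCountSketch {

    // Split a line into lowercase words and tally each one into `counts`.
    static void countWords(String line, Map<String, Long> counts) {
        for (String token : line.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1L, Long::sum);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Long> counts = new TreeMap<>();
        countWords("To be or not to be", counts);
        System.out.println(counts.get("to")); // 2
        System.out.println(counts.get("be")); // 2
    }
}
```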
Step 2: Upload the JAR package and data file
1. Log on to the Realtime Compute for Apache Flink console.
2. In the Actions column for your workspace, click Console.
3. In the left navigation pane, click File Management.
4. Click Upload Resource, then upload both the JAR package and the Shakespeare data file. For details on file storage paths, see File management.
Step 3: Deploy a JAR job
Stream job
1. Go to Job O&M > Operation Center, click Deploy Job, and select JAR Job.
2. Fill in the deployment parameters. The table below lists the key parameters; use the example values to follow this guide. For all other parameters, see Deploy a job.
Note: Ververica Runtime (VVR) 8.0.6 and later support access only to the OSS bucket that was attached when you activated the Flink workspace. Access to other buckets is not supported.
Important: Session clusters do not support monitoring and alerts, auto-tuning configuration, or production workloads. Use session clusters for development and testing only. See Debug jobs.
| Parameter | Description | Example |
| --- | --- | --- |
| Deployment mode | Select Stream for a streaming job. | Stream |
| Deployment name | A name for this job deployment. | flink-streaming-test-jar |
| Engine version | The Flink engine version used to run the job. Select a version tagged Recommended or Stable for higher reliability. See Release notes and Engine versions. | vvr-8.0.9-flink-1.17 |
| JAR URI | The JAR file to run. Select the FlinkQuickStart-1.0-SNAPSHOT.jar file you uploaded in Step 2. If the file is already in File Management, select it directly; no re-upload is needed. | — |
| Entry point class | The main class of the program. If your JAR does not declare a main class in its manifest, enter the full class name. The test JAR contains both stream and batch code, so you must specify the stream entry point explicitly. | org.example.WordCountStreaming |
| Entry point main arguments | Runtime arguments passed to the main method. Enter the OSS path of the Shakespeare file you uploaded. You can copy the full path from File Management. | --input oss://<your-attached-OSS-bucket-name>/artifacts/namespaces/<project-name>/Shakespeare |
| Deployment target | The resource queue or session cluster to run the job on. See Manage resource queues and Create a session cluster. | default-queue |
3. Click Deploy.
Batch job
1. Go to Job O&M > Operation Center, click Deploy Job, and select JAR Job.
2. Fill in the deployment parameters for the batch job. For all other parameters, see Deploy a job.
Important: Session clusters do not support monitoring and alerts, auto-tuning configuration, or production workloads. Use session clusters for development and testing only. See Debug jobs.
| Parameter | Description | Example |
| --- | --- | --- |
| Deployment mode | Select Batch for a batch job. | Batch |
| Deployment name | A name for this job deployment. | flink-batch-test-jar |
| Engine version | The Flink engine version used to run the job. Select a version tagged Recommended or Stable. See Release notes and Engine versions. | vvr-8.0.9-flink-1.17 |
| JAR URI | The JAR file to run. Select the FlinkQuickStart-1.0-SNAPSHOT.jar file you uploaded in Step 2. | — |
| Entry point class | The main class of the program. The test JAR contains both stream and batch code, so specify the batch entry point explicitly. | org.example.WordCountBatch |
| Entry point main arguments | Runtime arguments passed to the main method. Enter the OSS paths of both the input file (Shakespeare) and the output file; copy the input file path from File Management. For the output file, specify only the full path and file name; you do not need to create the file in OSS beforehand. Keep the output file in the same directory as the input file. | --input oss://<your-attached-OSS-bucket-name>/artifacts/namespaces/<project-name>/Shakespeare --output oss://<your-attached-OSS-bucket-name>/artifacts/namespaces/<project-name>/batch-quickstart-test-output.txt |
| Deployment target | The resource queue or session cluster to run the job on. See Manage resource queues and Create a session cluster. | default-queue |
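The --input and --output values are passed straight to the job's main(String[] args). A minimal parser for such "--key value" pairs might look like the sketch below; the actual test JAR may parse arguments differently (for example, with Flink's ParameterTool), so treat this as illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: parse "--key value" pairs from main(String[] args),
// as used for the --input and --output arguments in the tables above.
class ArgParser {

    // Walk the array two entries at a time, mapping "--key" to its value.
    static Map<String, String> parse(String[] args) {
        Map<String, String> params = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i += 2) {
            if (args[i].startsWith("--")) {
                params.put(args[i].substring(2), args[i + 1]);
            }
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse(new String[] {
            "--input", "oss://bucket/artifacts/namespaces/project/Shakespeare",
            "--output", "oss://bucket/artifacts/namespaces/project/out.txt"
        });
        System.out.println(p.get("input"));
        System.out.println(p.get("output"));
    }
}
```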
3. Click Deploy.
Step 4: Start the job and view results
Stream job
1. In Job O&M > Operation Center, find your stream job and click Start in the Actions column.
2. Select Stateless Start and click Start. For other start options, see Start a job.
3. When the job status changes to Running, open the TaskManager log file that ends with .out and search for "shakespeare" to view the word-frequency results.
Batch job
1. In Job O&M > Operation Center, find your batch job and click Start.
2. In the Start Job dialog box, click Start. For other start options, see Start a job.
3. When the job status changes to Finished, log on to the OSS console and open the output file at oss://<your-attached-OSS-bucket-name>/artifacts/namespaces/<project-name>/batch-quickstart-test-output.txt.
Note: The TaskManager .out log displays a maximum of 2,000 records. Because of this limit, the stream job and batch job may show different numbers of result records. See Print for details.
Step 5: (Optional) Stop the job
If you modify a job and want the changes to take effect, you must redeploy the job, then stop and restart it. Stop and restart a job when you make changes such as:
- Modifying the JAR code
- Adding or removing WITH parameters
- Changing the job version
- Starting a completely new job
- Updating parameters that don't take effect dynamically
- Making changes after which the job cannot reuse its state
For instructions, see Stop a job.
What's next
- Resource allocation: Configure job resources in Basic (coarse-grained) or Expert (fine-grained) mode before starting the job, or adjust them after the job is running. See Configure job resources.
- Dynamic updates: Update Flink job parameters without stopping the job to reduce business interruptions. See Dynamic scaling and parameter updates.
- Log configuration: Configure log levels and route different log levels to separate outputs. See Configure job log outputs.
- SQL jobs: Follow a quick example to learn the complete development flow for Flink SQL jobs. See Quick Start for Flink SQL jobs.
- Real-time data warehouse: Build a real-time data warehouse with Hologres. See Build a real-time data warehouse with Hologres.
- Streaming data lakehouse: Build a streaming data lakehouse with Apache Paimon and StarRocks. See Build a streaming data lakehouse with Paimon and StarRocks.