
Realtime Compute for Apache Flink:Run and debug connectors locally

Last Updated: Nov 05, 2025

You can debug a program that uses Realtime Compute for Apache Flink connectors on your local machine. This helps you quickly verify code correctness, identify and resolve issues, and avoid the cost of repeatedly deploying to the cloud for testing.

Background information

When you debug a Flink program that uses Realtime Compute for Apache Flink connectors in IntelliJ IDEA, the JVM may fail to find a connector-related class. For example, debugging a program that uses the MaxCompute connector may throw the following exception:

Caused by: java.lang.ClassNotFoundException: com.alibaba.ververica.connectors.odps.newsource.split.OdpsSourceSplitSerializer
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)

This error occurs because specific runtime classes are missing in the default connector JAR.

To add the missing classes, perform the following steps.

Procedure

Step 1: Add dependencies

  1. Download the connector uber JAR package that contains runtime classes from the Maven central repository.

    For example, if the version of the ververica-connector-odps dependency of the MaxCompute connector is 1.17-vvr-8.0.11-1, you need to download the ververica-connector-odps-1.17-vvr-8.0.11-1-uber.jar package from the directory of the Maven central repository to your on-premises directory.

  2. When you obtain the execution environment, set the pipeline.classpaths parameter to the path of the uber JAR package.

    If multiple connector dependencies exist, separate the package paths with semicolons (;). For example, you can set this parameter to file:///path/to/a-uber.jar;file:///path/to/b-uber.jar. On Windows, you must include the drive letter in each path, such as file:///D:/path/to/a-uber.jar;file:///E:/path/to/b-uber.jar. The following sample code shows the configuration for a DataStream program:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    Configuration conf = new Configuration();
    // Replace the path with the absolute path of the downloaded uber JAR package.
    conf.setString("pipeline.classpaths", "file:///path/to/connector-uber.jar");
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);

    The following sample code shows the configurations for a Table API program:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    Configuration conf = new Configuration();
    // Replace the path with the absolute path of the downloaded uber JAR package.
    conf.setString("pipeline.classpaths", "file:///path/to/connector-uber.jar");
    EnvironmentSettings envSettings =
        EnvironmentSettings.newInstance().withConfiguration(conf).build();
    TableEnvironment tEnv = TableEnvironment.create(envSettings);
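When several uber JARs are involved, the semicolon-separated value for pipeline.classpaths can be assembled programmatically instead of concatenated by hand. The following sketch (the ClasspathBuilder class and buildClasspaths helper are illustrative, not part of any Flink API) uses Path.toUri() so that the file:// scheme and Windows drive letters are produced correctly:

```java
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative helper: joins local JAR paths into the semicolon-separated
// URI list expected by the pipeline.classpaths parameter.
public class ClasspathBuilder {
    static String buildClasspaths(List<Path> jars) {
        // Path.toUri() yields file:///path/to/a.jar on Linux/macOS and
        // file:///D:/path/to/a.jar on Windows, including the drive letter.
        return jars.stream()
                .map(p -> p.toUri().toString())
                .collect(Collectors.joining(";"));
    }

    public static void main(String[] args) {
        // prints file:///path/to/a-uber.jar;file:///path/to/b-uber.jar
        System.out.println(buildClasspaths(List.of(
                Path.of("/path/to/a-uber.jar"),
                Path.of("/path/to/b-uber.jar"))));
    }
}
```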
Important
  • Before you upload the compiled JAR to Realtime Compute for Apache Flink, you must remove the pipeline.classpaths configuration.

  • To debug a Flink program that uses the MaxCompute connector on-premises and that runs on a framework version older than 1.17-vvr-8.0.11-1, you must use the 1.17-vvr-8.0.11-1 connector uber JAR. When building the program JAR for cloud deployment, you can include an older version of the connector uber JAR, but ensure you remove any connector options only supported by newer framework versions.

  • To debug a program that uses the MySQL connector, you must also configure dependencies in Maven as instructed in Debug a DataStream program that includes the MySQL connector.

  • During on-premises debugging, you must ensure network connectivity between your Flink application and upstream/downstream services. You have two options:

    • Run the upstream/downstream services locally on the same network as Flink.

    • To use cloud services as data sources or destinations, verify that Flink can access them over the Internet. Also, add the public IP address of your device to the cloud services' whitelists.
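Before starting a debug run, a quick TCP probe can confirm the connectivity described above. The following is a minimal sketch (the ConnectivityCheck class, host, and port are hypothetical examples, not part of any connector API):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative pre-flight check: verifies that an upstream/downstream
// service endpoint is reachable before a local debug run.
public class ConnectivityCheck {
    static boolean canReach(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            // Attempt a plain TCP connection within the given timeout.
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical endpoint: a local MySQL instance on the default port.
        System.out.println(canReach("127.0.0.1", 3306, 2000));
    }
}
```

If the check returns false for a cloud endpoint, verify Internet access and confirm that your public IP address is on the service's whitelist.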

Step 2: Configure the ClassLoader JAR package that is required to run the job

To enable Flink to load the runtime classes of the connector, add the ClassLoader JAR package to the configuration:

  1. Download the ClassLoader JAR package that matches your VVR version.

  2. Open the program file in IntelliJ IDEA.

  3. Click the green icon to the left of the entry class to expand the menu.


  4. Select Modify Run Configuration....


  5. In the window that appears, click Modify options.


  6. In the Add Run Options drop-down list, select Modify classpath in the Java section.

    The Modify classpath section appears in the window.

  7. In the Modify classpath section, click the + icon, choose Include, and select the downloaded ClassLoader JAR.

  8. Save the configuration.

    Note

    If an error message indicates missing common Flink classes, click Modify options and select Add dependencies with "provided" scope to classpath.

Debug Table API jobs

Starting with VVR 11.1, Realtime Compute for Apache Flink connectors are no longer fully compatible with Apache Flink's flink-table-common package. You may encounter the following error when running jobs that use the Table API:

java.lang.ClassNotFoundException: org.apache.flink.table.factories.OptionUpgradabaleTableFactory

Solution: Update your pom.xml file. Replace org.apache.flink:flink-table-common with com.alibaba.ververica:flink-table-common, ensuring you use the correct version.
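For example, the replacement dependency in pom.xml might look like the following (the ${vvr.flink.version} property is a placeholder; substitute the version that matches your VVR release):

```xml
<!-- Replaces org.apache.flink:flink-table-common with the Ververica artifact. -->
<dependency>
  <groupId>com.alibaba.ververica</groupId>
  <artifactId>flink-table-common</artifactId>
  <!-- Placeholder version; use the one that matches your VVR release. -->
  <version>${vvr.flink.version}</version>
</dependency>
```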
