
Realtime Compute for Apache Flink:Settings of DataStream connectors

Last Updated:Mar 12, 2025

This topic describes how to use DataStream connectors to read and write data.

Dependencies and usage of DataStream connectors

To read and write data with a DataStream connector, use the type of DataStream connector that matches your storage system to connect to Realtime Compute for Apache Flink. You can develop drafts with the Ververica Runtime (VVR) DataStream connectors that are published in the Maven central repository.

Important

Connector interfaces and parameters may change in future versions. We recommend that you use only the connectors that are documented as providing DataStream APIs in Supported connectors.

DataStream connectors are protected by commercial encryption. As a result, an error is reported if you run or debug a deployment that includes a DataStream connector in an on-premises environment. For more information, see Run or debug a Flink deployment that includes a connector in an on-premises environment.

You can use a connector in one of the following ways:

(Recommended) Upload the JAR files of the connectors and reference them as additional dependency files

  1. Add the required connectors as project dependencies to the pom.xml file of the Maven project for your draft and specify <scope>provided</scope> for the dependencies.

    Note
    • ${vvr.version} indicates the engine version that corresponds to the runtime environment of the draft. If the engine version of your draft is vvr-8.0.9-flink-1.17, the Flink version is 1.17.2. We recommend that you use the latest version of VVR. For more information, see Engine updates.

    • In this topic, the JAR files of the connectors are referenced as additional dependency files. In this case, you do not need to package the connectors into the JAR file of your draft. Therefore, you must specify <scope>provided</scope> for the connector dependencies.

    <!-- MySQL connector dependency -->
    <dependency>
        <groupId>com.alibaba.ververica</groupId>
        <artifactId>ververica-connector-mysql</artifactId>
        <version>${vvr.version}</version>
        <scope>provided</scope>
    </dependency>
  2. Add the public packages flink-connector-base and ververica-connector-common as project dependencies if you want to develop new connectors or use extended features of the existing connectors.

    <!-- Basic dependency of the public interface of Flink connectors -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-base</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <!-- Basic dependency of the public interface of Alibaba Cloud connectors -->
    <dependency>
        <groupId>com.alibaba.ververica</groupId>
        <artifactId>ververica-connector-common</artifactId>
        <version>${vvr.version}</version>
    </dependency>
  3. Deploy the draft and add the JAR packages to the Additional Dependencies field in the Create Jar Deployment dialog box. For more information, see Create a JAR deployment. You can upload the JAR file of a connector that you develop or a connector provided by Realtime Compute for Apache Flink.

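The ${vvr.version} and ${flink.version} placeholders in the dependency snippets are typically defined once in the <properties> section of the pom.xml file. The following is a minimal sketch; the Flink version 1.17.2 corresponds to engine version vvr-8.0.9-flink-1.17 as stated above, while the VVR version string is illustrative and must be verified against the artifact versions in the Maven central repository:

```xml
<properties>
    <!-- Flink version that corresponds to engine version vvr-8.0.9-flink-1.17 -->
    <flink.version>1.17.2</flink.version>
    <!-- Illustrative value; check the Maven central repository for the exact
         ververica-connector artifact version that matches your engine version -->
    <vvr.version>1.17-vvr-8.0.9</vvr.version>
</properties>
```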

Package the connector as a project dependency into the JAR file of your draft

  1. Add the required connectors as project dependencies in the pom.xml file of the Maven project for your draft. For example, you can add the Kafka and MySQL connectors as dependencies.

    Note
    • ${vvr.version} indicates the engine version that corresponds to the runtime environment of the draft. If the engine version of your draft is vvr-8.0.9-flink-1.17, the Flink version is 1.17.2. We recommend that you use the latest version of VVR. For more information, see Engine updates.

    • In this topic, the connectors are packaged as project dependencies into the JAR file of your draft. In this case, the default scope (compile) must be used for the dependencies.

            <!-- Kafka connector dependency -->
            <dependency>
                <groupId>com.alibaba.ververica</groupId>
                <artifactId>ververica-connector-kafka</artifactId>
                <version>${vvr.version}</version>
            </dependency>
            <!-- MySQL connector dependency -->
            <dependency>
                <groupId>com.alibaba.ververica</groupId>
                <artifactId>ververica-connector-mysql</artifactId>
                <version>${vvr.version}</version>
            </dependency>
  2. Add the public packages flink-connector-base and ververica-connector-common as project dependencies if you want to develop new connectors or use extended features of the existing connectors.

            <!-- Basic dependency of the public interface of Flink connectors -->
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-connector-base</artifactId>
                <version>${flink.version}</version>
            </dependency>
            <!-- Basic dependency of the public interface of Alibaba Cloud connectors -->
            <dependency>
                <groupId>com.alibaba.ververica</groupId>
                <artifactId>ververica-connector-common</artifactId>
                <version>${vvr.version}</version>
            </dependency>
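When the connectors are declared with the default compile scope as described above, the draft JAR must bundle them. A common way to do this is the Maven Shade Plugin; the following is a minimal sketch (the plugin version shown is an example):

```xml
<build>
    <plugins>
        <!-- Bundle compile-scoped connector dependencies into the draft JAR -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.4.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

Running mvn package then produces a single JAR that contains your draft code together with the connector classes.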
Important

To prevent conflicts between JAR package dependencies, take note of the following points:

  • Use ${flink.version} to specify the Flink version that corresponds to the runtime environment of the draft. This version must match the Flink version that corresponds to the VVR version of your deployment, which is displayed on the Deployments page. For example, if the engine version of your draft is vvr-8.0.9-flink-1.17, the Flink version is 1.17.2. We recommend that you use the latest version of VVR. For more information, see Engine updates.

  • Specify <scope>provided</scope> for Apache Flink dependencies. This mainly applies to the non-connector dependencies in the org.apache.flink group whose names start with flink-.

  • Call only methods that are explicitly marked with @Public or @PublicEvolving in the source code of Apache Flink. Alibaba Cloud Realtime Compute for Apache Flink only ensures compatibility with these methods.

  • If you use the DataStream APIs supported by the built-in connectors of Realtime Compute for Apache Flink, use the built-in dependencies of Realtime Compute for Apache Flink.
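As an example of the second point above, a typical provided-scope declaration for a core Apache Flink dependency looks like the following sketch (flink-streaming-java is used for illustration):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>${flink.version}</version>
    <!-- provided: supplied by the Realtime Compute runtime, not bundled into the draft JAR -->
    <scope>provided</scope>
</dependency>
```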

References