This topic describes how to integrate and use connectors to read and write data in Flink DataStream programs.
Download connectors and configure project dependencies
To read and write data with the DataStream API, use the connectors provided by Realtime Compute for Apache Flink or custom connectors. Built-in connectors are available in the Maven central repository.
For detailed information on DataStream API-supported connectors, see Details of supported connectors.
Because DataStream connectors are protected by commercial encryption, we strongly recommend that you debug your job locally before deploying it to a production environment. See Run and debug connectors locally.
You can integrate a connector into your project in one of the following ways:
(Recommended) Upload connector JARs as additional dependencies
Add a connector dependency in your project's `pom.xml` file and specify `<scope>provided</scope>`. Example:

Note
- Engine version: `${vvr.version}` is the Ververica Runtime (VVR) engine version for your job, and `${flink.version}` is the corresponding Apache Flink version.
- Scope: Specify `<scope>provided</scope>`, because this method uploads the connector JAR as an additional dependency.

```xml
<!-- MySQL connector dependency -->
<dependency>
    <groupId>com.alibaba.ververica</groupId>
    <artifactId>ververica-connector-mysql</artifactId>
    <version>${vvr.version}</version>
    <scope>provided</scope>
</dependency>
```

To extend built-in connectors or use custom connectors, include the dependencies `flink-connector-base` or `ververica-connector-common`:

```xml
<!-- Basic dependency of the public interface of Apache Flink's connectors -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>${flink.version}</version>
</dependency>
<!-- Basic dependency of the public interface of Realtime Compute for Apache Flink's connectors -->
<dependency>
    <groupId>com.alibaba.ververica</groupId>
    <artifactId>ververica-connector-common</artifactId>
    <version>${vvr.version}</version>
</dependency>
```

Create a JAR deployment, and add the connector JAR to Additional Dependencies in the Create Jar Deployment dialog box. Both custom and built-in connector JARs are supported.
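Both integration methods reference the `${vvr.version}` and `${flink.version}` placeholders, which are typically defined once in the `<properties>` section of `pom.xml`. The sketch below uses illustrative version strings only; the exact VVR artifact version for your engine should be taken from the Maven central repository:

```xml
<properties>
    <!-- Illustrative values: replace with the versions matching your job's engine.
         Look up the exact VVR connector artifact version in the Maven central
         repository; flink.version must be the corresponding Apache Flink version. -->
    <vvr.version>1.17-vvr-8.0.9</vvr.version>
    <flink.version>1.17.2</flink.version>
</properties>
```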

Package a connector JAR into your project's JAR
Add connector dependencies in your project's `pom.xml` file. The code below adds the Kafka and MySQL connectors as project dependencies:

Note
- Engine version: `${vvr.version}` is the Ververica Runtime (VVR) engine version for your job, and `${flink.version}` is the corresponding Apache Flink version.
- Scope: Use the default scope `compile`, because this method packages the connector JARs into your project JAR.

```xml
<!-- Kafka connector dependency -->
<dependency>
    <groupId>com.alibaba.ververica</groupId>
    <artifactId>ververica-connector-kafka</artifactId>
    <version>${vvr.version}</version>
</dependency>
<!-- MySQL connector dependency -->
<dependency>
    <groupId>com.alibaba.ververica</groupId>
    <artifactId>ververica-connector-mysql</artifactId>
    <version>${vvr.version}</version>
</dependency>
```

To extend built-in connectors or use custom connectors, include the dependencies `flink-connector-base` or `ververica-connector-common`:

```xml
<!-- Basic dependency of the public interface of Apache Flink's connectors -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>${flink.version}</version>
</dependency>
<!-- Basic dependency of the public interface of Realtime Compute for Apache Flink's connectors -->
<dependency>
    <groupId>com.alibaba.ververica</groupId>
    <artifactId>ververica-connector-common</artifactId>
    <version>${vvr.version}</version>
</dependency>
```
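When connector dependencies use the default `compile` scope, the project must be built as an uber JAR that bundles them. One common way to do this is with the Maven Shade plugin; the configuration below is a minimal, illustrative sketch (the plugin version and filter rules are assumptions to adapt to your build):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.4.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <!-- Exclude signature files from shaded dependencies;
                             stale signatures would invalidate the uber JAR. -->
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

Running `mvn package` then produces a single job JAR that contains the connector classes alongside your own code.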
To prevent dependency conflicts, take note of the following:
- Flink version consistency: Ensure that `${flink.version}` in your dependencies matches the Apache Flink version that corresponds to your job's VVR version. For instance, if your engine version is vvr-8.0.9-flink-1.17, set `${flink.version}` to 1.17.2. We recommend using the latest VVR version. For more information, see Engine updates.
- Dependency scope: Always set `<scope>provided</scope>` for Apache Flink dependencies. This primarily applies to non-connector dependencies whose names start with `flink-` in the `org.apache.flink` group.
- Public API usage: When extending built-in connectors or using custom connectors, call only methods explicitly marked with `@Public` or `@PublicEvolving` in the Apache Flink source code. Realtime Compute for Apache Flink guarantees compatibility only with these methods.
- APIs: When using Realtime Compute for Apache Flink's built-in connectors, prefer the APIs they support over those of the corresponding open-source connectors.
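As an illustration of the dependency-scope rule above, a core Apache Flink API dependency such as `flink-streaming-java` should be declared with `provided` scope so it is not packaged into the job JAR (a sketch; the artifact shown is one common example):

```xml
<!-- Core Flink API dependency: supplied by the runtime, so do not package it -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>
```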