This topic describes common use cases and best practices for data ingestion to help you quickly build data synchronization pipelines.
The data ingestion feature of Realtime Compute for Apache Flink uses Flink Change Data Capture (CDC) and YAML-based configurations to synchronize data from a source to a destination. Explore the solutions for the following use cases:
Data Ingestion with Flink CDC describes the modules of the YAML script that defines a data ingestion job, and Case studies: Build data ingestion jobs for complex business scenarios provides specific configuration examples that you can reference during application development. A simplified sketch of such a YAML script is shown below.
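The exact parameters of a YAML script depend on the source and sink connectors that you use. The following is a minimal sketch assuming a MySQL source and a Hologres sink; all hostnames, credentials, database names, and table patterns are hypothetical placeholders that you must replace with your own values.

```yaml
# Hypothetical sketch of a data ingestion YAML script.
# Replace every placeholder in angle brackets with your own values.
source:
  type: mysql
  hostname: <mysql-host>
  port: 3306
  username: <mysql-username>
  password: <mysql-password>
  # Regular expression that matches the tables whose changes are captured.
  tables: app_db.\.*

sink:
  type: hologres
  endpoint: <hologres-endpoint>
  dbname: <hologres-database>
  username: <hologres-username>
  password: <hologres-password>

pipeline:
  # Job-level settings, such as the job name.
  name: Sync MySQL to Hologres
```

In this structure, the source and sink blocks declare the connectors and their connection parameters, and the pipeline block holds job-level settings such as the job name.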
After you review the preceding configuration examples, you can quickly configure, submit, and run a data ingestion job that meets your requirements.
For solutions to runtime issues, see FAQ about Data Ingestion with Flink CDC.