Realtime Compute for Apache Flink supports three job types to cover different stream processing needs. Use this page to identify the right job type and find resources to get started.
## Choose a job type
| Job type | Best for |
|---|---|
| Flink SQL | Real-time ETL (extract, transform, and load), real-time metric computation, multi-stream joins, streaming warehousing and lakehousing |
| Data ingestion with Flink CDC | Real-time database synchronization, data migration, and automatic table synchronization |
| DataStream API | Complex event processing (CEP), high-frequency external calls, complex window logic, and custom sources or sinks |
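As a sketch of what a Flink SQL job looks like, the following hedged example computes a per-minute metric over a stream. All table names, schemas, and connector choices are hypothetical placeholders, and the connector options are abbreviated; a real job must supply the full option set for your environment.

```sql
-- Hypothetical source table over a Kafka topic (options abbreviated).
CREATE TEMPORARY TABLE orders (
  order_id   BIGINT,
  amount     DECIMAL(10, 2),
  order_time TIMESTAMP(3),
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka'
  -- topic, bootstrap servers, format, and credentials go here
);

-- Hypothetical sink table (options abbreviated).
CREATE TEMPORARY TABLE order_totals (
  window_start TIMESTAMP(3),
  window_end   TIMESTAMP(3),
  total        DECIMAL(10, 2)
) WITH (
  'connector' = 'hologres'
  -- endpoint, table name, and credentials go here
);

-- Aggregate order amounts into tumbling one-minute event-time windows.
INSERT INTO order_totals
SELECT window_start, window_end, SUM(amount)
FROM TABLE(
  TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```

The same declarative pattern (source tables, sink tables, and an `INSERT INTO ... SELECT`) covers most real-time ETL and metric-computation jobs.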
## Supported connectors
Realtime Compute for Apache Flink supports over 30 upstream and downstream connectors, spanning databases, message queues, and data lakes.
- Upstream (source) examples: Kafka, MySQL CDC, Hologres, Simple Log Service (SLS)
- Downstream (sink) examples: MySQL, PostgreSQL, ClickHouse, Doris, StarRocks, Paimon, Object Storage Service (OSS)
For a full list, see Supported connectors.
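In Flink SQL, a connector is selected with the `'connector'` option in a table's `WITH` clause. As a minimal sketch, the hypothetical table below reads change events from a MySQL table through the MySQL CDC source connector; every host, credential, and name value is a placeholder.

```sql
-- Hypothetical CDC source table; replace the placeholder values with
-- the coordinates and credentials of your own MySQL instance.
CREATE TEMPORARY TABLE products (
  id   INT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector'     = 'mysql-cdc',
  'hostname'      = '<host>',
  'port'          = '3306',
  'username'      = '<user>',
  'password'      = '<password>',
  'database-name' = '<database>',
  'table-name'    = 'products'
);
```

Switching systems is typically a matter of changing the `'connector'` value and its accompanying options, not the query logic.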
## Get started
- **Flink SQL**: ETL, data aggregations, and lookup joins.
- **Data ingestion with Flink CDC**: real-time database synchronization and batch table ingestion.
- **DataStream API**: CEP, custom states, and complex job logic.
- **Typical scenarios**: end-to-end examples covering real-world architectures.
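As one illustration of the lookup joins mentioned above, the hedged Flink SQL fragment below enriches each streaming order with the dimension row current at processing time. The `orders` and `customers` tables are hypothetical: `orders` is assumed to declare a processing-time column (`proc_time AS PROCTIME()`), and `customers` is assumed to be backed by a lookup-capable connector such as MySQL.

```sql
-- Hypothetical lookup join: for each order, fetch the customer row
-- as of the order's processing time.
SELECT o.order_id, o.amount, c.city
FROM orders AS o
JOIN customers FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.customer_id = c.id;
```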