
Hologres: Import using Flink

Last Updated: Mar 21, 2026

Alibaba Cloud Realtime Compute for Apache Flink is an enterprise-grade, high-performance platform for real-time big data processing built on Apache Flink. Hologres is tightly integrated with Flink. This integration allows you to write and query streaming data in real time, efficiently building a real-time data warehouse.

Service types

Realtime Compute for Apache Flink does not store data. It processes data from external storage systems and supports the following data storage types:

  • Source Table

    A source table provides input data for a Flink job. When a Hologres table is used as a source table, data is imported in batch mode, not streaming mode. Hologres performs a full table scan, sends the data to a downstream destination, and the job completes.
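As a minimal sketch, a Hologres batch source can be declared with the Hologres connector in Flink SQL. All table, field, and credential names below are placeholders, and the WITH options follow the commonly documented Hologres connector parameters; check the connector reference for your version:

```sql
-- Declare a Hologres table as a batch source (illustrative names).
CREATE TEMPORARY TABLE orders_source (
  order_id   BIGINT,
  amount     DECIMAL(10, 2),
  order_time TIMESTAMP
) WITH (
  'connector' = 'hologres',
  'endpoint'  = '<hologres-endpoint>:<port>',
  'dbname'    = '<database-name>',
  'tablename' = 'orders',
  'username'  = '<access-key-id>',
  'password'  = '<access-key-secret>'
);

-- Hologres performs a full table scan; the job finishes once the scan completes.
INSERT INTO downstream_sink
SELECT * FROM orders_source;
```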

  • Dimension Table

    A dimension table is typically used for point lookups by key. Therefore, when you use a Hologres table as a dimension table, we recommend using Row-oriented Storage. The JOIN condition must use the complete primary key of the table.
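A lookup join against such a dimension table can be sketched as follows. The names are illustrative; the join uses Flink's standard FOR SYSTEM_TIME AS OF syntax, and the join condition covers the complete primary key (user_id here):

```sql
-- Row-oriented Hologres table used for point lookups (illustrative names).
CREATE TEMPORARY TABLE user_dim (
  user_id   BIGINT,
  user_name STRING,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'hologres',
  'endpoint'  = '<hologres-endpoint>:<port>',
  'dbname'    = '<database-name>',
  'tablename' = 'users',
  'username'  = '<access-key-id>',
  'password'  = '<access-key-secret>'
);

-- The JOIN condition must use the complete primary key of the dimension table.
SELECT o.order_id, o.amount, u.user_name
FROM orders AS o
  JOIN user_dim FOR SYSTEM_TIME AS OF o.proctime AS u
  ON o.user_id = u.user_id;
```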

  • Result Table

    A result table receives and stores the output data from Flink computations. It provides read and write interfaces for downstream consumption.
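Writing results follows the same pattern: declare the Hologres table as a sink and INSERT INTO it. Again, the names below are placeholders, not a definitive schema:

```sql
-- Hologres result (sink) table (illustrative names).
CREATE TEMPORARY TABLE order_stats_sink (
  user_id      BIGINT,
  order_count  BIGINT,
  total_amount DECIMAL(18, 2),
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'hologres',
  'endpoint'  = '<hologres-endpoint>:<port>',
  'dbname'    = '<database-name>',
  'tablename' = 'order_stats',
  'username'  = '<access-key-id>',
  'password'  = '<access-key-secret>'
);

-- Continuously write aggregation results into Hologres.
INSERT INTO order_stats_sink
SELECT user_id, COUNT(*), SUM(amount)
FROM orders
GROUP BY user_id;
```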

Realtime Compute for Apache Flink integrates deeply with Hologres to offer the following enterprise-level advanced features:

  • Consumption of Hologres Binary Logs

    This feature lets you consume change logs from Hologres tables using the message queue pattern.
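In Flink SQL this typically amounts to declaring the source table with the Binlog option enabled. The 'binlog' and 'cdcMode' option names below reflect commonly documented Hologres connector parameters; treat them as assumptions and verify them against the connector reference for your version:

```sql
-- Subscribe to the change log of a Hologres table (illustrative names).
CREATE TEMPORARY TABLE orders_binlog_source (
  order_id BIGINT,
  amount   DECIMAL(10, 2)
) WITH (
  'connector' = 'hologres',
  'endpoint'  = '<hologres-endpoint>:<port>',
  'dbname'    = '<database-name>',
  'tablename' = 'orders',
  'username'  = '<access-key-id>',
  'password'  = '<access-key-secret>',
  'binlog'    = 'true',  -- consume the Binlog instead of scanning the table
  'cdcMode'   = 'true'   -- interpret Binlog records as a CDC changelog
);
```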

  • Flink Catalog

    You can import Hologres metadata as a catalog in Flink. This allows you to read Hologres metadata directly from the Fully Managed Flink console without manually registering tables. This capability improves development efficiency and ensures schema accuracy.
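Creating and using a Hologres catalog can be sketched as below. The 'type' = 'hologres' option and the catalog name are assumptions based on the usual catalog DDL shape; the remaining options are the same placeholder connection parameters as above:

```sql
-- Register Hologres metadata as a Flink catalog (illustrative names).
CREATE CATALOG holo_catalog WITH (
  'type'     = 'hologres',
  'endpoint' = '<hologres-endpoint>:<port>',
  'dbname'   = '<database-name>',
  'username' = '<access-key-id>',
  'password' = '<access-key-secret>'
);

-- Tables in the database can then be referenced without manual DDL, for example:
USE CATALOG holo_catalog;
```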

  • Schema Evolution

    Fully Managed Flink supports schema evolution. When Flink reads JSON data, it can automatically parse data types and create corresponding table columns, enabling dynamic data model evolution.

The following table describes the Flink service types that Hologres supports and their features.

| Service type | Source table | Result table | Dimension table | Hologres binlog | Flink catalog | Schema evolution | Description |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Semi-managed Flink | Supports row-oriented and column-oriented storage. For Binlog source tables, row-oriented or hybrid row-column storage is recommended. | Supports row-oriented and column-oriented storage. | Row-oriented or hybrid row-column storage is recommended. | Supported | Supported | Supported | Uses the EMR Studio development platform. |
| Blink in exclusive mode (discontinued) | Supports row-oriented and column-oriented storage. For Binlog source tables, row-oriented or hybrid row-column storage is recommended. | Supports row-oriented and column-oriented storage. | Row-oriented or hybrid row-column storage is recommended. | Hologres V0.8 supports only row-oriented storage; V0.9 and later support both. Row-oriented storage is recommended. | Not supported | Not supported | Uses the Bayes development platform. We recommend Fully Managed Flink instead. |
| Apache Flink V1.10 | Not supported | Supports row-oriented and column-oriented storage. | Not supported | Not supported | Not supported | Not supported | - |
| Apache Flink V1.11 and later | Not supported | Supports row-oriented and column-oriented storage. | Supported; row-oriented storage is recommended. | Not supported | Not supported | Not supported | The Hologres connector code has been open-sourced since Apache Flink V1.11. For details, see alibabacloud-hologres-connectors. |
| Apache Flink V1.13 and later | Supported | Supports row-oriented and column-oriented storage. | Supported; row-oriented storage is recommended. | Not supported | Not supported | Not supported | The Hologres connector code has been open-sourced since Apache Flink V1.11. For details, see alibabacloud-hologres-connectors. |

Hologres connector release notes

Each release below is listed with its VVR version, the corresponding Flink version, and the compatible Hologres versions, followed by the update information and references.

VVR 11.6 (Flink 1.20; Hologres 3.2.x, 4.0.x, 4.1.x)

Source table:

  • Added the LATEST_OFFSET startup mode to consume Binlog data from the latest offset.

  • Added support for the VARCHAR array type in Binlog subscriptions.

Catalog:

  • Hologres Catalog now exposes global indexes and prefix scan keys as catalog indexes.

General:

  • Added a JDBC probe request cache to prevent SQL Gateway from timing out in multi-table scenarios.

References: Hologres

VVR 11.5 (Flink 1.20; Hologres 3.2.x, 4.0.x, 4.1.x)

Source table:

  • Added the scan.binlog.prefer.physical-column.over.metadata parameter to prioritize the physical column when its name conflicts with a metadata column.

  • Batch source tables no longer perform a reshuffle by default.

Dimension table:

  • Added a lookup validation for columnar tables. You must set the lookup.read.column-table.enabled parameter to true to use a columnar table as a dimension table.

General:

  • Added support for Access Key V4 (AKV4) authentication.

Bug fixes:

  • Fixed a NullPointerException (NPE) that occurred when a DELETE record was received during a check-and-put operation.

  • Updated the holo-client to fix incorrect column reads when column pruning is enabled for Binlog.

References: Hologres

VVR 11.4 (Flink 1.20; Hologres 3.2.x, 4.0.x, 4.1.x)

Sink table:

  • Added the sink.ignore-null-when-update-by-expr.enabled parameter to ignore null values during updates that use an insert expression.

  • Added support for insert conflict expressions (conflict expr) for custom conflict handling.

  • The stream copy mode now supports the binary row format, improving write performance.

Dimension table:

  • Added support for filter pushdown in dimension tables.

Bug fixes:

  • Updated the holo-client to fix a bug that prevented one-to-many dimension table connections from closing when a job stops.

References: Hologres

VVR 11.3 (Flink 1.20; Hologres 3.1.x, 3.2.x, 4.0.x)

Source table:

  • Added support for data compression and column pruning in Binlog consumption, reducing network traffic and memory usage.

  • Added support for filter pushdown in Binlog, reducing unnecessary data transfer.

Sink table:

  • Fixed a conflict between the partition suffix and dynamic partitions during creation.

  • Added support for writing to generated columns.

General:

  • Connections now automatically switch to the frontend (FE) when you select a data type that the fixed FE does not support.

  • Updated the holo-client to fix connection failures that occurred when a database name contained @warehouse.

Bug fixes:

  • Fixed an issue where the connector could not recover from a checkpoint if the table had been dropped.

  • Fixed an issue where the partition state LSN was incorrectly initialized to 0 for empty shards when consuming from a partitioned parent table.

  • Fixed a failure that occurred when reading RoaringBitmap data through JDBC Binlog.

References: Hologres

VVR 11.2 (Flink 1.20; Hologres 3.1.x, 3.2.x, 4.0.x)

Sink table:

  • Added a table-level setting, sink.not-generate-binlog.enabled, to prevent Binlog generation during writes. This prevents Binlog consumption loops.

  • The dirty data policy now applies only to actual dirty data exceptions.

General:

  • Table metadata is no longer accessed when creating a HologresDynamicTableSink, accelerating job submission.

Bug fixes:

  • Fixed a backward compatibility issue with Binlog and upsert source configurations.

  • Fixed a backward compatibility issue with Hologres configuration parameters.

  • Fixed compatibility issues with deprecated parameters.

  • Fixed an NPE that occurred when writing a TEXT array that contained null elements.

References: Hologres

VVR 11.1 (Flink 1.20; Hologres 3.1.x, 3.2.x)

Source table:

  • Added support for specifying partition values to subscribe to the Binlog of a Hologres partitioned table.

Sink table:

  • Added support for date-formatted partitions.

Dimension table:

  • Added metrics for cache hits and misses.

General:

  • The connector now automatically selects the optimal connection mode based on known issues.

  • Fixed an issue where options with non-standard formatting in SQL hints failed to override catalog options.

References: Hologres

VVR 11.0 (Flink 1.20; Hologres 3.1.x, 3.2.x)

General:

  • Removed all code and dependencies related to RPC and HoloHub. The connector now exclusively uses JDBC mode. Parameter names have been refactored.

Source table:

  • Source tables for full and incremental data: After a full snapshot read is complete, incremental consumption starts from the current maximum LSN to prevent data loss.

References: Hologres

VVR 8.0.11 (Flink 1.17; Hologres 2.1.x, 2.2.x, 3.0.x)

Source table:

  • Added support for reading the Binlog of partitioned tables.

  • Added support for metadata columns in source tables.

  • The decimal type in JDBC Binlog now uses the scale from the Flink type.

  • Source tables for full and incremental data: After a snapshot read is complete, incremental consumption now starts from the current maximum LSN.

Sink table:

  • Added a check-and-put capability for conditional writes.

  • Added support for aggressive flushing, reducing data visibility latency.

  • The copy write mode now supports the TIME data type.

  • Added the max cell buffer size parameter for the copy write mode.

  • Added an idle-session-timeout in the copy write mode to prevent prolonged idle connections.

General:

  • Changed the default value of the remove-u0000-in-text.enabled parameter to true.

  • Added support for state compatibility in upgrade and downgrade scenarios.

  • The factory no longer validates Binlog parameters when a catalog dimension table is used, which prevents false positive errors.

  • Jobs now retry only three times before failing fast during deployment to prevent long waits.

Bug fixes:

  • Updated the holo-client to fix a JDBC URL parsing issue.

  • Fixed an issue where JDBC Binlog consumption for source tables for full and incremental data started from LSN+1 after a checkpoint.

  • Fixed a type normalization error in CTAS scenarios when a drop column and a type change occurred simultaneously.

References: Hologres

VVR 8.0.9~8.0.10 (Flink 1.17; Hologres 2.1.x, 2.2.x, 3.0.x)

  • Fixed a potential deadlock issue when a new client is registered in a shared connection pool.

  • The table ID is no longer forcibly checked when a Binlog consumption job resumes from a saved state.

References: Hologres

VVR 8.0.8 (Flink 1.17; Hologres 2.1.x, 2.2.x)

Sink table:

  • Added the sink.delete-strategy parameter to provide more options for handling UPDATE_BEFORE records, complementing the existing ignoredelete option.

References: Hologres

VVR 8.0.7 (Flink 1.17; Hologres 2.1.x)

Dimension table:

  • Fixed an issue where frequent metadata fetching for dimension tables with many fields caused job deployment timeouts.

General:

  • Fixed an insufficient permissions error that occurred when different tables used different users within a shared connection pool.

References: Hologres

VVR 8.0.6 (Flink 1.17; Hologres 2.1.x)

Source table:

  • The connector now automatically switches from HoloHub to JDBC mode for Hologres V2.1 or later, as HoloHub mode is deprecated in that version. For more information, see Replicate binlog with Flink or Blink.

General:

  • Added support for the type-mapping.timestamp-converting.legacy parameter to correctly read and write the Flink TIMESTAMP_LTZ data type. For more information, see Hologres.

VVR 8.0.5 (Flink 1.17; Hologres 2.0.x, 2.1.x)

Source table:

  • For Hologres V2.1 and later, you no longer need to create slots to consume Binlog data over JDBC. For more information, see Consume Binlog via JDBC. As a result, starting from this version, publications and slots are no longer automatically created if the Hologres instance is V2.1 or later.

Sink table:

  • A new deduplication.enabled parameter is added. The default value is true. When this parameter is set to false, the result table can skip deduplication during the aggregation and writing process. This feature is useful for scenarios such as the full replay of upstream CDC streams.

  • Tables without primary keys now support bulk load writes, which consume fewer Hologres resources than the previous JDBC copy method.

General:

  • Added support for enabling encryption in transit by using the connection.ssl.mode and connection.ssl.root-cert.location parameters.

  • Added a timeout parameter for internal JDBC connections to prevent client connections from becoming unresponsive in scenarios such as unexpected server restarts.

VVR 8.0.4 (Flink 1.17; Hologres 2.0.x, 2.1.x)

Source table:

  • Fixed an issue where a residual publication could prevent Binlog consumption after a table rebuild. The connector now automatically deletes the old publication.

General:

  • Hologres dimension tables and sink tables in the same job now share a connection pool, which increases the effective connection limit.

VVR 8.0.3 (Flink 1.17; Hologres 2.0.x, 2.1.x)

General:

  • Regardless of the Hologres instance version, dimension tables and sink tables no longer support the RPC mode. If you select the RPC mode, it is automatically switched to the jdbc_fixed mode. We recommend that you upgrade your instance if it is an early version.

References: Hologres

VVR 6.0.7 (Flink 1.15; Hologres 1.3.x, 2.0.x)

  • Source table:

    Added compatibility with Hologres V2.0. If the connector detects a connection to a Hologres instance of V2.0 or later, it automatically switches the HoloHub Binlog mode to the JDBC Binlog mode.

  • Dimension table:

    Added compatibility with Hologres V2.0. If the connector detects a connection to a Hologres instance of V2.0 or later, it automatically switches the RPC mode to the jdbc_fixed mode.

  • Sink table:

    • Added compatibility with Hologres V2.0. If the connector detects a connection to a Hologres instance of V2.0 or later, it automatically switches the RPC mode to the jdbc_fixed mode.

    • Added support for partial column updates. You can insert only the fields that are declared in the Flink INSERT statement. This feature simplifies wide-table merge scenarios.

  • General:

    When a record conversion exception occurs, the connector now logs the source data and conversion result to help troubleshoot dirty data issues.

  • Bug fixes:

    • Fixed an issue where using the same connectionPoolName for different instances or databases in the same job did not raise an error.

    • Fixed a null pointer exception in version 6.0.6 that occurred when a string type in a dimension table had a null value.

References: Hologres

VVR 6.0.6 (Flink 1.15; Hologres 1.3.x)

Source table:

  • The slot name parameter is no longer required when you consume Hologres Binlog data in JDBC mode. Default slots can be created to allow smoother switches from HoloHub mode.

  • The new enable_filter_push_down parameter is added. Batch source tables no longer push down filter conditions by default. Set this parameter to true to enable filter pushdown.

References: Hologres

VVR 6.0.5 (Flink 1.15; Hologres 1.3.x)

  • General: When a job starts, all parameter information is printed to the TaskManager log for easier troubleshooting.

  • CTAS/CDAS: Added a tolerant mode for field data types. In this mode, if a data type change occurs in the source, the change is considered successful as long as the original and new types can be normalized to the same type.

  • Hologres Catalog: Enhanced the ALTER TABLE syntax to support modifying the properties of Hologres physical tables, including changing table names, adding columns, renaming columns, and modifying column comments.

VVR 6.0.3~6.0.4 (Flink 1.15; Hologres 1.3.x)

Source table:

  • Added a JDBC mode for consuming Hologres Binlog data. This mode supports more data types and allows for custom accounts.

  • Added support for filter pushdown for batch source tables and for the full phase of source tables for full and incremental data.

Sink table:

Added support for writing data in Fixed Copy mode. Fixed Copy is a new feature in Hologres V1.3. Compared with JDBC mode, Fixed Copy mode provides higher throughput and lower data latency via streaming, and reduces client memory consumption by eliminating batching.

Hologres Catalog:

  • Added support for setting default table properties when you create a catalog.

sdkMode parameter: Different modes are available for different types of tables in Hologres. The sdkMode parameter is now used to consolidate mode selection.

VVR 4.0.18 (Flink 1.13; Hologres 1.1 and later)

Fixed an issue where reporting metrics for a sink table degraded write performance.

VVR 4.0.15 and 6.0.2 (Flink 1.13 and 1.15; Hologres 1.1 and later)

Source table:

  • Batch source tables are now case-sensitive by default.

  • Added support for configuring the transaction timeout for Scan operations on batch source tables.

  • Fixed an issue where parsing complex strings in batch source tables could fail.

  • Added an Upsert mode for source tables for full and incremental data.

Dimension table:

Added support for configuring an asynchronous request timeout (asyncTimeoutMs) for Hologres dimension tables.

Sink table:

  • Added support for the PARTITION BY syntax to define a partitioned table when creating a Hologres sink table with CTAS.

  • Added support for the currentSendTime metric.

VVR 4.0.13 (Flink 1.13; Hologres 1.1 and later)

  • Added support for source tables for full and incremental data.

  • Added support for the DataStream API.

VVR 4.0.11 (Flink 1.13; Hologres 0.10 and later)

Added support for CTAS and CDAS.

VVR 4.0.8 (Flink 1.13; Hologres 0.10 and later)

Added support for Hologres Catalog for sink tables, source tables, and dimension tables.

References: Manage Hologres catalogs

VVR 3.0.0 (Flink 1.13; Hologres 0.9 and later)

Added support for real-time data consumption from Hologres.

References: Fully managed Flink

Known issues and fixes

  • Notes on issues and fixes

    • The affected versions for each issue are clearly specified. Versions outside the listed range are not affected.

    • If the affected version is marked as "N/A", the issue may be a defect in the Hologres engine rather than the connector.

  • Severity levels

    • P0 (Critical): Immediate upgrade is recommended. Triggering this issue can affect production operations, such as query correctness or write success rates.

    • P1 (High): Upgrade is recommended to prevent potential issues.

    • P2 (Medium): Optional upgrade. These issues occur intermittently and can be resolved with a workaround or a job restart.

Each issue below is listed with its severity and description, followed by the affected versions, the fixed version, and the solution.

P0

When writing to a subset of columns in a result table, if unwritten fields have a time-related default value (such as current_timestamp or now()), the populated value may be incorrect. This occurs because the FixedFE mode does not handle time-related default values correctly.

Affected versions: 11.0-11.5. Fixed version: N/A.

Solution: Use the Flink-side now() function to pass the value to the corresponding field in the result table. Alternatively, set the connection.fixed.enabled parameter to false.

P0

During Binlog consumption, if a physical column and a metadata column share the same name, such as table_name, the connector incorrectly reads the value from the metadata column instead of the physical column, which results in incorrect data.

Affected versions: 8.0.11, 11.0-11.4. Fixed version: 11.5.

Solution: Upgrade to version 11.5 or later and set scan.binlog.prefer.physical-column.over.metadata to true. Alternatively, avoid declaring physical columns with the same names as metadata columns in the DDL of the Binlog source table.

P1

During Binlog consumption, column pruning can read data into the wrong columns. This occurs because the holo-client might retrieve unexpected columns when handling column pruning.

Affected versions: 11.3-11.5. Fixed version: hotfixes have been released for all affected versions (11.3-11.5).

Solution: A hotfix addresses this issue, so you are unlikely to encounter it. For DataStream jobs, use the latest connector version.

P2

The scanner for a one-to-many dimension table does not close properly when you stop the job. This can lead to resource leaks or job timeouts during shutdown. A problem in the holo-client's internal scanner shutdown logic causes this issue.

Affected versions: versions earlier than 11.3. Fixed version: 11.4.

Solution: Upgrade to version 11.4 or later.

P1

When you use the check-and-put feature, processing a delete record throws a NullPointerException (NPE) and causes the job to fail.

Affected versions: 8.0.11-11.4. Fixed version: 11.5.

Solution: Upgrade to version 11.5 or later. Alternatively, avoid using check-and-put on streams that contain delete operations.

P2

The connector fails to resume from a checkpoint if a table was dropped and recreated while the job was running.

Affected versions: 11.0-11.2. Fixed version: 11.3.

Solution: In a test environment, you can upgrade to version 11.3 or later to avoid this issue. Note that dropping a table during Binlog consumption affects data correctness, so avoid rebuilding a table during Binlog consumption in a production environment.

P1

Reading RoaringBitmap data through JDBC Binlog fails and throws a parsing exception.

Affected versions: 11.0-11.2. Fixed version: 11.3.

Solution: Upgrade to version 11.3 or later.

P1

When consuming from a physically partitioned table, if a shard has no data, the connector incorrectly initializes the state's Log Sequence Number (LSN) to 0. This causes data loss when the job resumes from this state.

Affected versions: versions earlier than 8.0.10, and 11.0-11.2. Fixed versions: 8.0.11 and 11.3.

Solution: Upgrade to version 8.0.11, or to version 11.3 or later.

P1

Writing a TEXT array that contains a null element throws an NPE and causes the write operation to fail.

Affected versions: 11.0-11.1. Fixed version: 11.2.

Solution: Upgrade to version 11.2 or later. Alternatively, ensure that the upstream TEXT array does not contain null elements.

P1

A conflict between the partition creation suffix and dynamic partitioning causes partition creation to fail.

Affected versions: 11.0-11.2. Fixed version: 11.3.

Solution: Upgrade to version 11.3 or later.

P2

Exceptions unrelated to dirty data can trigger the dirty data policy, causing valid exceptions to be handled incorrectly (e.g., silently discarded).

Affected versions: 11.0-11.1. Fixed version: 11.2.

Solution: Upgrade to version 11.2 or later.

P1

For a Full and Incremental Integration source table, JDBC Binlog starts consuming at LSN+1. If the current LSN is already in a checkpoint, resuming from it might skip one record.

Affected versions: 8.0.10 and earlier. Fixed version: 8.0.11.

Solution: Upgrade to version 8.0.11.

P2

In a CTAS scenario, if a column drop and a type change occur in the same operation, a type normalization error causes the schema change to fail.

Affected versions: 8.0.10 and earlier. Fixed version: 8.0.11.

Solution: Upgrade to version 8.0.11. Alternatively, avoid dropping a column and changing a type in the same operation.

P2

When using a catalog dimension table, the factory validation of Binlog parameters causes a false-positive exception.

Affected versions: 8.0.10 and earlier. Fixed version: 8.0.11.

Solution: Upgrade to version 8.0.11.

P2

In multi-table scenarios, an excessive number of JDBC polling requests causes the SQL Gateway to time out.

Affected versions: 11.0-11.5. Fixed version: 11.6.

Solution: Upgrade to version 11.6. Alternatively, reduce the number of Hologres tables in a single job.

P2

When FixedFE mode is selected, if a table contains a data type that is not supported by FixedFE, the connection does not automatically downgrade to a FE connection, causing write or query exceptions.

Affected versions: 11.0-11.2. Fixed version: 11.3.

Solution: Upgrade to version 11.3 or later. Alternatively, manually specify the FE connection mode.

P1

When consuming a Binlog in JDBC mode, a "Binlog Convert Failed" exception may occur, or data reading from some shards may stall. This happens because the Hologres instance gateway has an issue when it returns a backend timeout exception to the client, causing the read operation to hang or fail with a parsing error.

Affected version: N/A. Fixed version: N/A.

Solution: This issue is more likely to occur when job backpressure is high. If data reading stalls, restart the job and resume from the latest checkpoint. To completely resolve this issue, upgrade your Hologres instance to version 2.2.21 or later.

P2

Jobs deploy slowly or time out. A thread dump analysis shows that the process is stuck at GetTableSchema.

Affected version: N/A. Fixed version: N/A.

Solution: This issue can have multiple causes. You can troubleshoot it by following these steps:

  1. Verify the network connectivity between the Flink cluster and the Hologres instance.

  2. Set the jdbcRetryCount parameter to 1 to ensure the root cause of the exception is not hidden by internal retries.

  3. In Hologres V2.0 and earlier, frequent DDL operations can cause delays in metadata cleanup, which may slow down table metadata queries. We recommend upgrading your Hologres instance to V2.1 or later.

P0

When writing TEXT, JSON, or JSONB data to Hologres in FixedFE mode (which corresponds to the connector's jdbc_fixed mode), an invalid character in the data source can throw an unexpected exception. This can cause the connected FE node to restart, interrupting the connection.

Affected version: N/A. Fixed version: N/A.

Solution: If you cannot guarantee the validity of the upstream string, write the data in JDBC mode and enable the remove-u0000-in-text.enabled parameter for the result table. Alternatively, upgrade your Hologres instance to version 3.0 or later to continue using the jdbc_fixed mode.

P1

When you perform a one-to-many join on a JDBC dimension table, Flink tasks may experience high memory usage or an Out Of Memory (OOM) error.

Affected version: N/A. Fixed version: N/A.

Solution: In Hologres V1.3, if you use prefix scan and the number of query results exceeds the value of jdbcScanFetchSize, batch queries may not terminate. As a workaround, set jdbcScanFetchSize to a large value, such as 100000. To completely resolve this issue, upgrade your Hologres instance to V2.0 or later.

P1

A Binlog job throws a "the table id parsed from checkpoint is different from the current table id" exception during stateful recovery. This happens when the job performed a TRUNCATE operation or recreated the table during a previous run: the checkpoint stores the table ID from the job's initial startup, which no longer matches the current table ID.

Affected version: 8.0.4. Fixed version: 8.0.9.

Solution: Starting from version 8.0.9, the table ID check is no longer enforced; a warning is logged instead, which allows the job to resume from the latest state. However, avoid rebuilding a table while a Binlog job is running, because doing so causes all previous Binlog data to be lost.

P2

Backpressure occurs while a job is running. A thread dump shows that the execution pool is stuck in the close() or start() method. This can happen if multiple clients share the same connection pool, which may lead to a deadlock that prevents the connection pool from closing properly.

Affected version: 8.0.5. Fixed version: 8.0.9.

Solution: Upgrade the connector version.

P2

If you run a job for full and incremental consumption after performing a DELETE FROM operation on the source table, the full consumption phase re-consumes all Binlog data from the beginning because no data is available in the incremental phase.

Affected versions: 8.0.6 and earlier. Fixed version: 8.0.7.

Solution: Upgrade the connector version, or specify a start time for incremental consumption.

P1

If a dimension table contains a large number of fields, the job deployment times out.

Affected version: 8.0.6. Fixed version: 8.0.7.

Solution: Upgrade the connector version.

P0

When the enable_filter_push_down parameter is enabled for a batch source table, the filter does not take effect. As a result, the connector still reads data that should have been filtered.

Note

This issue does not affect source tables for Full and Incremental Integration or Binlog source tables.

Affected versions: 8.0.5 and earlier. Fixed version: 8.0.6.

Solution: Upgrade the connector version.

P0

When you write JSON or JSONB data to Hologres in FixedFE mode (which corresponds to the connector's jdbc_fixed mode), if the source data contains invalid JSON or JSONB fields, the connected FE node restarts and the FE connection is interrupted.

Affected versions: 8.0.5 and earlier. Fixed version: none.

Solution: If the validity of upstream JSON or JSONB strings cannot be guaranteed, use the JDBC mode to write the data.

P1

When performing a one-to-many join on a JDBC dimension table, internal exceptions such as connection failures are not properly thrown. This can manifest as backpressure on the asynchronous join node, causing the data flow to stop. This issue occurs rarely.

Affected versions: 6.0.7 and earlier. Fixed version: 8.0.3.

Solution: Upgrade the connector version. You can also restart the job as a temporary workaround.

P1

A memory leak occurs when consuming Binlog data in JDBC mode. This may manifest as a high consumption rate at the start of the job, which then continuously decreases.

Affected versions: 6.0.7 and earlier. Fixed version: 6.0.7.

Solution: Upgrade the connector version. For DataStream jobs, you must use the dependency of version 6.0.7-1.

P0

When writing in JDBC mode, exceptions captured during a scheduled flush (controlled by the jdbcWriteFlushInterval parameter) are not thrown until the next data record is processed. If write traffic is low, a checkpoint might be successfully created while an exception has been captured but not yet thrown. If a subsequent failure occurs, the job will resume from this invalid checkpoint, which can lead to data loss.

Affected versions: 6.0.6 and earlier. Fixed version: 6.0.7.

Solution: This issue is more likely to occur with low traffic. Upgrade the connector version, or set jdbcWriteFlushInterval to a value longer than the checkpoint interval.

P2

When consuming Binlog data in JDBC mode without setting a slot name, the system automatically creates one. If the table or schema name contains special characters, the automatically generated slot name is invalid and causes a syntax error.

Affected version: 6.0.6. Fixed version: 6.0.7.

Solution: Upgrade the connector version. For DataStream jobs, you must use the dependency of version 6.0.7-1.

P1

If different Hologres instances or databases in the same job use the same connectionPoolName, exceptions such as "table not found" can occur.

Affected versions: 6.0.6 and earlier. Fixed version: 6.0.7.

Solution: Use a different connectionPoolName for each Hologres instance or database used in the same job.

P1

An NPE is thrown if a dimension table contains a string field with a null value.

Affected version: 6.0.6. Fixed version: 6.0.7.

Solution: Upgrade the connector version.

P0

Filter pushdown is enabled by default for Hologres source tables. However, if a job also uses a Hologres dimension table, and the write DML contains a filter on a non-primary key field of the dimension table, the filter is also incorrectly pushed down to the dimension table. This can cause incorrect dimension table join results.

Affected versions: 6.0.3-6.0.5. Fixed version: 6.0.6.

Solution: Upgrade the connector version.

P0

If multiple result tables have different mutatetype settings but share the same connectionPoolName to reuse a connection pool, the mutatetype settings can be overwritten, causing them to not take effect.

Affected versions: 6.0.2 and earlier. Fixed version: 6.0.3.

Solution: Set the mutatetype for all result tables to InsertOrUpdate. Alternatively, use a different connectionPoolName for tables with different mutatetype settings.

P1

An NPE is thrown if the hg_binlog_timestamp_us field is declared in the DDL of a Binlog source table.

Affected version: 6.0.2. Fixed version: 6.0.3.

Solution: Do not use this field, or upgrade the connector version.

P1

Metric reporting affects the write performance of result tables. A thread dump of the sink node shows that it is stuck in reportWriteLatency.

Affected versions: 4.0.15-4.0.17. Fixed version: 4.0.18.

Solution: Use a version that is not affected by this issue.

P2

When reading data of the STRING or STRING ARRAY type from a batch source table, parsing fails if the data contains special characters.

Affected versions: 4.0.14 and earlier. Fixed version: 4.0.15.

Solution: Remove the dirty data from the source table, or upgrade the connector version.

P2

If you declare Binlog-specific fields, such as hg_binlog, in the DDL of a source table for Full and Incremental Integration, the full data cannot be consumed.

Affected version: 4.0.13. Fixed version: 4.0.14.

Solution: Avoid using the Full and Incremental Integration feature, or upgrade the connector version.