This section describes common issues in real-time integration and their solutions.
Which version of Oracle CDC does Dataphin real-time integration use?
Dataphin real-time integration used Oracle CDC 2.3 in versions earlier than V3.13; V3.13 and later versions use Oracle CDC 2.4.
Task error: Oracle LogMiner does not support execution on a Pluggable Database (PDB)
To read the redo log of a Pluggable Database (PDB) within a Container Database (CDB), configure the debezium.database.pdb.name parameter.
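In Dataphin this parameter is typically supplied as an additional source property. As a minimal sketch, the following Flink SQL definition of an oracle-cdc source shows where the parameter goes; the host, credentials, and object names are placeholders.
-- Minimal sketch of an oracle-cdc source reading a table inside a PDB (all names are placeholders).
CREATE TABLE products_source (
  ID INT,
  NAME STRING,
  PRIMARY KEY (ID) NOT ENFORCED
) WITH (
  'connector' = 'oracle-cdc',
  'hostname' = 'localhost',
  'port' = '1521',
  'username' = 'C##DBZUSER',
  'password' = '******',
  'database-name' = 'ORCLCDB',                -- the CDB
  'schema-name' = 'DEBEZIUM',
  'table-name' = 'PRODUCTS',
  'debezium.database.pdb.name' = 'ORCLPDB1'   -- the PDB whose redo log is read
);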
Task error: ORA-00942: table or view does not exist.
This error is a known issue in Oracle CDC 2.4, triggered by the statement UPDATE LOG_MINING_FLUSH SET LAST_SCN =. For more information, see oracle-cdc cannot read oracle multitenant pdb binlog.
To work around the issue, manually switch to the root container (CDB$ROOT) of the CDB and recreate the LOG_MINING_FLUSH table as follows:
-- Switch to the CDB instance.
ALTER SESSION SET CONTAINER = CDB$ROOT;
-- Create the table and insert data.
CREATE TABLE LOG_MINING_FLUSH(LAST_SCN NUMBER(19,0));
INSERT INTO LOG_MINING_FLUSH VALUES (0);
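As an optional sanity check (not part of the official fix), you can confirm that the table now exists in the root container and holds the initial row:
-- Still connected to CDB$ROOT: expect a single row with LAST_SCN = 0.
SELECT LAST_SCN FROM LOG_MINING_FLUSH;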
Task error: Table name exceeds 30 characters (Table 'xxx' won't be captured by Oracle LogMiner because its name exceeds 30 characters)
Ensure that table and column names are limited to 30 characters or fewer to be processed by Oracle LogMiner. This limitation is due to Oracle LogMiner's design. For more information, see LogMiner Requirements.
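To locate offending objects in advance, you can query the standard data dictionary views; the schema name DEBEZIUM below is a placeholder for your source schema.
-- Tables whose names exceed 30 characters and therefore cannot be captured by LogMiner.
SELECT OWNER, TABLE_NAME
FROM ALL_TABLES
WHERE OWNER = 'DEBEZIUM'
  AND LENGTH(TABLE_NAME) > 30;
-- Columns whose names exceed 30 characters.
SELECT OWNER, TABLE_NAME, COLUMN_NAME
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'DEBEZIUM'
  AND LENGTH(COLUMN_NAME) > 30;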
Runtime log error: DataException: file is not a valid field name.
Verify that the field names defined in Flink exactly match the column names of the Oracle table. This error occurs because the field file read from the Oracle LogMiner log is not defined in the table schema, so the column cannot be found.
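To compare the Flink field names with the actual Oracle columns, you can list the columns from the data dictionary; the owner and table name below are examples.
-- List the column names of the source table and compare them with the Flink schema.
SELECT COLUMN_NAME, DATA_TYPE
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'DEBEZIUM'
  AND TABLE_NAME = 'PF_ID_NAME_PK'
ORDER BY COLUMN_ID;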
Oracle LogMiner data is visible in the runtime log, but the task cannot read the data.
This is mainly caused by the architecture of Oracle CDC, which results in low read efficiency, especially in scenarios with large data volumes. Dataphin real-time integration reads Oracle data through the following process:
1. Retrieve the startScn and endScn for each read operation.
2. Identify all archiveLog and onlineLog files that contain the startScn (files with SCN >= startScn will be processed).
3. Use Oracle LogMiner to parse these files with DBMS_LOGMNR.START_LOGMNR (this step is time-consuming).
4. Extract the parsed CDC data from V$LOGMNR_CONTENTS (this step is also slow).
When data volumes are large, each fetch must re-parse the logs, and the use of the online dictionary adds further resource overhead, so Oracle CDC reads can slow down significantly. A simplified SQL illustration of steps 1 and 2 is shown below.
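The following sketch is a rough illustration of steps 1 and 2, not the exact statements issued by the connector; the SCN value 2753123 is a placeholder.
-- Step 1 (simplified): obtain a starting SCN.
SELECT CURRENT_SCN FROM V$DATABASE;
-- Step 2 (simplified): archived logs whose SCN range covers the start SCN (placeholder value 2753123).
SELECT NAME, FIRST_CHANGE#, NEXT_CHANGE#
FROM V$ARCHIVED_LOG
WHERE 2753123 BETWEEN FIRST_CHANGE# AND NEXT_CHANGE#
  AND STATUS = 'A'
  AND DELETED = 'NO';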
Appendix: Troubleshooting
Troubleshooting methods
For unresolved common issues, employ the following troubleshooting methods.
Many Oracle CDC issues stem from insufficient permissions. The required privileges differ between CDB and non-CDB modes, so confirm that the database user is configured correctly for your environment (CDB or non-CDB). For more information, see Connector properties.
Use the following SQL commands to check user permissions:
-- Query the current account:
select user from dual;
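-- Query the system and object privileges granted to the CDC user: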
SELECT * FROM DBA_SYS_PRIVS WHERE GRANTEE = 'C##DBZUSER';
SELECT TABLE_NAME FROM DBA_TAB_PRIVS WHERE GRANTEE = 'C##DBZUSER';
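If required privileges are missing, grant them to the CDC user. The following is an abbreviated sketch for CDB mode based on the grants listed in the connector documentation (in non-CDB mode, drop CONTAINER=ALL); refer to Connector properties for the complete, authoritative list.
-- Abbreviated example for CDB mode; see the connector documentation for the full list of grants.
GRANT CREATE SESSION TO C##DBZUSER CONTAINER=ALL;
GRANT SET CONTAINER TO C##DBZUSER CONTAINER=ALL;
GRANT LOGMINING TO C##DBZUSER CONTAINER=ALL;
GRANT SELECT ANY TABLE TO C##DBZUSER CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO C##DBZUSER CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO C##DBZUSER CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO C##DBZUSER CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO C##DBZUSER CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO C##DBZUSER CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE TO C##DBZUSER CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO C##DBZUSER CONTAINER=ALL;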
Troubleshooting commands
Common commands
-- Switch between CDB and PDB
ALTER SESSION SET CONTAINER = CDB$ROOT;
-- Query the current user
select user from dual;
-- Check whether supplemental logging (supplemental log groups) is enabled for a table
select * from dba_log_groups where table_name = 'PF_ID_NAME_PK';
-- Query user permissions
SELECT * FROM DBA_SYS_PRIVS WHERE GRANTEE = 'C##DBZUSER';
SELECT TABLE_NAME FROM DBA_TAB_PRIVS WHERE GRANTEE = 'C##DBZUSER';
-- Get the current SCN
SELECT CURRENT_SCN FROM V$DATABASE;
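When a table produces no change data at all, it is also worth confirming that database-level supplemental logging and ARCHIVELOG mode are enabled; a minimal check (assuming access to V$DATABASE) looks like this:
-- Database-level supplemental logging (SUPPLEMENTAL_LOG_DATA_MIN should not be NO).
SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;
-- The database must run in ARCHIVELOG mode for LogMiner to read archived logs.
SELECT LOG_MODE FROM V$DATABASE;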
Redo log analysis (LogMiner)
-- Analyze the redo log
BEGIN
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/product/19c/dbhome_1/dbs/arch1_83_1158061769.dbf', OPTIONS => DBMS_LOGMNR.NEW);
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/oradata/ORCLCDB/redo01.log', OPTIONS => DBMS_LOGMNR.ADD_FILE);
DBMS_LOGMNR.START_LOGMNR(Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
-- Stop log analysis
exec DBMS_LOGMNR.END_LOGMNR();
-- Query the contents of the redo log, which will contain change records
SELECT SEG_NAME,TIMESTAMP,OPERATION FROM V$LOGMNR_CONTENTS WHERE SEG_NAME = 'PF_ID_NAME_PK' ORDER BY TIMESTAMP;
Redo and archived log file name retrieval
-- View online log file names
SELECT V$LOGFILE.MEMBER NAME,
V$LOG.THREAD# THREAD_NUMBER,
V$LOG.SEQUENCE# SEQUENCE_NUMBER,
V$LOG.FIRST_CHANGE# FIRST_CHANGE_NUMBER,
LEAD(V$LOG.FIRST_CHANGE#, 1, 281474976710655)
OVER (ORDER BY V$LOG.SEQUENCE#) NEXT_CHANGE_NUMBER,
TO_CHAR(V$LOG.FIRST_TIME, 'YYYY-MM-DD HH24:MI:SS') FIRST_TIME,
TO_CHAR(LEAD(V$LOG.FIRST_TIME, 1, NULL) OVER (ORDER BY V$LOG.SEQUENCE#), 'YYYY-MM-DD HH24:MI:SS') NEXT_TIME,
0 BLOCK_SIZE,
V$LOG.BYTES BYTES,
V$LOG.GROUP# GROUP_NUMBER,
V$LOG.MEMBERS MEMBERS,
V$LOG.ARCHIVED ARCHIVED,
V$LOG.STATUS STATUS
FROM V$LOG,
V$LOGFILE
WHERE (V$LOG.STATUS = 'CURRENT' OR V$LOG.STATUS = 'ACTIVE' OR V$LOG.STATUS = 'INACTIVE')
AND V$LOG.GROUP# = V$LOGFILE.GROUP#
AND V$LOG.THREAD# = 1
ORDER BY V$LOG.SEQUENCE#;
-- Query archive log
SELECT NAME,
THREAD# THREAD_NUMBER,
SEQUENCE# SEQUENCE_NUMBER,
FIRST_CHANGE# FIRST_CHANGE_NUMBER,
NEXT_CHANGE# NEXT_CHANGE_NUMBER,
TO_CHAR(FIRST_TIME, 'YYYY-MM-DD HH24:MI:SS') FIRST_TIME,
TO_CHAR(NEXT_TIME, 'YYYY-MM-DD HH24:MI:SS') NEXT_TIME,
BLOCK_SIZE,
BLOCKS,
DEST_ID,
RESETLOGS_ID,
RESETLOGS_CHANGE# RESETLOGS_CHANGE_NUMBER,
RESETLOGS_TIME,
STATUS,
CREATOR,
REGISTRAR,
APPLIED,
FAL,
DELETED
FROM V$ARCHIVED_LOG
WHERE NAME IS NOT NULL
AND STATUS = 'A'
AND DELETED = 'NO'
AND ARCHIVED = 'YES'
AND STANDBY_DEST = 'NO'
-- AND THREAD# = 1
-- AND RESETLOGS_ID = 1158061769
AND FIRST_TIME <= TO_TIMESTAMP('2024-02-22 11:30:00', 'yyyy-MM-dd hh24:mi:ss')
AND NEXT_TIME > TO_TIMESTAMP('2024-02-22 12:00:00', 'yyyy-MM-dd hh24:mi:ss');