This topic describes the data types and parameters that are supported by Oracle Reader and how to configure Oracle Reader by using the codeless user interface (UI) and code editor.

Oracle Reader can read data from Oracle.
Note
  • ApsaraDB RDS and DRDS do not support Oracle.
  • Oracle Reader uses the ojdbc7-12.1.0.2.jar driver to connect to Oracle databases. For more information about the supported versions of Oracle JDBC drivers, see Oracle JDBC FAQ.
Oracle Reader connects to a remote Oracle database by using Java Database Connectivity (JDBC), generates an SQL statement based on your configurations, and then sends the statement to the database. The system executes the statement on the database and returns data. Then, Oracle Reader assembles the returned data into abstract datasets of the data types supported by Data Integration and sends the datasets to a writer.
  • Oracle Reader generates the SQL statement based on the settings of the table, column, and where parameters and sends the statement to the Oracle database.
  • If you set the querySql parameter, Oracle Reader sends the value of this parameter to the Oracle database.
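
For example, with the following settings, Oracle Reader generates and sends a statement similar to the one shown in the comments. This is a minimal sketch: the table name orders, the column names, and the filter condition are placeholder assumptions, not values from your environment.

"table":"orders",
"column":["id","name"],
"where":"gmt_create > TO_DATE('2023-01-01','YYYY-MM-DD')"
// Based on these settings, Oracle Reader generates a statement similar to the following and sends it to the database:
// SELECT id,name FROM orders WHERE gmt_create > TO_DATE('2023-01-01','YYYY-MM-DD')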

Data types

Oracle Reader supports most Oracle data types. Make sure that the data types of your database are supported.

The following table describes the data types that are supported by Oracle Reader.
Category Oracle data type
Integer NUMBER, ROWID, INTEGER, INT, and SMALLINT
Floating point NUMERIC, DECIMAL, FLOAT, DOUBLE PRECISION, and REAL
String LONG, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, CLOB, NCLOB, CHARACTER, CHARACTER VARYING, CHAR VARYING, NATIONAL CHARACTER, NATIONAL CHAR, NATIONAL CHARACTER VARYING, NATIONAL CHAR VARYING, and NCHAR VARYING
Date and time TIMESTAMP and DATE
Boolean BIT and BOOLEAN
Binary BLOB, BFILE, RAW, and LONG RAW

Parameters

Parameter Description Required Default value
datasource The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor. Yes No default value
table The name of the table from which you want to read data. Yes No default value
column The names of the columns from which you want to read data. Specify the names in a JSON array. The default value is ["*"], which indicates all the columns in the source table.
  • You can select specific columns to read.
  • The column order can be changed. This means that you can specify columns in an order different from the order defined in the schema of the source table.
  • Constants are supported. The column names must be arranged in JSON format.
    ["id", "1", "'mingya.wmy'", "null", "to_char(a + 1)", "2.3" , "true"]
    • id: a column name.
    • 1: an integer constant.
    • 'mingya.wmy': a string constant, which is enclosed in single quotation marks (').
    • null: a null value.
    • to_char(a + 1): a function expression.
    • 2.3: a floating-point constant.
    • true: a Boolean value.
  • The column parameter cannot be left empty.
Yes No default value
splitFactor The shard factor, which determines the number of shards into which the data to be synchronized is distributed. If you configure multiple parallel threads, the number of shards equals the number of parallel threads multiplied by the value of the splitFactor parameter. For example, if the number of parallel threads is 5 and the splitFactor parameter is set to 5, the five parallel threads are used to perform sharding and data is distributed into 25 shards. For an example, see the configuration sketch after this table.
Note We recommend that you set this parameter in the range of 1 to 100. If you set this parameter to an excessively large value, an out of memory (OOM) error may occur during data synchronization.
No 5
splitMode The shard mode. Valid values:
  • averageInterval: average sampling. In this mode, the maximum and minimum values of all data are identified based on the splitPk parameter. Then, data is evenly distributed based on the number of shards.
  • randomSampling: random sampling. In this mode, data entries are randomly identified as sharding points.
Note
  • If the splitPk parameter is set to a string field, set the splitMode parameter to randomSampling.
  • If the splitMode parameter is set to averageInterval, you can set the splitPk parameter only to a field of a numeric data type.
No randomSampling
splitPk The field that is used for data sharding when Oracle Reader reads data. If you configure this parameter, the source table is sharded based on the value of this parameter. Data Integration then runs parallel threads to read data. This improves data synchronization efficiency.
  • We recommend that you set the splitPk parameter to the name of the primary key column of the table. Data can be evenly distributed into different shards based on the primary key column, instead of being intensively distributed only into specific shards.
  • You can set the splitPk parameter to a field of any data type.
  • If you do not configure the splitPk parameter, Oracle Reader uses a single thread to read all data in the source table.
Note If you use Oracle Reader to read data from a view, you cannot set the splitPk parameter to a field of the ROWID data type.
No No default value
where The WHERE clause. Oracle Reader generates an SQL statement based on the settings of the table, column, and where parameters and uses the statement to read data. For example, you can set this parameter to row_number() during a test.
  • You can use the WHERE clause to read incremental data.
  • If the where parameter is not provided or is left empty, Data Integration reads all data.
No No default value
querySql (available only in the code editor) The SQL statement that is used for refined data filtering. If you configure this parameter, Data Integration filters data based on the value of this parameter. For example, if you want to join multiple tables for data synchronization, you can set this parameter to select a,b from table_a join table_b on table_a.id = table_b.id. If you configure this parameter, Oracle Reader ignores the settings of the table, column, and where parameters. No No default value
fetchSize The number of data records to read at a time. This parameter determines the number of interactions between Data Integration and the database and affects read efficiency.
Note If you set this parameter to a value greater than 2048, an OOM error may occur during data synchronization.
No 1,024
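
The following snippet is a minimal sketch that shows how the preceding parameters fit together in the parameter section of a reader configuration in the code editor. The data source name, table name, column names, and the shard key id are placeholder assumptions; replace them with values from your own environment.

"parameter":{
    "datasource":"oracle_source",// Placeholder name of an added Oracle data source. 
    "table":"orders",// Placeholder table name. 
    "column":["id","name","gmt_create"],
    "where":"gmt_create > TO_DATE('2023-01-01','YYYY-MM-DD')",// Optional filter, for example, for incremental reads. 
    "splitPk":"id",// Placeholder numeric primary key column that is used as the shard key. 
    "splitMode":"averageInterval",// averageInterval requires a numeric splitPk. Use randomSampling for a string splitPk. 
    "splitFactor":5,// With 5 parallel threads, data is distributed into 25 shards. 
    "fetchSize":1024
}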

Configure Oracle Reader by using the codeless UI

  1. Configure data sources.
    Configure Source and Target for the synchronization node.
    Parameter Description
    Connection The name of the data source from which you want to read data. This parameter is equivalent to the datasource parameter that is described in the preceding section.
    Table The name of the table from which you want to read data. This parameter is equivalent to the table parameter that is described in the preceding section.
    Filter The condition that is used to filter the data you want to read. Filtering based on the LIMIT keyword is not supported. The SQL syntax is determined by the selected data source.
    Shard Key The shard key. You can use a column in the source table as the shard key. We recommend that you use the primary key column or an indexed column. Only integer columns are supported by the codeless UI. If you want to use a column of other data types such as string, floating point, and date, use the code editor to configure Oracle Reader.
    If you configure this parameter, data sharding is performed based on the value of this parameter, and parallel threads can be used to read data. This improves data synchronization efficiency.
    Note The Shard Key parameter is displayed only after you select the data source for the synchronization node.
  2. Configure field mappings. This operation is equivalent to setting the column parameter that is described in the preceding section.
    Fields in the source on the left have a one-to-one mapping with fields in the destination on the right. You can click Add to add a field. To remove an added field, move the pointer over the field and click the Remove icon.
    Operation Description
    Map Fields with the Same Name Click Map Fields with the Same Name to establish mappings between fields with the same name. The data types of the fields must match.
    Map Fields in the Same Line Click Map Fields in the Same Line to establish mappings between fields in the same row. The data types of the fields must match.
    Delete All Mappings Click Delete All Mappings to remove the mappings that are established.
    Auto Layout Click Auto Layout to sort the fields based on specific rules.
    Change Fields Click the Change Fields icon. In the Change Fields dialog box, you can manually edit the fields in the source table. Each field occupies a row. The first and the last blank rows are included, whereas other blank rows are ignored.
     Add Click Add to add a field. Take note of the following rules when you add a field:
     • You can enter constants. Each constant must be enclosed in single quotation marks ('), such as 'abc' and '123'.
    • You can use scheduling parameters, such as ${bizdate}.
    • You can enter functions that are supported by relational databases, such as now() and count(1).
    • If the field that you entered cannot be parsed, the value of Type for the field is Unidentified.
  3. Configure channel control policies.
    Parameter Description
    Expected Maximum Concurrency The maximum number of parallel threads that the synchronization node uses to read data from the source or write data to the destination. You can configure the parallelism for the synchronization node on the codeless UI.
    Bandwidth Throttling Specifies whether to enable bandwidth throttling. You can enable bandwidth throttling and specify a maximum transmission rate to prevent heavy read workloads on the source. We recommend that you enable bandwidth throttling and set the maximum transmission rate to an appropriate value based on the configurations of the source.
    Dirty Data Records Allowed The maximum number of dirty data records allowed.
     Distributed Execution The distributed execution mode allows you to split your node into pieces and distribute them to multiple Elastic Compute Service (ECS) instances for parallel execution. This speeds up synchronization. If you use a large number of parallel threads to run your synchronization node in distributed execution mode, excessive access requests are sent to the data sources. Therefore, before you use the distributed execution mode, you must evaluate the access load on the data sources. You can enable this mode only if you use an exclusive resource group for Data Integration. For more information about exclusive resource groups for Data Integration, see Exclusive resource groups for Data Integration and Create and use an exclusive resource group for Data Integration.

Configure Oracle Reader by using the code editor

In the following code, a synchronization node is configured to read data from an Oracle database:
{
    "type":"job",
    "version":"2.0",// The version number. 
    "steps":[
        {
            "stepType":"oracle",
            "parameter":{
                "fetchSize":1024,// The number of data records to read at a time. 
                "datasource":"",// The name of the data source. 
                "column":[// The names of the columns from which you want to read data. 
                    "id",
                    "name"
                ],
                "where":"",// The WHERE clause. 
                "splitPk":"",// The shard key. 
                "table":""// The name of the table from which you want to read data. 
            },
            "name":"Reader",
            "category":"reader"
        },
        {
            "stepType":"stream",
            "parameter":{},
            "name":"Writer",
            "category":"writer"
        }
    ],
    "setting":{
        "errorLimit":{
            "record":"0"// The maximum number of dirty data records allowed. 
        },
        "speed":{
            "throttle":true,// Specifies whether to enable bandwidth throttling. The value false indicates that bandwidth throttling is disabled, and the value true indicates that bandwidth throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true. 
            "concurrent":1 // The maximum number of parallel threads. 
            "mbps":"12"// The maximum transmission rate.
        }
    },
    "order":{
        "hops":[
            {
                "from":"Reader",
                "to":"Writer"
            }
        ]
    }
} "to":"Writer"
            }
        ]
    }
}
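
If you use the querySql parameter instead of the table, column, and where parameters, you can configure the reader step as shown in the following sketch. The data source name is left empty as a placeholder, and the SQL statement is the join example from the preceding parameter description.

{
    "stepType":"oracle",
    "parameter":{
        "datasource":"",// The name of the data source. 
        "fetchSize":1024,// The number of data records to read at a time. 
        "querySql":"select a,b from table_a join table_b on table_a.id = table_b.id"// If this parameter is configured, the table, column, and where parameters are ignored. 
    },
    "name":"Reader",
    "category":"reader"
}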

Additional information

  • Data synchronization between primary and secondary databases

    A secondary Oracle database can be deployed for disaster recovery. The secondary database continuously synchronizes data from the primary database based on the logs of the primary database. Data latency between the primary and secondary databases cannot be prevented. This may result in data inconsistency.

  • Data consistency control

    Oracle is a relational database management system (RDBMS) that supports strong consistency for data queries. A database snapshot is created before a synchronization node starts. Oracle Reader reads data from the database snapshot. Therefore, if new data is written to the database during data synchronization, Oracle Reader cannot obtain the new data.

    Data consistency cannot be ensured if you enable Oracle Reader to use parallel threads to read data in a synchronization node.

    Oracle Reader shards the source table based on the value of the splitPk parameter and uses parallel threads to read data. These parallel threads belong to different transactions and read data at different points in time. Therefore, the parallel threads observe different snapshots.

    Data inconsistencies cannot be prevented if parallel threads are used for a synchronization node. The following workarounds can be used:
    • Enable Oracle Reader to use a single thread to read data in a synchronization node. To do so, do not specify a shard key for Oracle Reader. This way, data consistency is ensured, but data is synchronized less efficiently.
    • Make sure that no data is written to the source table during data synchronization. This ensures that the data in the source table remains unchanged during data synchronization. For example, you can lock the source table or disable data synchronization between primary and secondary databases. This way, data is efficiently synchronized, but your ongoing services may be interrupted.
  • Character encoding

    Oracle Reader uses JDBC to read data. This enables Oracle Reader to automatically convert the encoding format of characters. Therefore, you do not need to specify the encoding format.

  • Incremental data synchronization
    Oracle Reader connects to a database by using JDBC and uses a SELECT statement with a WHERE clause to read incremental data.
    • For batch data, incremental add, update, and delete operations (including logical delete operations) are distinguished by timestamps. Specify the WHERE clause based on a specific timestamp. The time indicated by the timestamp must be later than the latest timestamp in the previous synchronization.
    • For streaming data, specify the WHERE clause based on the ID of a specific record. The ID must be greater than the maximum ID involved in the previous synchronization.

    If the data that is added or modified cannot be distinguished, Oracle Reader can read only full data.
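
    The following WHERE conditions illustrate these rules. The column names gmt_modified and id, the timestamp, and the ID value are placeholders; replace them with values from your own environment.
    // Batch data: read records whose timestamp is later than the latest timestamp in the previous synchronization. 
    "where":"gmt_modified > TO_DATE('2023-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS')"
    // Streaming data: read records whose ID is greater than the maximum ID involved in the previous synchronization. 
    "where":"id > 10000"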

  • Syntax validation

    Oracle Reader allows you to specify custom SELECT statements by using the querySql parameter but does not verify the syntax of these statements.