
Incremental synchronization (script mode)

Last Updated: Mar 20, 2018

Data Integration supports data synchronization in wizard mode and script mode. Wizard mode is simpler, while script mode is more flexible.

This section describes how to synchronize incremental data from Table Store to OSS using the script mode of Data Integration.

Channels

  • Script mode of Data Integration

    • Reader: OTSStreamReader
    • Writer: OSSWriter

Configure Table Store

No prior configuration is required.

Configure OSS

No prior configuration is required.

Configure Data Integration

  1. Create a Table Store data source.

    Note:

    • If you have already created a Table Store data source, skip this step.
    • If you do not want to create a data source, you can specify the endpoint, instanceName, AccessKeyID, and AccessKeySecret on the subsequent configuration page.

    For more information about how to create a data source, see Create a Table Store data source.

  2. Create an OSS data source.

    This step is similar to Step 1. You only need to select OSS as the data source.

    Note: When configuring the parameters of the OSS data source, the Endpoint must not contain the bucket name, for example, http://oss-cn-hangzhou.aliyuncs.com rather than http://mybucket.oss-cn-hangzhou.aliyuncs.com. For more information, see OSS endpoint introduction.

  3. Create a synchronization task.

    1. Log on to the Data Integration console.

    2. On the Sync Tasks page, select Script Mode.

    3. In the Import Template dialog box that appears, set Source Type to Table Store Stream (OTS Stream) and the target type to OSS.

    4. Click OK to go to the configuration page.

  4. Set configuration items.

    1. On the configuration page, templates of OTSStreamReader and OSSWriter are provided. Complete the configurations by referring to the following annotations.

      {
          "type": "job",
          "version": "1.0",
          "configuration": {
              "setting": {
                  "errorLimit": {
                      "record": "0" # Maximum number of errors allowed. If the number of errors exceeds this value, the synchronization task fails.
                  },
                  "speed": {
                      "mbps": "1", # Maximum traffic of each synchronization task.
                      "concurrent": "1" # Number of concurrent synchronization tasks each time.
                  }
              },
              "reader": {
                  "plugin": "otsstream", # Name of the Reader plug-in.
                  "parameter": {
                      "datasource": "", # Name of the Table Store data source. If this parameter is set, you do not need to set endpoint, accessId, accessKey, and instanceName.
                      "dataTable": "", # Name of the table in Table Store.
                      "statusTable": "TableStoreStreamReaderStatusTable", # Table that stores the Table Store Stream status. The default value is recommended.
                      "startTimestampMillis": "", # Start time of the export, in milliseconds. In incremental export mode, the task runs cyclically and the start time differs at each execution, so you must set a variable, for example, ${start_time}.
                      "endTimestampMillis": "", # End time of the export, in milliseconds. You must set a variable, for example, ${end_time}.
                      "date": "yyyyMMdd", # Date of the data to export. This parameter is redundant with startTimestampMillis and endTimestampMillis, so delete it.
                      "mode": "single_version_and_update_only", # Format of the data exported from Table Store Stream. Currently, this parameter must be set to single_version_and_update_only. Add it if it is not in the configuration template.
                      "column": [ # Columns to be exported from Table Store to OSS. Add this parameter if it is not in the configuration template, and set it as needed.
                          {
                              "name": "uid" # Name of a column. It is a primary key column in Table Store.
                          },
                          {
                              "name": "name" # Name of a column. It is an attribute column in Table Store.
                          }
                      ],
                      "isExportSequenceInfo": false, # Whether to export sequence information. This parameter can only be set to false in single_version_and_update_only mode.
                      "maxRetries": 30 # Maximum number of retries.
                  }
              },
              "writer": {
                  "plugin": "oss", # Name of the Writer plug-in.
                  "parameter": {
                      "datasource": "", # Name of the OSS data source.
                      "object": "", # Prefix of the names of the files to be backed up to OSS. The recommended value combines the Table Store instance name, table name, and date, for example, "instance/table/{date}".
                      "writeMode": "truncate", # truncate, append, and nonConflict are supported. truncate overwrites existing files with the same name, append appends the data to existing files with the same name, and nonConflict returns an error when files with the same name exist.
                      "fileFormat": "csv", # File format.
                      "encoding": "UTF-8", # Encoding mode.
                      "nullFormat": "null", # String that represents a null value in the text file.
                      "dateFormat": "yyyy-MM-dd HH:mm:ss", # Time format.
                      "fieldDelimiter": "," # Column delimiter.
                  }
              }
          }
      }

      Note: For detailed configuration descriptions, see Configure OTSStreamReader and Configure OSSWriter.

    2. Click Save.

  5. Run the task.

    1. Click Run.

    2. In the dialog box that appears, set the variable parameters. A sketch for computing test values of start_time and end_time follows this step.

    3. Click OK.

    4. After the task is completed, log on to the OSS console to verify whether files are backed up.
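    For a manual test run, you must supply literal values for the two time variables. The following is a minimal Python sketch, assuming that startTimestampMillis and endTimestampMillis take Unix timestamps in milliseconds (as their names suggest); the 5-minute window and safety margin are illustrative:

      # Compute an illustrative 5-minute export window as millisecond timestamps.
      # In scheduled runs, the scheduler fills in ${start_time} and ${end_time} instead.
      from datetime import datetime, timedelta

      end = datetime.now() - timedelta(minutes=5)   # leave a 5-minute safety margin
      start = end - timedelta(minutes=5)            # a 5-minute window

      start_time = int(start.timestamp() * 1000)    # value for startTimestampMillis
      end_time = int(end.timestamp() * 1000)        # value for endTimestampMillis
      print(start_time, end_time)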

  6. Configure scheduling.

    1. Click Submit.

    2. In the dialog box that appears, set the scheduling parameters.

      Scheduling parameters

      The parameters are described as follows.

      • Scheduling type: Select cycle control.
      • Automatically re-run: If the task fails, it is rerun three times at an interval of 2 minutes.
      • Start date: The default value is recommended, which is from January 1, 1970 to 100 years later.
      • Scheduling cycle: Select Minute.
      • Start time: Select "00:00 to 23:59", which indicates that scheduling runs for the full day.
      • Interval: Select 5 Minutes.
      • start_time: Enter $[yyyymmddhh24miss-10/24/60], which is the scheduled time of the task minus 10 minutes (10/24/60 of a day). A sketch showing how these expressions expand follows step 7.
      • end_time: Enter $[yyyymmddhh24miss-5/24/60], which is the scheduled time of the task minus 5 minutes.
      • date: Enter ${bdp.system.bizdate}, which indicates the scheduling date.
      • Dependency attributes: Set this parameter if a dependency exists; otherwise, leave it empty.
      • Cross-cycle dependency: Select the second option.
  7. Click OK.

    The periodic synchronization task is now configured, and the configuration file status changes to Read-only.
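
    The start_time and end_time expressions subtract a fraction of a day from the task's scheduled time: 10/24/60 of a day is 10 minutes. A minimal Python sketch of how the expressions expand, using an illustrative scheduled time of 2018-03-20 12:00:00:

      # How the scheduling expressions expand (illustrative scheduled time).
      #   $[yyyymmddhh24miss-10/24/60] -> scheduled time minus 10 minutes
      #   $[yyyymmddhh24miss-5/24/60]  -> scheduled time minus 5 minutes
      from datetime import datetime, timedelta

      scheduled = datetime(2018, 3, 20, 12, 0, 0)
      fmt = "%Y%m%d%H%M%S"  # yyyymmddhh24miss
      print((scheduled - timedelta(minutes=10)).strftime(fmt))  # start_time: 20180320115000
      print((scheduled - timedelta(minutes=5)).strftime(fmt))   # end_time:   20180320115500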

  8. Check the task.

    1. At the top of the page, click Operation Center.

    2. In the left-side navigation pane, choose Task List > Cycle Task to view the created synchronization task.

    3. The new task begins running at 00:00 on the next day.

      • In the left-side navigation pane, choose Task O&M > Cycle Instance to view each pre-created synchronization task of the day. The scheduling interval is 5 minutes and each task processes data from the past 5 to 10 minutes.

      • Click the instance name to view its details.

    4. You can view the log when a task is running or after it is completed.

  9. Check the data exported to OSS.

    Log on to the OSS console to check whether a new file is generated and whether the file content is correct.

Once the preceding settings are completed, data in Table Store can be automatically synchronized to OSS at a latency of 5 to 10 minutes.
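
To spot-check the backup without the console, you can also list the exported objects with the oss2 Python SDK. A minimal sketch; the credentials, endpoint, bucket name, and object prefix below are placeholders that must match your own configuration:

    # List the objects that the synchronization task wrote to OSS.
    # All values in angle brackets are placeholders.
    import oss2

    auth = oss2.Auth('<AccessKeyID>', '<AccessKeySecret>')
    bucket = oss2.Bucket(auth, 'http://oss-cn-hangzhou.aliyuncs.com', '<bucket-name>')

    # The prefix should match the "object" setting of OSSWriter, e.g. "instance/table/".
    for obj in oss2.ObjectIterator(bucket, prefix='instance/table/'):
        print(obj.key, obj.size, obj.last_modified)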
