Prerequisites

  1. Lindorm Tunnel Service (LTS), which supports data migration and synchronization, is purchased. The account and password for the LTS web UI are set, and you are logged on to the LTS web UI.
  2. The LTS cluster and the destination HBase cluster that are used for the migration are connected to your ApsaraDB RDS instance.
  3. An HBase cluster data source or a Phoenix data source is added.
  4. An ApsaraDB RDS data source is added.

Versions

  • HBase
    • Self-managed HBase V1.x and V2.x
    • EMR HBase
    • ApsaraDB for HBase Standard Edition and ApsaraDB for Lindorm (cluster edition)
  • MySQL
    • Self-managed MySQL
    • ApsaraDB RDS for MySQL

Create a task

  1. Log on to the LTS web UI and choose Data Import and Export > RDS Full Data Migration.
  2. Click Create Migration Task, select the ApsaraDB RDS data source and the destination HBase (Phoenix) cluster, and then enter the mapping information about the table to be migrated. Example mappings are provided in the Parameter description section, and a sketch of a matching source table follows this list.
  3. View the task progress.
  4. After data is migrated to the destination cluster, view the HBase table.
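
The mapping examples in the Parameter description section read from a source table named rds.test. The following is a minimal sketch of a source MySQL table that would match those example queries. All column names and types here are illustrative assumptions, not a schema required by LTS.

    -- Hypothetical source table used only to illustrate the example mappings below.
    CREATE DATABASE IF NOT EXISTS rds;
    CREATE TABLE rds.test (
      id         INT NOT NULL PRIMARY KEY, -- referenced in the WHERE clauses that split the querySql statements
      title      VARCHAR(255),             -- concatenated with id for f1:col1 in the HBase example
      content    TEXT,                     -- written to f1:col2 in the HBase example
      ts         TIMESTAMP NULL,           -- the remaining columns appear only in the Phoenix example
      `datetime` DATETIME,
      `date`     DATE,
      `time`     TIME,
      b          TINYINT(1),
      f          FLOAT,
      d          DOUBLE
    );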

Parameter description

  • HBase table mapping
    {
      "reader": {
        "querySql": [// Split the task based on the number of query SQL statements. Then, run the task in a distributed manner.
          "select id, title, content from rds.test where id < 8",
          "select id, title, content from rds.test where id >= 8"
        ]
      },
      "writer": {
        "columns": [
          {
            "name": "f1:col1",
            "value": "{{ concat(title, id) }}" // Concatenate the title and id fields in MySQL to use the concatenation result as the value of the f1:col1 column in HBase.
          },
          {
            "name": "f1:col2",
            "value": "content",
            "type": "string" // The type field is optional. By default, data is processed and written to HBase as strings.
          },
          {
            "name": "f1:*" // The columns that are not matched in MySQL follow the default matching settings.
          }
        ],
        "rowkey": {
          "value": "{{ concat('idg', id) }}"
        },
        "tableName": "default:t1"
      }
    }
                        
    • Simple calculation functions written in Jtwig syntax are supported. For example:
      {
        "name": "cf1:hhh",
        "value": "{{ concat(title, id) }}"
      }
                                  
    • Dynamic columns are supported. Source columns that are not explicitly mapped follow the default mapping settings. For example:
      {
        "name": "cf1:*"
      }
                                  
  • Phoenix table mapping
    {
      "reader": {
        "querySql": [
          "select id, title, ts, datetime, date, time, b, f, d from rds.test where id < 8",
          "select id, title, ts, datetime, date, time, b, f, d from rds.test where id >= 8"
        ]
      },
      "writer": {
        "columns": [
          {
            "isPk": true,
            "name": "id"
          },
          {
            "name": "title",
            "value": "title" // The title field in MySQL corresponds to the title in Phoenix. This parameter is optional if the two field names are the same.
          },
          {
            "name": "ts"
          },
          {
            "name": "datetime"
          },
          {
            "name": "date"
          },
          {
            "name": "time"
          },
          {
            "name": "b"
          },
          {
            "name": "f"
          },
          {
            "name": "d"
          }
        ],
        "tableName": "dtstest"
      }
    }
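
The Phoenix mapping above writes to a table named dtstest and marks id as the primary key by using isPk. The following is one possible shape of such a destination table, assuming that it is created in Phoenix before the migration. The column types are assumptions chosen to mirror the example source columns, and column names that collide with SQL keywords are quoted, which makes them case-sensitive in Phoenix.

    -- Hypothetical Phoenix destination table for the example mapping above.
    CREATE TABLE IF NOT EXISTS dtstest (
      id         INTEGER NOT NULL PRIMARY KEY, -- corresponds to "isPk": true in the mapping
      title      VARCHAR,
      ts         TIMESTAMP,
      "datetime" TIMESTAMP,
      "date"     DATE,
      "time"     TIME,
      b          BOOLEAN,
      f          FLOAT,
      d          DOUBLE
    );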