When you use a Hadoop cluster to perform interactive big data analytics and queries, the process may be time-consuming. To address this issue, you can synchronize data from the Hadoop cluster to an Alibaba Cloud Elasticsearch cluster for analytics and queries. Elasticsearch can respond to multiple types of queries within seconds, especially ad hoc queries. This topic describes how to synchronize data from a Hadoop cluster to an Elasticsearch cluster by using the data synchronization feature of DataWorks.

Procedure

  1. Preparations
    Create a Hadoop cluster, a DataWorks workspace, and an Elasticsearch cluster. Configure the Elasticsearch cluster.
  2. Step 1: Prepare data
    Create test data in the Hadoop cluster.
  3. Step 2: Purchase and create an exclusive resource group
    Purchase and create an exclusive resource group for Data Integration. Bind the exclusive resource group to a virtual private cloud (VPC) and the created workspace. Exclusive resource groups can be used to transmit data in a fast and stable manner.
  4. Step 3: Add data sources
    Connect the Elasticsearch cluster and the HDFS data source of the Hadoop cluster to the Data Integration service of DataWorks.
  5. Step 4: Create and run a data synchronization node
    Use the codeless user interface (UI) to create and configure a node that synchronizes data from the HDFS data source to the Elasticsearch cluster. When you configure the node, select the exclusive resource group that you created. The data synchronization node runs on the selected exclusive resource group for Data Integration and writes data to the Elasticsearch cluster.
  6. Step 5: View synchronization results
    In the Kibana console, view the synchronized data and search for data based on specific conditions.

Preparations

  1. Create a Hadoop cluster.
    Before you synchronize data, make sure that your Hadoop cluster runs normally. In this step, the Alibaba Cloud E-MapReduce (EMR) service is used to automatically create a Hadoop cluster. For more information, see Create a cluster.
    Sample configurations of the EMR Hadoop cluster: (Default configurations are used for items that are not listed. You can also modify the default configurations based on your business requirements.)
    • Cluster Type: Hadoop
    • EMR Version: EMR-3.26.3
    • Assign Public IP Address: turned on
  2. Create an Elasticsearch cluster and enable the Auto Indexing feature for the cluster.
    For more information, see Create an Alibaba Cloud Elasticsearch cluster and Access and configure an Elasticsearch cluster. Make sure that the Elasticsearch cluster resides in the same virtual private cloud (VPC), region, and zone as the EMR Hadoop cluster. In this step, an Elasticsearch V6.7.0 cluster of the Standard Edition is created. A Kibana command that applies the equivalent Auto Indexing cluster setting is shown after this list.
  3. Create a DataWorks workspace.
    Make sure that the workspace resides in the same region as the Elasticsearch cluster. For more information, see Create a workspace.
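The Auto Indexing feature corresponds to the action.auto_create_index setting of the Elasticsearch cluster. If you want to verify or apply the setting from the Kibana console rather than the Elasticsearch console, you can run the following command. This is a minimal sketch; the console switch described in the linked topic is the recommended way to change the setting.

    PUT _cluster/settings
    {
      "persistent": {
        "action.auto_create_index": "true"
      }
    }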

Step 1: Prepare data

  1. Log on to the EMR console.
  2. In the top navigation bar, select the region where your EMR Hadoop cluster resides.
  3. In the upper part of the page, click the Data Platform tab.
  4. On the Data Platform tab, click Create Project to create a data development project. In this step, set Select Resource Group to Default Resource Group.
    For more information, see Manage projects.
  5. In the Projects section, find the created project and click Edit Job in the Actions column to create a job.
    For more information, see Edit jobs. In this step, set Job Type to Hive.
  6. Create a data table and insert data into the table.
    1. In the code editor, enter an SQL statement to create a Hive table. Then, click Run.
      In this step, the following statement is used:
      CREATE TABLE IF NOT EXISTS hive_esdoc_good_sale (
        create_time timestamp,
        category string,
        brand string,
        buyer_id string,
        trans_num bigint,
        trans_amount double,
        click_cnt bigint
      )
      PARTITIONED BY (pt string)
      ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
      LINES TERMINATED BY '\n';
    2. In the Run Job dialog box, configure the parameters and click OK.
      • Set Select Resource Group to Default Resource Group.
      • Set Target Cluster to the cluster that you created.
    3. Create another job. In the code editor, enter the following SQL statement to insert test data.
      You can import data from Object Storage Service (OSS) or other data sources. You can also manually insert data. In this step, data is manually inserted.
      insert into hive_esdoc_good_sale PARTITION (pt = 1) values
        ('2018-08-21','Coat','Brand A','lilei',3,500.6,7),
        ('2018-08-22','Fresh','Brand B','lilei',1,303,8),
        ('2018-08-22','Coat','Brand C','hanmeimei',2,510,2),
        ('2018-08-22','Bathroom','Brand A','hanmeimei',1,442.5,1),
        ('2018-08-22','Fresh','Brand D','hanmeimei',2,234,3),
        ('2018-08-23','Coat','Brand B','jimmy',9,2000,7),
        ('2018-08-23','Fresh','Brand A','jimmy',5,45.1,5),
        ('2018-08-23','Coat','Brand E','jimmy',5,100.2,4),
        ('2018-08-24','Fresh','Brand G','peiqi',10,5560,7),
        ('2018-08-24','Bathroom','Brand F','peiqi',1,445.6,2),
        ('2018-08-24','Coat','Brand A','ray',3,777,3),
        ('2018-08-24','Bathroom','Brand G','ray',3,122,3),
        ('2018-08-24','Coat','Brand C','ray',1,62,7);
  7. Check whether the data is inserted.
    1. Create a job for an ad hoc query.
      For more information, see Implement ad hoc queries.
    2. Enter the following SQL statement and click Run:
      select * from hive_esdoc_good_sale where pt =1;
    3. In the lower part of the page, click the Records tab. On this tab, click Details in the Action column. The Scheduling Center tab appears.
    4. On the Scheduling Center tab, click the Execution Result tab.
      Then, you can confirm that the data is inserted into the Hive table of the Hadoop cluster and is ready for synchronization. A quick row-count check is shown after this list.
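As a quick sanity check, you can also count the rows in the partition from an ad hoc query job. Given the insert statement above, the following statement should return 13:

    select count(*) from hive_esdoc_good_sale where pt = 1;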

Step 2: Purchase and create an exclusive resource group

  1. Log on to the DataWorks console.
  2. In the top navigation bar, select the desired region. In the left-side navigation pane, click Resource Groups.
  3. Purchase exclusive resources for Data Integration. For more information, see Purchase exclusive resources for Data Integration.
    Notice The exclusive resources for Data Integration must reside in the same region as the DataWorks workspace that you created.
  4. Create an exclusive resource group for Data Integration. For more information, see Create an exclusive resource group for Data Integration.
    In this example, Resource Group Type is set to Exclusive Resource Groups for Data Integration.
  5. Find the created exclusive resource group and click Network Settings in the Actions column. The VPC Binding tab appears. On the VPC Binding tab, click Add Binding to bind the exclusive resource group to a VPC. For more information, see Configure network settings.
    Exclusive resources are deployed in the VPC where DataWorks resides. You can use DataWorks to synchronize data from the EMR Hadoop cluster to the Elasticsearch cluster only after DataWorks connects to the VPCs where the two clusters reside. In this topic, the EMR Hadoop cluster and the Elasticsearch cluster reside in the same VPC. Therefore, when you bind the exclusive resource group to a VPC, select the VPC and vSwitch to which the Elasticsearch cluster belongs.
  6. Click Change Workspace in the Actions column that corresponds to the exclusive resource group to bind it to the DataWorks workspace that you created. For more information, see Associate an exclusive resource group with a workspace.

Step 3: Add data sources

  1. Go to the Data Integration page.
    1. In the left-side navigation pane of the DataWorks console, click Workspaces.
    2. Find the workspace you created and click Data Integration in the Actions column.
  2. In the left-side navigation pane of the Data Integration page, choose Data Source > Data Sources.
  3. On the Data Source page, click New data source in the upper-right corner.
  4. In the Semi-structured storage section of the Add data source dialog box, click HDFS.
  5. In the Add HDFS data source dialog box, specify Data Source Name and DefaultFS.

    DefaultFS: If your EMR Hadoop cluster is in non-HA mode, set this parameter to hdfs://<private IP address of emr-header-1>:9000. If your EMR Hadoop cluster is in HA mode, set this parameter to hdfs://<private IP address of emr-header-1>:8020. The private IP address of emr-header-1 is used because emr-header-1 communicates with DataWorks over a VPC.
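    For example, assuming a hypothetical private IP address of 192.168.0.10 for emr-header-1, the value would look like this:

      Non-HA mode: hdfs://192.168.0.10:9000
      HA mode:     hdfs://192.168.0.10:8020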

    After the parameters are configured, you can test the connectivity between the HDFS data source and the exclusive resource group. If the test passes, Connectable appears in the Connectivity status column.
  6. Click Complete.
  7. Add an Elasticsearch data source in the same way.
    Configuration of the Elasticsearch data source:
    • Endpoint: The URL that is used to access the Elasticsearch cluster. Specify the URL in the following format: http://<internal or public endpoint of the Elasticsearch cluster>:9200. You can obtain the endpoint from the Basic Information page of the cluster. For more information, see View the basic information of a cluster.
      Notice If you use the public endpoint of the cluster, add the elastic IP address (EIP) of the exclusive resource group to the public IP address whitelist of the cluster. For more information, see Configure a public or private IP address whitelist for an Elasticsearch cluster and Add the EIP or CIDR block of an exclusive resource group for Data Integration to the whitelist of a data source.
    • Username: The username that is used to access the Elasticsearch cluster. The default username is elastic.
    • Password: The password that corresponds to the elastic username. This password is specified when you create the cluster. If you forget the password, you can reset it. For more information about the procedure and precautions for resetting the password, see Reset the access password for an Elasticsearch cluster.
    Note Configure the parameters that are not listed above based on your business requirements.
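    Before you test connectivity from DataWorks, you can verify that the endpoint and the elastic account work by sending a request to the cluster from a machine that can reach it. The following sketch uses curl with placeholder values for the endpoint and password:

      curl -u elastic:<password> "http://<endpoint-of-your-es-cluster>:9200/_cluster/health?pretty"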

Step 4: Create and run a data synchronization node

  1. On the DataStudio page of the DataWorks console, create a workflow.
    For more information, see Manage workflows.
  2. Create a batch synchronization node.
    1. In the DataStudio pane, open the newly created workflow, right-click Data Integration, and then choose Create > Batch Synchronization.
    2. In the Create Node dialog box, configure the Node Name parameter and click Commit.
  3. In the Source section of the Connections step, specify the HDFS data source and the name of the table that you created. In the Target section, specify the Elasticsearch data source, index name, and index type.
  4. In the Mappings step, configure mappings between source fields and destination fields.
  5. In the Channel step, configure the parameters.
  6. Configure properties for the node.
    In the right-side navigation pane of the configuration tab of the node, click Properties. On the Properties tab, configure properties for the node. For more information about the parameters, see Basic properties.
    Notice
    • Before you commit a node, you must configure a dependent ancestor node for the node in the Dependencies section of the Properties tab. For more information, see Instructions to configure scheduling dependencies.
    • If you want the system to periodically run a node, you must configure time properties for the node in the Schedule section of the Properties tab. The time properties include Validity Period, Scheduling Cycle, Run At, and Rerun.
    • The configuration of an auto triggered node takes effect at 00:00 of the next day.
  7. Configure the resource group that you want to use to run the synchronization node.
    1. In the right-side navigation pane of the configuration tab of the node, click the Resource Group configuration tab.
    2. Select the exclusive resource group that you created from the Exclusive Resource Groups drop-down list.
  8. Commit the node.
    1. Save the current configurations and click the Submit icon in the top toolbar.
    2. In the Commit Node dialog box, enter your comments in the Change description field.
    3. Click OK.
  9. Click the Run icon in the top toolbar to run the node.
    You can view the operational logs of the node while it runs. A successful run indicates that the data has been written to the Elasticsearch cluster.
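    If you configure the node in the code editor instead of the codeless UI, the destination side of the configuration contains an index field; this is the index name that Step 5 queries. The following fragment is only a sketch, modeled on the open source DataX Elasticsearch Writer from which Data Integration derives; the exact keys in your DataWorks version may differ, and the endpoint, password, and column types shown are placeholders:

      {
        "name": "elasticsearchwriter",
        "parameter": {
          "endpoint": "http://<endpoint-of-your-es-cluster>:9200",
          "accessId": "elastic",
          "accessKey": "<password>",
          "index": "hive_esdoc_good_sale",
          "type": "_doc",
          "cleanup": true,
          "column": [
            { "name": "create_time", "type": "date" },
            { "name": "category", "type": "text" },
            { "name": "brand", "type": "text" },
            { "name": "buyer_id", "type": "text" },
            { "name": "trans_num", "type": "long" },
            { "name": "trans_amount", "type": "double" },
            { "name": "click_cnt", "type": "long" }
          ]
        }
      }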

Step 5: View synchronization results

  1. Log on to the Kibana console of the destination Elasticsearch cluster.
    For more information, see Log on to the Kibana console.
  2. In the left-side navigation pane, click Dev Tools.
  3. On the Console tab of the page that appears, run the following command to query the synchronized data:
    POST /hive_esdoc_good_sale/_search?pretty
    {
      "query": { "match_all": {} }
    }
    Note hive_esdoc_good_sale is the index name that is specified by the index field when you configure the node by using the code editor.
    If the data is synchronized, the returned hits contain the records that you inserted into the Hive table.
  4. Run the following command to search for all documents that contain Brand A:
    POST /hive_esdoc_good_sale/_search?pretty
    {
      "query": { "match_phrase": { "brand":"Brand A" } }
    }
  5. Run the following command to sort the products of each brand by the number of clicks and determine their popularity:
    POST /hive_esdoc_good_sale/_search?pretty
    {
      "query": { "match_all": {} },
      "sort": { "click_cnt": { "order": "desc" } },
      "_source": ["category", "brand", "click_cnt"]
    }
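    Beyond searching and sorting, you can aggregate the synchronized data. The following sketch sums the clicks per brand. It assumes that the index mapping was created automatically by the Auto Indexing feature, in which case string fields such as brand are mapped as text with a keyword subfield; if your synchronization node mapped brand directly as a keyword field, aggregate on brand instead of brand.keyword.

    POST /hive_esdoc_good_sale/_search?pretty
    {
      "size": 0,
      "aggs": {
        "clicks_per_brand": {
          "terms": { "field": "brand.keyword" },
          "aggs": {
            "total_clicks": { "sum": { "field": "click_cnt" } }
          }
        }
      }
    }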

    For more information about other commands and their use scenarios, see Alibaba Cloud Elasticsearch documentation and open source Elasticsearch documentation.