DataWorks provides Hive Reader and Hive Writer for you to read data from and write data to Hive data sources. You can use the codeless user interface (UI) or code editor to configure synchronization nodes for Hive data sources.

Background information

Workspaces in standard mode support the data source isolation feature. You can add data sources separately for the development and production environments to isolate the data sources. This helps keep your data secure. For more information about the feature, see Isolate connections between the development and production environments.
If you use Object Storage Service (OSS) as the storage, you must take note of the following items:
  • The value of the defaultFS parameter must start with oss://. For example, the value can be `oss://IP:PORT` or `oss://nameservice`.
  • You must configure the parameters that are required for connecting to OSS in the advanced parameters of Hive. The following code provides an example:
    {
        "hiveConfig":{
            "fs.oss.accessKeyId":"<yourAccessKeyId>",
            "fs.oss.accessKeySecret":"<yourAccessKeySecret>",
            "fs.oss.endpoint":"oss-cn-<yourRegion>-internal.aliyuncs.com"
        }
    }
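The defaultFS rule and the hiveConfig keys above can be sketched together in code. The following is a minimal illustrative helper, not part of DataWorks or any SDK; the function name and the placeholder argument values are assumptions, and the validation simply mirrors the rule stated above that defaultFS must start with oss:// when OSS is the storage:

```python
import json

def build_oss_hive_config(access_key_id, access_key_secret, endpoint, default_fs):
    """Assemble the advanced Hive parameters for an OSS-backed data source.

    Hypothetical helper for illustration only: it builds the hiveConfig
    JSON shown above and enforces the documented oss:// prefix rule.
    """
    # Per the note above, defaultFS must start with oss:// when OSS is the storage.
    if not default_fs.startswith("oss://"):
        raise ValueError("defaultFS must start with oss:// when OSS is the storage")
    return json.dumps({
        "hiveConfig": {
            "fs.oss.accessKeyId": access_key_id,
            "fs.oss.accessKeySecret": access_key_secret,
            "fs.oss.endpoint": endpoint,
        }
    }, indent=4)

print(build_oss_hive_config(
    "<yourAccessKeyId>", "<yourAccessKeySecret>",
    "oss-cn-<yourRegion>-internal.aliyuncs.com", "oss://nameservice"))
```

The resulting string can be pasted into the advanced parameters of the Hive data source.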

Limits

  • You can use only exclusive resource groups for Data Integration to read data from or write data to Hive data sources. For more information about exclusive resource groups for Data Integration, see Create and use an exclusive resource group for Data Integration.
  • Hive data sources support only Kerberos authentication. Support for other authentication methods will be added in the future.

Add a Hive data source

  1. Go to the Data Source page.
    1. Log on to the DataWorks console.
    2. In the left-side navigation pane, click Workspaces.
    3. After you select the region where the required workspace resides, find the workspace and click Data Integration in the Actions column.
    4. In the left-side navigation pane of the Data Integration page, choose Data Source > Data Sources to go to the Data Source page.
  2. On the Data Source page, click Add data source in the upper-right corner.
  3. In the Add data source dialog box, click Hive in the Big Data Storage section.
  4. In the Add Hive data source dialog box, configure the parameters.
    You can use one of the following modes to add a Hive data source: Alibaba Cloud instance mode, Connection string mode, and Built-in Mode of CDH.
    • The following parameters must be configured if you add a Hive data source by using Alibaba Cloud instance mode.
      • Data source type: The type of the data source. Set this parameter to Alibaba Cloud instance mode.
      • Data Source Name: The name of the data source. The name can contain letters, digits, and underscores (_) and must start with a letter.
      • Data source description: The description of the data source. The description can be a maximum of 80 characters in length.
      • Environment: The environment in which the data source is used. Valid values: Development and Production.
        Note: This parameter is displayed only when the workspace is in standard mode.
      • Region: The region where the data source resides.
      • Cluster ID: The ID of the E-MapReduce (EMR) cluster. You can log on to the EMR console to obtain the ID.
      • EMR instance account ID: The ID of the Alibaba Cloud account that was used to purchase the EMR cluster. You can log on to the Alibaba Cloud Management Console with that account and view the account ID on the Security Settings page.
      • Database Name: The name of the Hive database.
      • HIVE Login: The mode that is used to connect to the Hive database. Valid values: Login with username and password and Anonymous. If you select Login with username and password, enter the username and password that are used to connect to the Hive database.
      • Hive Version: The Hive version that you want to use.
      • defaultFS: The address of the NameNode in the Active state in Hadoop Distributed File System (HDFS), in the format hdfs://<IP address of the host>:<port number>.
      • Extended parameters: The advanced parameters of Hive, such as those related to high availability. The following code provides an example:
        "hadoopConfig":{
            "dfs.nameservices": "testDfs",
            "dfs.ha.namenodes.testDfs": "namenode1,namenode2",
            "dfs.namenode.rpc-address.testDfs.namenode1": "",
            "dfs.namenode.rpc-address.testDfs.namenode2": "",
            "dfs.client.failover.proxy.provider.testDfs": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
        }
    • The following parameters must be configured if you add a Hive data source by using Connection string mode.
      • Data source type: The type of the data source. Set this parameter to Connection string mode.
      • Data Source Name: The name of the data source. The name can contain letters, digits, and underscores (_) and must start with a letter.
      • Data source description: The description of the data source. The description can be a maximum of 80 characters in length.
      • Environment: The environment in which the data source is used. Valid values: Development and Production.
        Note: This parameter is displayed only when the workspace is in standard mode.
      • HIVE JDBC URL: The Java Database Connectivity (JDBC) URL of the Hive metadatabase.
      • Database Name: The name of the Hive database. You can run the show databases command on the Hive client to query the created databases.
      • HIVE Login: The mode that is used to connect to the Hive database. Valid values: Login with username and password and Anonymous. If you select Login with username and password, enter the username and password that are used to connect to the Hive database.
      • Hive Version: The Hive version that you want to use.
      • metastoreUris: The Uniform Resource Identifiers (URIs) of the Hive metadatabase, in the format thrift://ip1:port1,thrift://ip2:port2.
      • defaultFS: The address of the NameNode in the Active state in Hadoop Distributed File System (HDFS), in the format hdfs://<IP address of the host>:<port number>.
      • Extended parameters: The advanced parameters of Hive, such as those related to high availability. The following code provides an example:
        "hadoopConfig":{
            "dfs.nameservices": "testDfs",
            "dfs.ha.namenodes.testDfs": "namenode1,namenode2",
            "dfs.namenode.rpc-address.testDfs.namenode1": "",
            "dfs.namenode.rpc-address.testDfs.namenode2": "",
            "dfs.client.failover.proxy.provider.testDfs": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
        }
      • Special Authentication Method: Specifies whether identity authentication is required. Default value: None. You can also set this parameter to Kerberos Authentication. For more information about Kerberos authentication, see Configure Kerberos authentication.
      • Keytab File: If you set Special Authentication Method to Kerberos Authentication, select the desired keytab file from the Keytab File drop-down list. If no keytab file is available, click Add Authentication File to upload one.
      • CONF File: If you set Special Authentication Method to Kerberos Authentication, select the desired CONF file from the CONF File drop-down list. If no CONF file is available, click Add Authentication File to upload one.
      • principal: The Kerberos principal. Specify this parameter in the format of Principal name/Instance name@Domain name, such as ****/hadoopclient@**.*** .

    • The following parameters must be configured if you add a Hive data source by using Built-in Mode of CDH.
      • Data source type: The type of the data source. Set this parameter to Built-in Mode of CDH.
      • Data Source Name: The name of the data source. The name can contain letters, digits, and underscores (_) and must start with a letter.
      • Data source description: The description of the data source. The description can be a maximum of 80 characters in length.
      • Environment: The environment in which the data source is used. Valid values: Development and Production.
        Note: This parameter is displayed only when the workspace is in standard mode.
      • Select CDH Cluster: The CDH cluster that you want to use.
      • Special Authentication Method: Specifies whether identity authentication is required. Default value: None. You can also set this parameter to Kerberos Authentication. For more information about Kerberos authentication, see Configure Kerberos authentication.
      • Keytab File: If you set Special Authentication Method to Kerberos Authentication, select the desired keytab file from the Keytab File drop-down list. If no keytab file is available, click Add Authentication File to upload one.
      • CONF File: If you set Special Authentication Method to Kerberos Authentication, select the desired CONF file from the CONF File drop-down list. If no CONF file is available, click Add Authentication File to upload one.
      • principal: The Kerberos principal. Specify this parameter in the format of Principal name/Instance name@Domain name, such as ****/hadoopclient@**.*** .

  5. Set Resource Group connectivity to Data Integration.
  6. Find the desired resource group in the resource group list in the lower part of the dialog box and click Test connectivity in the Actions column.
    A synchronization node can use only one type of resource group. To ensure that your synchronization nodes can run normally, you must test the connectivity of all the resource groups for Data Integration on which your synchronization nodes will run. If you want to test the connectivity of multiple resource groups for Data Integration at a time, select the resource groups and click Batch test connectivity. For more information, see Select a network connectivity solution.
    Note
    • By default, the resource group list displays only exclusive resource groups for Data Integration. To ensure the stability and performance of data synchronization, we recommend that you use exclusive resource groups for Data Integration.
    • If you want to test the network connectivity between the shared resource group or a custom resource group and the data source, click Advanced below the resource group list. In the Warning message, click Confirm. Then, all available shared and custom resource groups appear in the resource group list.
  7. After the data source passes the connectivity test, click Complete.
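A common mistake in the Extended parameters examples above is an HDFS high-availability fragment whose dfs.namenode.rpc-address.* keys reference a different nameservice name than the one declared in dfs.nameservices. The following is an illustrative sanity check, not a DataWorks API; the function name and the host:port values are assumptions made for the example:

```python
def check_ha_config(hadoop_config):
    """Sanity-check an HDFS high-availability hadoopConfig fragment.

    Illustrative helper (not a DataWorks API): verifies that every
    dfs.namenode.rpc-address.* key references the nameservice declared
    in dfs.nameservices and a NameNode listed in dfs.ha.namenodes.<ns>.
    """
    ns = hadoop_config["dfs.nameservices"]
    namenodes = set(hadoop_config["dfs.ha.namenodes." + ns].split(","))
    problems = []
    for key in hadoop_config:
        if key.startswith("dfs.namenode.rpc-address."):
            key_ns = key.split(".")[3]   # nameservice embedded in the key
            nn = key.rsplit(".", 1)[1]   # logical NameNode name
            if key_ns != ns:
                problems.append(f"{key}: nameservice {key_ns!r} != {ns!r}")
            elif nn not in namenodes:
                problems.append(f"{key}: unknown NameNode {nn!r}")
    return problems

# A fragment whose rpc-address keys use one nameservice name ("youkuDfs")
# while dfs.nameservices declares another ("testDfs") is flagged:
bad = {
    "dfs.nameservices": "testDfs",
    "dfs.ha.namenodes.testDfs": "namenode1,namenode2",
    "dfs.namenode.rpc-address.youkuDfs.namenode1": "host1:8020",
    "dfs.namenode.rpc-address.youkuDfs.namenode2": "host2:8020",
}
for problem in check_ha_config(bad):
    print(problem)
```

With such a mismatch, the failover proxy provider cannot resolve the NameNode addresses, so checking the fragment before saving the data source saves a failed connectivity test.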
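The metastoreUris and principal formats described in the tables above can be sketched as small parsers. These are hypothetical helpers for illustration; the host, port, and realm values in the demo are placeholders (9083 is only the conventional Hive metastore port), not values from this document:

```python
from urllib.parse import urlparse

def parse_metastore_uris(metastore_uris):
    """Split a metastoreUris value into (host, port) pairs.

    Expects the comma-separated format described above:
    thrift://ip1:port1,thrift://ip2:port2.
    """
    endpoints = []
    for uri in metastore_uris.split(","):
        parsed = urlparse(uri.strip())
        if parsed.scheme != "thrift" or parsed.port is None:
            raise ValueError(f"not a thrift://host:port URI: {uri!r}")
        endpoints.append((parsed.hostname, parsed.port))
    return endpoints

def parse_principal(principal):
    """Split a Kerberos principal of the form Principal name/Instance name@Domain name."""
    if "@" not in principal:
        raise ValueError("principal must contain a realm after '@'")
    name_part, realm = principal.rsplit("@", 1)
    primary, _, instance = name_part.partition("/")
    return {"primary": primary, "instance": instance or None, "realm": realm}

print(parse_metastore_uris("thrift://10.0.0.1:9083,thrift://10.0.0.2:9083"))
# [('10.0.0.1', 9083), ('10.0.0.2', 9083)]
print(parse_principal("hive/hadoopclient@EXAMPLE.COM"))
# {'primary': 'hive', 'instance': 'hadoopclient', 'realm': 'EXAMPLE.COM'}
```

Values that fail either parse would also fail when DataWorks tries to reach the metastore or authenticate with Kerberos, so this shape check is a cheap first validation.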

Obtain the Hive configuration in the EMR console

  1. Log on to the EMR console.
  2. In the top navigation bar, click Cluster Management.
  3. On the Cluster Management tab, find the cluster whose details you want to view and click Details in the Actions column. On the Cluster Overview page, view the cluster details.
  4. In the left-side navigation pane, choose Cluster Service > Hive.
  5. On the Hive page, click the Configure tab.
  6. In the Configure Filter section, enter javax in the search box and click the Search icon to view the Hive configuration in the Service Configuration section.