This topic describes how to create an E-MapReduce (EMR) Hive node. EMR Hive nodes allow you to use SQL-like statements to read data from, write data to, and manage data warehouses with large volumes of data stored in a distributed storage system. You can use EMR Hive nodes to efficiently analyze large amounts of log data.
- An EMR cluster is created. The inbound rules of the security group to which the cluster belongs include the following rules:
- Action: Allow
- Protocol type: Custom TCP
- Port range: 8898/8898
- Authorization object: 100.104.0.0/16
- An EMR compute engine instance is bound to the required workspace. The EMR option is displayed only after you bind an EMR compute engine instance to the workspace on the Workspace Management page. For more information, see Configure a workspace.
- If you integrate Hive with Ranger in EMR, you must modify whitelist configurations and restart Hive before you develop EMR nodes in DataWorks. Otherwise, the error message `Cannot modify spark.yarn.queue at runtime` or `Cannot modify SKYNET_BIZDATE at runtime` is returned when you run EMR nodes.
- You can modify the whitelist configurations by using custom parameters in EMR. Append key-value pairs to the value of a custom parameter. In this example, the custom parameter for Hive components is used. The following sample code provides an example:

  ```
  hive.security.authorization.sqlstd.confwhitelist.append=tez.*|spark.*|mapred.*|mapreduce.*|ALISA.*|SKYNET.*
  ```

  Note: In the code, `ALISA.*` and `SKYNET.*` are special configurations for DataWorks.
- After the whitelist configurations are modified, restart the Hive service to make the configurations take effect. For more information about how to restart a service, see Restart a service.
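After the whitelist takes effect, `SET` statements on the whitelisted properties no longer fail at runtime. The following is a minimal sketch of what the configuration permits in node code; the queue name `default` is a placeholder, not a value from this topic:

```sql
-- Without the whitelist, these statements fail with errors such as
-- "Cannot modify spark.yarn.queue at runtime".
SET spark.yarn.queue=default;
SET mapreduce.job.reduces=10;
```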
- Go to the DataStudio page.
- Log on to the DataWorks console.
- In the left-side navigation pane, click Workspaces.
- In the top navigation bar, select the region where your workspace resides, find the workspace, and then click Data Analytics in the Actions column.
- On the page that appears, move the pointer over the Create icon and choose EMR > EMR Hive. Alternatively, you can click the related workflow in the Data Analytics pane, right-click EMR, and then choose Create > EMR Hive.
- In the Create Node dialog box, set the Node Name and Location parameters. Note: The node name must be 1 to 128 characters in length and can contain letters, digits, underscores (_), and periods (.).
- Click Commit.
- On the node configuration tab, enter the code. Note: If multiple EMR compute engine instances are bound to the current workspace, you must select an EMR compute engine instance. If only one EMR compute engine instance is bound to the current workspace, you do not need to do so.
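The code you enter is standard Hive SQL. The following is a minimal, hypothetical example of log analysis; the table and column names are placeholders and not part of the product:

```sql
-- Hypothetical log table; adjust names, types, and partitions to your data.
CREATE TABLE IF NOT EXISTS access_log (
    ip     STRING,
    url    STRING,
    status INT
);

-- Count requests per HTTP status code.
SELECT status, COUNT(*) AS request_count
FROM access_log
GROUP BY status;
```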
- Save and commit the node. Notice: You must set the Rerun and Parent Nodes parameters before you can commit the node. In a workspace in standard mode, you must click Deploy in the upper-right corner after you commit the node. For more information, see Deploy nodes.
- Click the Save icon in the toolbar to save the node.
- Click the Commit icon in the toolbar.
- In the Commit Node dialog box, enter your comments in the Change description field.
- Click OK.
- Test the node. For more information, see View auto triggered nodes.