Migrating a Self-Built Hive Data Warehouse Across Versions

Best Practices Overview

Application Scenario

Customers build their own Hadoop clusters in an IDC or public cloud environment, store the data centrally in HDFS, and use Hive to perform common ETL tasks. After deciding to move to the cloud, the customer migrates the data from the self-built Hadoop cluster to a self-built Hadoop cluster on Alibaba Cloud or to EMR.

Technology Architecture

This practice is based on the technical architecture and main process shown in the figure below.

Solution Advantages

• Ease of use
Simply select the required ECS instance type (CPU and memory) and disks, then choose the required software for automatic deployment.
• Cost-effectiveness
You can create clusters on demand: release a cluster after its offline jobs finish, and dynamically add nodes when needed.
• Deep integration
E-MapReduce is deeply integrated with other Alibaba Cloud products (such as OSS, MNS, RDS, and MaxCompute), and supports using these products as input sources or output destinations of the Hadoop/Spark computing engines.
• Security
E-MapReduce integrates with the Alibaba Cloud RAM permission management system, and isolates service permissions through primary accounts and RAM sub-accounts.
• Reliability
Using Alibaba Cloud RDS to store Hive metadata improves data reliability and service availability, and eliminates the need for customers to operate and maintain their own MySQL databases.

Prerequisites

Before proceeding with this article, complete the following preparations:

• Register an Alibaba Cloud account and complete real-name verification. You can log in to the Alibaba Cloud console and go to the real-name verification page to check whether verification has been completed.
• Maintain an Alibaba Cloud account balance of more than 100 CNY. You can log in to the Alibaba Cloud console and go to the Account Overview page to check the balance.
• Own a domain name that has completed ICP filing.
• Activate services such as ECS, OSS, EMR, RDS, DTS, and VPN Gateway.

Resource Planning Description

• This solution is intended only as a practical exercise. Adjust it according to the actual architecture of your business system before executing it in a production environment.
• All cloud product specifications purchased in this solution are for demonstration purposes. Purchase products and services of the appropriate specifications according to actual business needs.
• This solution focuses on demonstrating migration ideas and methods; the offline IDC environment is simulated with the component configuration described below.

1. Basic environment construction
In this practice, a relatively complete environment will be built according to the technical architecture diagram, including an Apache log generation server, a Kafka queue, a Hadoop cluster, and a Hive + MySQL metadata database.

1.1. Build the Hadoop + Kafka + log generator environment
Refer to Chapter 1, "Build a self-built Hadoop cluster environment", of the best practice Migrating Self-Built Hadoop Data to Alibaba Cloud EMR to complete the basic environment setup for this practice.

1.2. Create Hive metadata database
In this practice, a MySQL database will be installed on an ECS instance to store Hive metadata.
Step 1 Log in to the ECS console and select the region East China 2 (Shanghai).
Step 2 On the instance list page, click Create Instance in the upper right corner.
Step 3 In the custom purchase mode, complete the configurations.
Basic configuration:

Billing method: this practice uses preemptible (spot) instances.
Note: Preemptible instances are chosen in this demonstration to reduce your cost; in an actual production environment, choose the ECS billing method that suits your business scenario. For details, refer to the preemptible instance documentation.
Region and zone: East China 2 (Shanghai), Zone F

Instance
– Instance type: ecs.g6.large (2 vCPUs, 8 GiB memory, general-purpose g6)
– Maximum price per instance type: to create a preemptible instance, you must set a maximum hourly price for the instance type.

a) Choose to set a maximum price per instance.
b) Click View Historical Prices.
c) The historical price chart of preemptible instances shows that the current market price in East China 2 Zone F is ¥0.034 per instance per hour, so we set the maximum price for a single instance to ¥0.04 per instance per hour, slightly above the current market price. When you perform the operation, refer to the real-time price displayed on the page.

Number of purchased instances: 1

Image:
a) Select Marketplace Image.
b) Click to select from the image marketplace (including operating systems).
c) Enter magento and click Search.
d) Select Magento open source e-commerce system (LAMP | with demo data) and click Use. This image includes a MySQL database; the default password of the root user is 123456.

Storage: System disk
Ultra disk (high-efficiency cloud disk), 80 GiB

Step 4 After the configuration is complete, click Next: Network and Security Group.
Step 5 On the Network and Security Group page, refer to the table below to configure related parameters.

After the configuration is complete, click Next: System Configuration.

Step 6 On the system configuration page, refer to the table below to configure related parameters.

After the configuration is complete, click Confirm Order.

Step 7 On the order confirmation page, confirm the parameters, read and agree to the Terms of Service for Cloud Server ECS and the Terms of Use for Marketplace Images by selecting the check boxes, and click Create Instance.

Step 8 After the creation task is submitted successfully, click Management Console to go to the ECS instance list page and view the instance details.

At this point, the ECS instance used in this practice to simulate the customer's self-built environment has been created.

Step 9 Log in to the MySQL console through the following URL:

Step 10 Create an account that Hive will use to connect to MySQL.
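Steps 9 and 10 can be sketched with the MySQL command-line client instead of the web console. The account name hive, the password Hive1234, and the metadata database hive122db are assumptions for this practice; the root password 123456 comes from the image description above.

```shell
# Connect with the image's default root password (123456, per the image notes above).
# The account name, password, and database name below are hypothetical -- choose your own.
mysql -uroot -p123456 <<'SQL'
CREATE DATABASE IF NOT EXISTS hive122db DEFAULT CHARACTER SET latin1;
CREATE USER 'hive'@'%' IDENTIFIED BY 'Hive1234';
GRANT ALL PRIVILEGES ON hive122db.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL
```

latin1 is chosen because Hive 1.x metastore schemas are commonly created with that character set; adjust the host pattern (`'%'`) to restrict where connections may come from.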

After completion, the interface is displayed as shown in the figure below:

1.3. Install and configure Hive
In this practice, Hive 1.2.2 is used as the source version.

Step 1 Log in to the hadoop-master node through SSH, and run the following commands to download and decompress the Hive software package.
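Step 1 can be sketched as follows; the Apache archive URL and the /usr/local install prefix are assumptions.

```shell
# Download and unpack Hive 1.2.2 on the hadoop-master node.
# /usr/local is an assumed install prefix -- adjust to your environment.
cd /usr/local
wget https://archive.apache.org/dist/hive/hive-1.2.2/apache-hive-1.2.2-bin.tar.gz
tar -zxf apache-hive-1.2.2-bin.tar.gz
```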

Step 2 Configure environment variables.
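A minimal sketch of Step 2, assuming the install path from Step 1:

```shell
# Set Hive environment variables for the current shell.
# The install path is an assumption based on the download step above.
export HIVE_HOME=/usr/local/apache-hive-1.2.2-bin
export PATH="$PATH:$HIVE_HOME/bin"
# Persist the variables for future logins:
echo 'export HIVE_HOME=/usr/local/apache-hive-1.2.2-bin' >> ~/.bashrc
echo 'export PATH=$PATH:$HIVE_HOME/bin' >> ~/.bashrc
```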

Step 3 Generate the hive-site.xml configuration file.

Note:

(1) Replace 192.168.100.140 (shown in red) with the VPC IP address of the MySQL instance in your environment.
(2) hive122db is the MySQL database used to store Hive metadata.
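Per the notes above, a minimal hive-site.xml can be generated as below. The MySQL address 192.168.100.140 and database hive122db come from this practice; the connection user name and password are assumptions.

```shell
# Generate a minimal hive-site.xml pointing the metastore at MySQL.
HIVE_CONF_DIR="${HIVE_HOME:-.}/conf"   # typically $HIVE_HOME/conf
mkdir -p "$HIVE_CONF_DIR"
cat > "$HIVE_CONF_DIR/hive-site.xml" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.100.140:3306/hive122db?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>Hive1234</value>
  </property>
</configuration>
EOF
```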

Step 4 Generate and configure the hive-env.sh startup file.

Add the following content at the end of the file, save and exit:
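A sketch of Step 4, generating hive-env.sh from the shipped template and appending the variables; both paths are assumptions and should match your Hadoop and Hive installations.

```shell
HIVE_CONF_DIR="${HIVE_HOME:-.}/conf"   # typically $HIVE_HOME/conf
mkdir -p "$HIVE_CONF_DIR"
# Generate hive-env.sh from the template if it is present.
if [ -f "$HIVE_CONF_DIR/hive-env.sh.template" ]; then
    cp "$HIVE_CONF_DIR/hive-env.sh.template" "$HIVE_CONF_DIR/hive-env.sh"
fi
# Append the variables; both paths below are assumptions -- adjust to your installation.
cat >> "$HIVE_CONF_DIR/hive-env.sh" <<'EOF'
export HADOOP_HOME=/usr/local/hadoop-2.7.2
export HIVE_CONF_DIR=/usr/local/apache-hive-1.2.2-bin/conf
EOF
```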

Step 5 Download the MySQL driver.

Download the MySQL Connector library with the following command:
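As a sketch: place the MySQL JDBC driver on Hive's classpath. The Connector/J version 5.1.49 and the Maven Central URL are assumptions; any Connector/J 5.1.x release compatible with your MySQL server should work.

```shell
# Download the MySQL JDBC driver into Hive's lib directory (version is an assumption).
cd "${HIVE_HOME:-/usr/local/apache-hive-1.2.2-bin}/lib"
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.49/mysql-connector-java-5.1.49.jar
```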

Step 6 Initialize the source database.
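Step 6 is typically done with the schematool utility that ships with Hive; the install path is an assumption.

```shell
# Create the Hive metastore tables in the MySQL database configured in hive-site.xml.
"${HIVE_HOME:-/usr/local/apache-hive-1.2.2-bin}/bin/schematool" -initSchema -dbType mysql
```

On success, schematool reports that schema initialization is complete.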

When the information shown in the figure below appears on the interface, it means that the initialization is complete.

Step 7 Start Hive to enter the interactive HQL command line.
hive

Note: In the Hive command-line interface, run quit; to exit.
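As a quick sanity check (not part of the original steps), Hive can also run a single HQL statement non-interactively:

```shell
# Runs one statement and exits; assumes hive is on PATH (see Step 2).
hive -e "SHOW DATABASES;"
```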
