This topic describes how to develop a Pig job in an E-MapReduce cluster.
Use OSS in Pig
When you use an OSS path, specify it in the following format:
oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/${path}
Parameters:
- ${AccessKeyId}: the AccessKey ID of your account.
- ${AccessKeySecret}: the AccessKey secret that corresponds to the AccessKey ID.
- ${bucket}: a bucket that the AccessKey ID is authorized to access.
- ${endpoint}: the endpoint used to access OSS, which is determined by the region of your cluster. The OSS bucket must reside in the same region as the cluster. For the specific values, see OSS Endpoint.
- ${path}: the path of the object in the bucket.
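For example, a LOAD statement with the placeholders filled in might look like the following minimal sketch. The AccessKey pair is masked, and the bucket name emr and the endpoint oss-cn-hangzhou-internal.aliyuncs.com are hypothetical values chosen for illustration; substitute your own.
-- Hypothetical values for illustration only; replace the AccessKey pair, bucket, and endpoint with your own.
raw = LOAD 'oss://LTAI****:secret****@emr.oss-cn-hangzhou-internal.aliyuncs.com/data/excite.log.bz2' USING PigStorage('\t') AS (user, time, query);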
Procedure
The following example uses script1-hadoop.pig, which ships with Pig. Upload the tutorial.jar and excite.log.bz2 files included with Pig to OSS. Assume the upload paths are oss://emr/jars/tutorial.jar and oss://emr/data/excite.log.bz2.
Perform the following steps:
- Write the script.
Based on the OSS paths uploaded in the preparation above, modify the JAR file path and the input and output paths in the script, as shown below. Note that OSS paths are specified in the form oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/object/path.
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

-- Query Phrase Popularity (Hadoop cluster)

-- This script processes a search query log file from the Excite search engine
-- and finds search phrases that occur with particularly high frequency during certain times of the day.

-- Register the tutorial JAR file so that the included UDFs can be called in the script.
REGISTER oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/jars/tutorial.jar;

-- Use the PigStorage function to load the excite log file into the 'raw' bag as an array of records.
-- Input: (user,time,query)
raw = LOAD 'oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/data/excite.log.bz2' USING PigStorage('\t') AS (user, time, query);

-- Call the NonURLDetector UDF to remove records if the query field is empty or a URL.
clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);

-- Call the ToLower UDF to change the query field to lowercase.
clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;

-- Because the log file only contains queries for a single day, we are only interested in the hour.
-- The excite query log timestamp format is YYMMDDHHMMSS.
-- Call the ExtractHour UDF to extract the hour (HH) from the time field.
houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;

-- Call the NGramGenerator UDF to compose the n-grams of the query.
ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;

-- Use the DISTINCT command to get the unique n-grams for all records.
ngramed2 = DISTINCT ngramed1;

-- Use the GROUP command to group records by n-gram and hour.
hour_frequency1 = GROUP ngramed2 BY (ngram, hour);

-- Use the COUNT function to get the count (occurrences) of each n-gram.
hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;

-- Use the GROUP command to group records by n-gram only.
-- Each group now corresponds to a distinct n-gram and has the count for each hour.
uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;

-- For each group, identify the hour in which this n-gram is used with a particularly high frequency.
-- Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.
uniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));

-- Use the FOREACH-GENERATE command to assign names to the fields.
uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;

-- Use the FILTER command to remove all records with a score less than or equal to 2.0.
filtered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;

-- Use the ORDER command to sort the remaining records by hour and score.
ordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;

-- Use the PigStorage function to store the results.
-- Output: (hour, n-gram, score, count, average_counts_among_all_hours)
STORE ordered_uniq_frequency INTO 'oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/data/script1-hadoop-results' USING PigStorage();
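After the job finishes, you can spot-check the output with a short Pig snippet such as the following sketch. It is not part of the tutorial script; the output path matches the STORE statement above, and the field names in the AS clause follow the output schema documented there.
-- Minimal sketch for spot-checking results; path and schema match the STORE statement above.
results = LOAD 'oss://${AccessKeyId}:${AccessKeySecret}@${bucket}.${endpoint}/data/script1-hadoop-results' USING PigStorage() AS (hour, ngram, score, count, mean);
sample = LIMIT results 10;
DUMP sample;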
- Create a job.
Store the script written in the previous step in OSS. Assume the storage path is oss://emr/jars/script1-hadoop.pig. Then create a Pig job in E-MapReduce that references this script.
- Create an execution plan and run it.
Create an execution plan in E-MapReduce, add the Pig job created in the previous step to it, and select Run as the policy. The script1-hadoop job then runs on the selected cluster.