This topic describes how to configure a Pig job.

Prerequisites

  • A project is created. For more information, see Manage projects.
  • A Pig script is prepared. Sample code:
     /*
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
     -- Query Phrase Popularity (Hadoop cluster)
     -- This script processes a search query log file from the Excite search engine and finds search phrases that occur with particularly high frequency during certain times of the day.
     -- Register the tutorial JAR file so that the included UDFs can be called in the script.
     REGISTER oss://emr/checklist/jars/chengtao/pig/tutorial.jar;
     -- Use the PigStorage function to load the excite log file into the "raw" bag as an array of records.
     -- Input: (user,time,query) 
     raw = LOAD 'oss://emr/checklist/data/chengtao/pig/excite.log.bz2' USING PigStorage('\t') AS (user, time, query);
     -- Call the NonURLDetector UDF to remove records if the query field is empty or a URL. 
     clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
     -- Call the ToLower UDF to change the query field to lowercase. 
     clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;
     -- Because the log file only contains queries for a single day, we are only interested in the hour.
     -- The excite query log timestamp format is YYMMDDHHMMSS.
     -- Call the ExtractHour UDF to extract the hour (HH) from the time field.
     houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;
     -- Call the NGramGenerator UDF to compose the n-grams of the query.
     ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;
     -- Use the DISTINCT command to get the unique n-grams for all records.
     ngramed2 = DISTINCT ngramed1;
     -- Use the GROUP command to group records by n-gram and hour.
     hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
     -- Use the COUNT function to get the count (occurrences) of each n-gram.
     hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
     -- Use the GROUP command to group records by n-gram only.
     -- Each group now corresponds to a distinct n-gram and has the count for each hour.
     uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;
     -- For each group, identify the hour in which this n-gram is used with a particularly high frequency.
     -- Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.
     uniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));
     -- Use the FOREACH-GENERATE command to assign names to the fields.
     uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;
     -- Use the FILTER command to remove all records with a score less than or equal to 2.0.
     filtered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;
     -- Use the ORDER command to sort the remaining records by hour and score.
     ordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;
     -- Use the PigStorage function to store the results.
     -- Output: (hour, n-gram, score, count, average_counts_among_all_hours)
     STORE ordered_uniq_frequency INTO 'oss://emr/checklist/data/chengtao/pig/script1-hadoop-results' USING PigStorage();
  • The script is saved as script1-hadoop-oss.pig and uploaded to a directory in OSS, such as oss://path/to/script1-hadoop-oss.pig. A sample upload command is shown after this list.
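  The following commands are a minimal sketch of how you might syntax-check and upload the script. They assume a local Pig client and a configured ossutil CLI, and they reuse the placeholder OSS path from the previous item:
     # Syntax-check the script without running it (requires a local Pig client).
     pig -check script1-hadoop-oss.pig
     # Upload the script to OSS. Replace the placeholder path with your own bucket and directory.
     ossutil cp script1-hadoop-oss.pig oss://path/to/script1-hadoop-oss.pig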

Procedure

  1. Go to the Data Platform tab.
    1. Log on to the Alibaba Cloud EMR console by using your Alibaba Cloud account.
    2. In the top navigation bar, select the region where your cluster resides and select a resource group based on your business requirements.
    3. Click the Data Platform tab.
  2. In the Projects section, find your project and click Edit Job in the Actions column.
  3. Create a Pig job.
    1. In the Edit Job pane on the left, right-click the folder in which you want to create the job and select Create Job.
    2. In the Create Job dialog box, specify Name and Description, and select Pig from the Job Type drop-down list.
      This option indicates that a Pig job will be created. You can use the following command syntax to submit a Pig job:
      pig [user provided parameters]
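      For illustration only, Pig's -param option can substitute values into a script at submission time. The parameter name and paths below are hypothetical; the sample script above hardcodes its paths instead:
      pig -x mapreduce -param output=oss://path/to/results ossref://path/to/script.pig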
    3. Click OK.
  4. Edit job content.
    1. In the Content field, specify the command-line parameters required to submit the job.
      For example, to use the Pig script uploaded to OSS, enter the following command:
      -x mapreduce ossref://emr/checklist/jars/chengtao/pig/script1-hadoop-oss.pig
      Note: Click + Enter an OSS path in the lower part of the page. In the OSS File dialog box, set File Prefix to OSSREF and specify File Path. The system automatically completes the path of the Pig script in OSS.
    2. Click Save.
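
After the job succeeds, you can inspect the output written by the STORE statement. The following commands are a minimal sketch: they assume a configured ossutil CLI, and result file names such as part-r-00000 are typical of MapReduce output but can vary:
  # List the result files written by the STORE statement.
  ossutil ls oss://emr/checklist/data/chengtao/pig/script1-hadoop-results/
  # Download and inspect one result file.
  ossutil cp oss://emr/checklist/data/chengtao/pig/script1-hadoop-results/part-r-00000 .
  # Each line contains hour, ngram, score, count, and mean, tab-separated by PigStorage.
  head part-r-00000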