Two Hadoop tool components: Pig and Hive

2014-10-05 19:05:42 | Category: big-data


About Pig

Pig is a programming language that simplifies the common tasks of working with Hadoop: it can load data, express transformations on that data, and store the final results. Pig's built-in operations make semi-structured data (such as log files) meaningful, and the language is extensible with custom data types and functions written in Java to support further data transformations.
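A minimal, hypothetical sketch of that load/transform/store cycle (the file name access.log and its fields are assumptions for illustration, not tutorial data):

    -- Load a tab-delimited, semi-structured log file (hypothetical schema).
    logs = LOAD 'access.log' USING PigStorage('\t') AS (user:chararray, url:chararray);
    -- Transform: drop records with an empty url field.
    hits = FILTER logs BY url IS NOT NULL;
    -- Store the final result.
    STORE hits INTO '/tmp/clean-logs' USING PigStorage();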

About Hive

Hive plays the role of a data warehouse within Hadoop: it superimposes structure on data in HDFS and allows that data to be queried with a SQL-like syntax. As with Pig, Hive's core capabilities are extensible.

Hive is better suited to data-warehouse tasks; it is used mainly with static structures and for work that requires frequent analysis. Its closeness to SQL makes it an ideal point of integration between Hadoop and other BI tools. Pig, by contrast, gives developers more flexibility over large data sets and allows concise scripts for transforming data flows to be embedded in larger applications. Pig is relatively lightweight compared with Hive; its main advantage is that it dramatically cuts the amount of code needed compared with using the Hadoop Java APIs directly.
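A sketch of that SQL-like style in HiveQL (the table and column names here are hypothetical):

    -- Superimpose a table structure on files already sitting in HDFS.
    CREATE EXTERNAL TABLE page_views (user_id STRING, url STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      LOCATION '/data/page_views';

    -- Then query it like an ordinary SQL table.
    SELECT url, COUNT(*) AS hits FROM page_views GROUP BY url;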

The essence of Pig and Hive

A Pig Latin script, once translated, becomes a MapReduce job: the result sets produced in parallel by MapReduce across multiple threads, processes, or independent machines are then classified and summarized.

The Map() and Reduce() functions run in parallel; many instances execute concurrently, even on different machines at different moments, yet they all belong to one set of tasks. When all processing completes, the results are sorted, formatted, and saved to a file. Pig uses MapReduce to split a computation into two stages: the first breaks the work into small pieces and runs them on every node that stores part of the data, spreading the computational load; the second aggregates the results of the first. This yields very high throughput: with little code and effort you can drive thousands of machines in parallel, make full use of their resources, and remove runtime bottlenecks.

Pig's greatest contribution is a scripting layer over the MapReduce framework. The scripts, which resemble the SQL statements we already know, are written in a language called Pig Latin; with them we can sort, filter, sum, group (GROUP BY), and join the loaded data. Pig also lets users define their own functions to operate on data sets, the famous UDFs (user-defined functions). A minimal sketch of these ideas appears below.

Pig shines for quick ad-hoc scripts, for example when the boss asks you for a data pull that has to be ready in half an hour.
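As a minimal sketch (not from the tutorial) of how a few lines of Pig Latin become a two-stage MapReduce job, assuming a hypothetical tab-delimited input file words.txt with a single word field:

    -- Map stage: load and parse the input file (hypothetical schema).
    words = LOAD 'words.txt' USING PigStorage('\t') AS (word:chararray);

    -- GROUP marks the map/reduce boundary: rows with the same key are shuffled to one reducer.
    grouped = GROUP words BY word;

    -- Reduce stage: COUNT emits one result per distinct key.
    counts = FOREACH grouped GENERATE group AS word, COUNT(words) AS cnt;

    STORE counts INTO '/tmp/word-counts' USING PigStorage();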


***************

Pig Tutorial

***************

        The Pig tutorial shows you how to run Pig scripts using Pig's local mode and mapreduce mode (see Execution Modes).

To get started, do the following preliminary tasks:

  1. Make sure the JAVA_HOME environment variable is set to the root of your Java installation.
  2. Make sure your PATH includes bin/pig (this enables you to run the tutorials using the "pig" command).
    $ export PATH=/<my-path-to-pig>/pig-0.9.0/bin:$PATH 
    
  3. Set the PIG_HOME environment variable:
    $ export PIG_HOME=/<my-path-to-pig>/pig-0.9.0 
    
  4. Create the pigtutorial.tar.gz file:
    • Move to the Pig tutorial directory (.../pig-0.9.0/tutorial).
    • Edit the build.xml file in the tutorial directory.
      Change this:   <property name="pigjar" value="../pig.jar" />
      To this:       <property name="pigjar" value="../pig-0.9.0-core.jar" />
      
    • Run the "ant" command from the tutorial directory. This will create the pigtutorial.tar.gz file.
  5. Copy the pigtutorial.tar.gz file from the Pig tutorial directory to your local directory.
  6. Unzip the pigtutorial.tar.gz file.
    $ tar -xzf pigtutorial.tar.gz
    
  7. A new directory named pigtmp is created. This directory contains the Pig Tutorial Files. These files work with Hadoop 0.20.2 and include everything you need to run Pig Script 1 and Pig Script 2.

Running the Pig Scripts in Local Mode

To run the Pig scripts in local mode, do the following:

  1. Move to the pigtmp directory.
  2. Execute the following command (using either script1-local.pig or script2-local.pig).
    $ pig -x local script1-local.pig
    
  3. Review the result files (the part-r-00000 files) in the script's output directory.

    The output may contain a few Hadoop warnings which can be ignored:

    2010-04-08 12:55:33,642 [main] INFO  org.apache.hadoop.metrics.jvm.JvmMetrics 
    - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
    

Running the Pig Scripts in Mapreduce Mode

To run the Pig scripts in mapreduce mode, do the following:

  1. Move to the pigtmp directory.
  2. Copy the excite.log.bz2 file from the pigtmp directory to the HDFS directory.
    $ hadoop fs -copyFromLocal excite.log.bz2 .
    
  3. Set the PIG_CLASSPATH environment variable to the location of the cluster configuration directory (the directory that contains the core-site.xml, hdfs-site.xml and mapred-site.xml files):
    export PIG_CLASSPATH=/mycluster/conf
    

    Note: PIG_CLASSPATH can also be used to add any other third-party dependencies or resource files a Pig script may require. If the added entries need to take the highest precedence in the Pig JVM's classpath order, also set the environment variable PIG_USER_CLASSPATH_FIRST to any value, such as 'true' (unset it to disable), as shown below.
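    For example (the jar path below is hypothetical):

    export PIG_CLASSPATH=/mycluster/conf:/path/to/my-udfs.jar
    export PIG_USER_CLASSPATH_FIRST=true    # the added entries now take precedence
    unset PIG_USER_CLASSPATH_FIRST          # restore the default ordering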

  4. Set the HADOOP_CONF_DIR environment variable to the location of the cluster configuration directory:
    export HADOOP_CONF_DIR=/mycluster/conf
    
  5. Execute the following command (using either script1-hadoop.pig or script2-hadoop.pig):
    $ pig script1-hadoop.pig
    
  6. Review the result files, located in the script1-hadoop-results or script2-hadoop-results HDFS directory:
    $ hadoop fs -ls script1-hadoop-results
    $ hadoop fs -cat 'script1-hadoop-results/*' | less
    

Pig Tutorial Files

The contents of the Pig tutorial file (pigtutorial.tar.gz) are described here.

File                    Description
pig.jar                 Pig JAR file
tutorial.jar            User defined functions (UDFs) and Java classes
script1-local.pig       Pig Script 1, Query Phrase Popularity (local mode)
script1-hadoop.pig      Pig Script 1, Query Phrase Popularity (mapreduce mode)
script2-local.pig       Pig Script 2, Temporal Query Phrase Popularity (local mode)
script2-hadoop.pig      Pig Script 2, Temporal Query Phrase Popularity (mapreduce mode)
excite-small.log        Log file, Excite search engine (local mode)
excite.log.bz2          Log file, Excite search engine (mapreduce mode)

The user defined functions (UDFs) are described here.

UDF                 Description
ExtractHour         Extracts the hour from the record.
NGramGenerator      Composes n-grams from the set of words.
NonURLDetector      Removes the record if the query field is empty or a URL.
ScoreGenerator      Calculates a "popularity" score for the n-gram.
ToLower             Changes the query field to lowercase.
TutorialUtil        Divides the query string into a set of words.

Pig Script 1: Query Phrase Popularity

The Query Phrase Popularity script (script1-local.pig or script1-hadoop.pig) processes a search query log file from the Excite search engine and finds search phrases that occur with particularly high frequency during certain times of the day.

The script is shown here:

  • Register the tutorial JAR file so that the included UDFs can be called in the script.

REGISTER ./tutorial.jar; 
  • Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the “raw” bag as an array of records with the fields user, time, and query.

raw = LOAD 'excite.log' USING PigStorage('\t') AS (user, time, query);
  • Call the NonURLDetector UDF to remove records if the query field is empty or a URL.

clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
  • Call the ToLower UDF to change the query field to lowercase.

clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;
  • Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour (HH) from the time field.

houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;
  • Call the NGramGenerator UDF to compose the n-grams of the query.

ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;
  • Use the DISTINCT operator to get the unique n-grams for all records.

ngramed2 = DISTINCT ngramed1;
  • Use the GROUP operator to group records by n-gram and hour.

hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
  • Use the COUNT function to get the count (occurrences) of each n-gram.

hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
  • Use the GROUP operator to group records by n-gram only. Each group now corresponds to a distinct n-gram and has the count for each hour.

uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;
  • For each group, identify the hour in which this n-gram is used with a particularly high frequency. Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.

uniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));
  • Use the FOREACH-GENERATE operator to assign names to the fields.

uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;
  • Use the FILTER operator to remove all records with a score less than or equal to 2.0.

filtered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;
  • Use the ORDER operator to sort the remaining records by hour and score.

ordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;
  • Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: hour, ngram, score, count, mean.

STORE ordered_uniq_frequency INTO '/tmp/tutorial-results' USING PigStorage(); 

Pig Script 2: Temporal Query Phrase Popularity

The Temporal Query Phrase Popularity script (script2-local.pig or script2-hadoop.pig) processes a search query log file from the Excite search engine and compares the frequency of occurrence of search phrases across two time periods separated by twelve hours.

The script is shown here:

  • Register the tutorial JAR file so that the user defined functions (UDFs) can be called in the script.

REGISTER ./tutorial.jar;
  • Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the “raw” bag as an array of records with the fields user, time, and query.

raw = LOAD 'excite.log' USING PigStorage('\t') AS (user, time, query);
  • Call the NonURLDetector UDF to remove records if the query field is empty or a URL.

clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
  • Call the ToLower UDF to change the query field to lowercase.

clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;
  • Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour from the time field.

houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;
  • Call the NGramGenerator UDF to compose the n-grams of the query.

ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;
  • Use the DISTINCT operator to get the unique n-grams for all records.

ngramed2 = DISTINCT ngramed1;
  • Use the GROUP operator to group the records by n-gram and hour.

hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
  • Use the COUNT function to get the count (occurrences) of each n-gram.

hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
  • Use the FOREACH-GENERATE operator to assign names to the fields.

hour_frequency3 = FOREACH hour_frequency2 GENERATE $0 as ngram, $1 as hour, $2 as count;
  • Use the FILTER operator to get the n-grams for hour '00'.

hour00 = FILTER hour_frequency2 BY hour eq '00';
  • Use the FILTER operator to get the n-grams for hour '12'.

hour12 = FILTER hour_frequency3 BY hour eq '12';
  • Use the JOIN operator to get the n-grams that appear in both hours.

same = JOIN hour00 BY $0, hour12 BY $0;
  • Use the FOREACH-GENERATE operator to record their frequency.

same1 = FOREACH same GENERATE hour_frequency2::hour00::group::ngram as ngram, $2 as count00, $5 as count12;
  • Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: ngram, count00, count12.

STORE same1 INTO '/tmp/tutorial-join-results' USING PigStorage();