
I have the following scenario-

Pig version used: 0.70

Sample HDFS directory structure:

/user/training/test/20100810/<data files>
/user/training/test/20100811/<data files>
/user/training/test/20100812/<data files>
/user/training/test/20100813/<data files>
/user/training/test/20100814/<data files>

As you can see in the paths listed above, one of the directory names is a date stamp.

Problem: I want to load files from a date range say from 20100810 to 20100813.

I can pass the 'from' and 'to' of the date range as parameters to the Pig script, but how do I make use of these parameters in the LOAD statement? I am able to do the following:

temp = LOAD '/user/training/test/{20100810,20100811,20100812}' USING SomeLoader() AS (...);

The following works with Hadoop:

hadoop fs -ls /user/training/test/{20100810..20100813}

But it fails when I try the same with LOAD inside the Pig script. How do I make use of the parameters passed to the Pig script to load data from a date range?

Error log follows:

Backend error message during job submission
-------------------------------------------
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input splits for: hdfs://<ServerName>.com/user/training/test/{20100810..20100813}
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:269)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:858)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:875)
        at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:793)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:752)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1062)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:752)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:726)
        at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
        at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
        at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279)
        at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input Pattern hdfs://<ServerName>.com/user/training/test/{20100810..20100813} matches 0 files
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:258)
        ... 14 more



Pig Stack Trace
---------------
ERROR 2997: Unable to recreate exception from backend error: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input splits for: hdfs://<ServerName>.com/user/training/test/{20100810..20100813}

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias test
        at org.apache.pig.PigServer.openIterator(PigServer.java:521)
        at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:544)
        at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:241)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:162)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:138)
        at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:75)
        at org.apache.pig.Main.main(Main.java:357)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2997: Unable to recreate exception from backend error: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input splits for: hdfs://<ServerName>.com/user/training/test/{20100810..20100813}
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getStats(Launcher.java:169)

Do I need to make use of a higher language like Python to capture all date stamps in the range and pass them to LOAD as a comma separated list?

cheers

11 Answers

As zjffdu said, the path expansion is done by the shell. One common way to solve your problem is to simply use Pig parameters (which is also a good way to make your script more reusable):

shell:

pig -f script.pig -param input=/user/training/test/{20100810..20100812}

script.pig:

temp = LOAD '$input' USING SomeLoader() AS (...);
answered 2010-09-24T18:07:41.197

Pig uses Hadoop's file glob utility, not the shell's glob utility, to process filename patterns. Hadoop's is documented here. As you can see, Hadoop does not support the '..' operator for ranges. It seems you have two options: either write out the {date1,date2,date3,...,dateN} list by hand, which is probably the way to go if this is a rare use case, or write a wrapper script that generates that list for you. Building such a list from a date range is a trivial task for your scripting language of choice. For my application, I've gone with the generated-list route, and it's working fine (CDH3 distribution).
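For instance, the wrapper-script route might look like the following in Python. This is a minimal sketch assuming day stamps in YYYYMMDD form; the function name `date_range_list` is illustrative, not from the original post:

```python
from datetime import date, timedelta

def date_range_list(start, end, fmt="%Y%m%d"):
    """Return a comma-separated list of day stamps from start to end, inclusive."""
    days = (end - start).days
    return ",".join((start + timedelta(days=i)).strftime(fmt) for i in range(days + 1))

# Wrap the list in braces to form the alternation syntax Hadoop globbing accepts:
print("{" + date_range_list(date(2010, 8, 10), date(2010, 8, 13)) + "}")
# → {20100810,20100811,20100812,20100813}
```

The printed value can then be handed to the Pig script with `-param`, as in the first answer.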

answered 2011-02-16T15:57:10.000

I ran across this answer when I was having trouble trying to create a file glob in a script and then pass it as a parameter into a Pig script.

None of the current answers applied to my situation, but I did find a general answer that might be helpful here.

In my case, the shell expansion was happening and then being passed into the script, which understandably caused real problems for the Pig parser.

So simply surrounding the glob in double quotes protects it from being expanded by the shell and passes it as-is into the command.

WON'T WORK:

$ pig -f my-pig-file.pig -p INPUTFILEMASK='/logs/file{01,02,06}.log' -p OTHERPARAM=6

WILL WORK

$ pig -f my-pig-file.pig -p INPUTFILEMASK="/logs/file{01,02,06}.log" -p OTHERPARAM=6

i hope this saves someone some pain and agony.

answered 2011-12-15T22:34:15.480

So since this works:

temp = LOAD '/user/training/test/{20100810,20100811,20100812}' USING SomeLoader()

but this does not work:

temp = LOAD '/user/training/test/{20100810..20100812}' USING SomeLoader()

But if you want a date range that spans, say, 300 days, passing a full list to LOAD is not elegant, to say the least. I came up with this, and it works.

Say you want to load data from 2012-10-08 to today, 2013-02-14. What you can do is:

temp = LOAD '/user/training/test/{201210*,201211*,201212*,2013*}' USING SomeLoader()

then do a filter after that

filtered = FILTER temp BY (the_date>='2012-10-08')
answered 2013-02-14T22:21:34.067

Thanks to Dave Campbell. Some of the answers above are wrong even though they got several votes.

Below are my test results:

  • Works

    • pig -f test.pig -param input="/test_{20120713,20120714}.txt"
      • There cannot be spaces before or after the ',' in the expression
    • pig -f test.pig -param input="/test_201207*.txt"
    • pig -f test.pig -param input="/test_2012071?.txt"
    • pig -f test.pig -param input="/test_20120713.txt,/test_20120714.txt"
    • pig -f test.pig -param input=/test_20120713.txt,/test_20120714.txt
      • There cannot be spaces before or after the ',' in the expression
  • Does not work

    • pig -f test.pig -param input="/test_{20120713..20120714}.txt"
    • pig -f test.pig -param input=/test_{20120713,20120714}.txt
    • pig -f test.pig -param input=/test_{20120713..20120714}.txt
answered 2012-07-23T01:42:37.053

I found that this problem is caused by the Linux shell. The shell expands

 {20100810..20100812}

to

  20100810 20100811 20100812

so you actually run the command

bin/hadoop fs -ls 20100810 20100811 20100812

The HDFS API, however, will not expand the expression for you.
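A quick way to see this expansion in action (bash; the `..` range operator is a bash brace-expansion feature, not something HDFS understands):

```shell
# The shell rewrites the braces before hadoop ever sees them:
echo /user/training/test/{20100810..20100812}
# prints: /user/training/test/20100810 /user/training/test/20100811 /user/training/test/20100812

# Quoting suppresses the expansion, so the literal pattern reaches the command:
echo '/user/training/test/{20100810..20100812}'
# prints: /user/training/test/{20100810..20100812}
```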

answered 2010-09-15T10:12:44.460
temp = LOAD '/user/training/test/2010081*/*' USING SomeLoader() AS (...);
-- loads 20100810~20100819 data
temp = LOAD '/user/training/test/2010081{0,1,2}/*' USING SomeLoader() AS (...);
-- loads 20100810~20100812 data

If the variable is in the middle of the file path, concatenate the subfolder name or use '*' for all files.

answered 2011-07-24T13:35:48.050

Do I need to make use of a higher language like Python to capture all date stamps in the range and pass them to LOAD as a comma separated list?

Probably you don't - this can be done using a custom Load UDF, or try rethinking your directory structure (this will work well if your ranges are mostly static).

Additionally: Pig accepts parameters; maybe that would help you (maybe you could write a function that loads data for one day and unions it into the resulting set, but I don't know if that's possible).

Edit: probably writing a simple Python or Bash script that generates the list of dates (folders) is the easiest solution; you then just have to pass it to Pig, and this should work fine.

answered 2010-08-18T19:57:13.110

Building on Romain's answer: if you want to just parameterize the date, you can let the shell do the expansion like this:

pig -param input="$(echo {20100810..20100812} | tr ' ' ,)" -f script.pig

pig:

temp = LOAD '/user/training/test/{$input}' USING SomeLoader() AS (...);

Please note the quotes.

answered 2016-03-02T14:06:02.717

Pig supports glob status on HDFS,

so I think Pig can handle the pattern /user/training/test/{20100810,20100811,20100812}.

Could you paste the error logs?

answered 2010-08-20T06:14:10.543

Here's a script I'm using to generate a list of dates and then pass that list to the Pig script's params. A bit tricky, but it works for me.

For example:

DT=20180101          # start date (YYYYMMDD)
DAYS=30              # how many days past DT to include (was undefined in the original)
DT_LIST=''
for ((i=0; i<=DAYS; i++))
do
    d=$(date +%Y%m%d -d "${DT} +${i} days")
    DT_LIST=${DT_LIST}${d}','
done

# strip the trailing comma
DT_LIST=${DT_LIST%,}

# double-quote the value so the shell passes the braces through literally
pig -p input_data="xxx/yyy/{${DT_LIST}}" script.pig

answered 2020-06-27T00:34:52.963