I ran:

bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar -inputreader "StreamXmlRecordReader, begin=<metaData>,end=</metaData>" -input /user/root/xmlpytext/metaData.xml -mapper /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py -file /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py -reducer /Users/amrita/desktop/hadoop/pythonpractise/reducerxml.py  -file /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py -output /user/root/xmlpytext-output1 -numReduceTasks 1

But it shows:

13/03/22 09:38:48 INFO mapred.FileInputFormat: Total input paths to process : 1
13/03/22 09:38:49 INFO streaming.StreamJob: getLocalDirs(): [/Users/amrita/desktop/hadoop/temp/mapred/local]
13/03/22 09:38:49 INFO streaming.StreamJob: Running job: job_201303220919_0001
13/03/22 09:38:49 INFO streaming.StreamJob: To kill this job, run:
13/03/22 09:38:49 INFO streaming.StreamJob: /private/var/root/hadoop-1.0.4/libexec/../bin/hadoop job  -Dmapred.job.tracker=-kill job_201303220919_0001
13/03/22 09:38:49 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201303220919_0001
13/03/22 09:38:50 INFO streaming.StreamJob:  map 0%  reduce 0%
13/03/22 09:39:26 INFO streaming.StreamJob:  map 100%  reduce 100%
13/03/22 09:39:26 INFO streaming.StreamJob: To kill this job, run:
13/03/22 09:39:26 INFO streaming.StreamJob: /private/var/root/hadoop-1.0.4/libexec/../bin/hadoop job  -Dmapred.job.tracker=-kill job_201303220919_0001
13/03/22 09:39:26 INFO streaming.StreamJob: Tracking URL: http:///jobdetails.jsp?jobid=job_201303220919_0001
13/03/22 09:39:26 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201303220919_0001_m_000000
13/03/22 09:39:26 INFO streaming.StreamJob: killJob...
Streaming Command Failed!

When I go to jobdetails.jsp, it shows:

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.streaming.StreamInputFormat.getRecordReader(StreamInputFormat.java:77)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:197)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.streaming.StreamInputFormat.getRecordReader(StreamInputFormat.java:74)
    ... 8 more
Caused by: java.io.IOException: JobConf: missing required property: stream.recordreader.begin
    at org.apache.hadoop.streaming.StreamXmlRecordReader.checkJobGet(StreamXmlRecordReader.java:278)
    at org.apache.hadoop.streaming.StreamXmlRecordReader.<init>(StreamXmlRecordReader.java:52)
    ... 13 more

My mapper:

#!/usr/bin/env python
import sys
import cStringIO
import xml.etree.ElementTree as xml
def cleanResult(element):
    result = None
    if element is not None:
        result = element.text
        result = result.strip()
    else:
        result = ""
    return result
def process(val):
    root = xml.fromstring(val)
    sceneID = cleanResult(root.find('sceneID'))
    cc = cleanResult(root.find('cloudCover'))
    returnval = ("%s,%s") % (sceneID,cc)
    return returnval.strip()
if __name__ == '__main__':
    buff = None
    intext = False
    for line in sys.stdin:
        line = line.strip()
        if line.find("<metaData>") != -1:
            intext = True
            buff = cStringIO.StringIO()
            buff.write(line)
        elif line.find("</metaData>") != -1:
            intext = False
            buff.write(line)
            val = buff.getvalue()
            buff.close()
            buff = None
            print process(val)
        else:
            if intext:
                buff.write(line)
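The extraction logic in `process()` can be sanity-checked locally before submitting the job. The sketch below reimplements it in a self-contained form (runnable under Python 3, with an extra guard for missing element text); the sample record and its sceneID value are invented for illustration.

```python
# Minimal local check of the mapper's process() logic.
# The sample record below is made up; real metaData records will
# contain more fields, but only sceneID and cloudCover matter here.
import xml.etree.ElementTree as xml

def cleanResult(element):
    # Same idea as the mapper's helper, also guarding against empty text.
    if element is not None and element.text is not None:
        return element.text.strip()
    return ""

def process(val):
    root = xml.fromstring(val)
    sceneID = cleanResult(root.find('sceneID'))
    cc = cleanResult(root.find('cloudCover'))
    return ("%s,%s" % (sceneID, cc)).strip()

sample = ("<metaData><sceneID>LE70010092013076EDC00</sceneID>"
          "<cloudCover>23</cloudCover></metaData>")
print(process(sample))  # LE70010092013076EDC00,23
```

If this prints the expected `sceneID,cloudCover` pair, the remaining failure is in how the job is configured, not in the parsing itself.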

My reducer:

#!/usr/bin/env python
import sys
if __name__ == '__main__':
    for line in sys.stdin:
        print line.strip()

Can anyone tell me why this happens? I am using Hadoop-1.0.4 on a Mac. Am I doing something wrong? Do I need to change something? Please help.


2 Answers


Remove the space between the comma and `begin` in `, begin=<`.

The correct format is:

hadoop jar hadoop-streaming.jar -inputreader
"StreamXmlRecordReader,begin=BEGIN_STRING,end=END_STRING" ..... (rest of the command)

This is due to the following code in org.apache.hadoop.streaming.StreamJob, which parses everything after the record reader class name:

for (int i = 1; i < args.length; i++) {
    String[] nv = args[i].split("=", 2);
    String k = "stream.recordreader." + nv[0];
    String v = (nv.length > 1) ? nv[1] : "";
    jobConf_.set(k, v);
}
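The effect of the stray space can be reproduced outside Hadoop. This Python sketch mimics the parsing above (the comma split happens a step earlier in StreamJob; this is an illustration, not Hadoop's actual code). With the space, the property registered is `stream.recordreader. begin`, so `stream.recordreader.begin` really is missing:

```python
# Sketch of how the -inputreader value becomes job properties.
# Mirrors the Java loop above; the comma split happens earlier in StreamJob.
def parse_inputreader(spec):
    pieces = spec.split(",")  # pieces[0] is the record reader class name
    props = {}
    for piece in pieces[1:]:
        nv = piece.split("=", 1)            # Java's split("=", 2)
        k = "stream.recordreader." + nv[0]  # any leading space survives here
        v = nv[1] if len(nv) > 1 else ""
        props[k] = v
    return props

bad = parse_inputreader("StreamXmlRecordReader, begin=<metaData>,end=</metaData>")
good = parse_inputreader("StreamXmlRecordReader,begin=<metaData>,end=</metaData>")
print(sorted(bad))   # ['stream.recordreader. begin', 'stream.recordreader.end']
print(sorted(good))  # ['stream.recordreader.begin', 'stream.recordreader.end']
```

Because the lookup for `stream.recordreader.begin` finds nothing, StreamXmlRecordReader throws the "missing required property" IOException seen in the job details.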
Answered 2014-03-27T22:12:32.350

Try setting the missing configuration variables explicitly as follows (add the stream.recordreader. prefix, make sure they are the first arguments after the jar, and wrap them in double quotes):

bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar \
  "-Dstream.recordreader.begin=<metaData>" \
  "-Dstream.recordreader.end=</metaData>" \
  -inputreader "StreamXmlRecordReader" \
  -input /user/root/xmlpytext/metaData.xml \
  -mapper /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py \
  -file /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py \
  -reducer /Users/amrita/desktop/hadoop/pythonpractise/reducerxml.py \
  -file /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py \
  -output /user/root/xmlpytext-output1 \
  -numReduceTasks 1
Answered 2013-03-22T10:42:16.280