
Running a Hive query with a filter on the virtual column INPUT__FILE__NAME fails with the following exception:

hive> select count(*) from netflow where INPUT__FILE__NAME='vzb.1351794600.0';
FAILED: SemanticException java.lang.RuntimeException: cannot find field input__file__name from [org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@1d264bf5, org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@3d44d0c6, 
.
.
.
org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@7e6bc5aa]

This error is different from the one raised when the column name is simply wrong:

hive> select count(*) from netflow where INPUT__FILE__NAM='vzb.1351794600.0';
FAILED: SemanticException [Error 10004]: Line 1:35 Invalid table alias or column reference 'INPUT__FILE__NAM': (possible column names are: first, last, ....)

However, using this virtual column in the SELECT clause works fine:

hive> select INPUT__FILE__NAME from netflow group by INPUT__FILE__NAME;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 4
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201306041359_0006, Tracking URL = http://192.168.0.224:50030/jobdetails.jsp?jobid=job_201306041359_0006
Kill Command = /opt/hadoop/bin/../bin/hadoop job  -kill job_201306041359_0006
Hadoop job information for Stage-1: number of mappers: 12; number of reducers: 4
2013-06-14 18:20:10,265 Stage-1 map = 0%,  reduce = 0%
2013-06-14 18:20:33,363 Stage-1 map = 8%,  reduce = 0%
.
.
.
2013-06-14 18:21:15,554 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201306041359_0006
MapReduce Jobs Launched: 
Job 0: Map: 12  Reduce: 4   HDFS Read: 3107826046 HDFS Write: 55 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
hdfs://192.168.0.224:9000/data/jk/vzb/vzb.1351794600.0
Time taken: 78.467 seconds
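
Incidentally, the SELECT output above shows that INPUT__FILE__NAME holds the full HDFS URI, not just the bare file name, so even if the filter compiled, the equality test against 'vzb.1351794600.0' would presumably never match. The shape I would want to try, if the filter can be kept away from the optimizer, is something like the following subquery (just a sketch of the idea, untested on my setup):

-- project the virtual column under an ordinary alias first, then filter
-- on the alias in the outer query, hoping the partition-condition
-- optimizer never sees INPUT__FILE__NAME directly
select count(*)
from (select INPUT__FILE__NAME as fname from netflow) t
where t.fname = 'hdfs://192.168.0.224:9000/data/jk/vzb/vzb.1351794600.0';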

I am trying to create an external Hive table over HDFS data that already exists, and the folder contains extra files that I want to ignore. Similar to what is asked and suggested in the Stack Overflow question on how to make Hive take only specific files as input when creating an external table: can I point the table at specific files within the directory? A sketch of my table definition follows below.
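
For reference, the table is an external table over the whole folder; the DDL is roughly the following (a simplified sketch — the delimiter is a placeholder, and the two columns shown stand in for my real definition, which has many more fields):

create external table netflow (
  first string,
  last string
  -- ... remaining columns elided
)
row format delimited fields terminated by ','
location 'hdfs://192.168.0.224:9000/data/jk/vzb/';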

Any help would be greatly appreciated. The complete stack trace I am getting is below:

2013-06-14 15:01:32,608 ERROR ql.Driver (SessionState.java:printError(401)) - FAILED: SemanticException java.lang.RuntimeException: cannot find field input__
org.apache.hadoop.hive.ql.parse.SemanticException: java.lang.RuntimeException: cannot find field input__file__name from [org.apache.hadoop.hive.serde2.object
        at org.apache.hadoop.hive.ql.optimizer.pcr.PcrOpProcFactory$FilterPCR.process(PcrOpProcFactory.java:122)
        at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:87)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:124)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:101)
        at org.apache.hadoop.hive.ql.optimizer.pcr.PartitionConditionRemover.transform(PartitionConditionRemover.java:86)
        at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:102)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8163)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
        at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:50)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:893)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: cannot find field input__file__name from [org.apache.hadoop.hive.ser
        at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:231)
        at org.apache.hadoop.hive.ql.optimizer.pcr.PcrOpProcFactory$FilterPCR.process(PcrOpProcFactory.java:112)
        ... 23 more
Caused by: java.lang.RuntimeException: cannot find field input__file__name from [org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyF
        at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:344)
        at org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector.getStructFieldRef(UnionStructObjectInspector.java:100)
        at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:57)
        at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:128)
        at org.apache.hadoop.hive.ql.optimizer.ppr.PartExprEvalUtils.prepareExpr(PartExprEvalUtils.java:100)
        at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.pruneBySequentialScan(PartitionPruner.java:328)
        at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:219)
        ... 24 more