
I created a table in Hive with the following command:

CREATE TABLE tweet_table(
    tweet STRING
)
ROW FORMAT
    DELIMITED
        FIELDS TERMINATED BY '\n'
        LINES TERMINATED BY '\n'

Then I load some data:

LOAD DATA LOCAL INPATH 'data.txt' INTO TABLE tweet_table

data.txt:

data1
data2
data3data4
data5

The command select * from tweet_table returns:

data1
data2
data3data4
data5

But select tweet from tweet_table gives me:

java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
    at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:230)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:338)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
    at java.beans.XMLDecoder.readObject(XMLDecoder.java:250)
    at org.apache.hadoop.hive.ql.exec.Utilities.deserializeMapRedWork(Utilities.java:542)
    at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:222)
    ... 7 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

It looks as if the data was stored in the right table but not in the tweet field. Why is that?
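For reference, since the table has only one column, I wonder whether the FIELDS TERMINATED BY clause is even needed. A sketch of the variant I have in mind (I have not confirmed whether it avoids the error):

CREATE TABLE tweet_table(
    tweet STRING
)
ROW FORMAT
    DELIMITED
        -- single column, so no field delimiter; assumption: only the
        -- line terminator matters here
        LINES TERMINATED BY '\n'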

