I am using Cloudera CDH 5.0.2 and want to import Flume data into the Hive metastore/warehouse on HDFS, but it is not working.

I used the following JSON SerDe: http://files.cloudera.com/samples/hive-serdes-1.0-SNAPSHOT.jar

I am creating the table from the Hive editor with this statement:

CREATE EXTERNAL TABLE tweets (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN,
  retweeted_status STRUCT<
    text:STRING,
    user:STRUCT<screen_name:STRING,name:STRING>,
    retweet_count:INT>,
  entities STRUCT<
    urls:ARRAY<STRUCT<expanded_url:STRING>>,
    user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,
    hashtags:ARRAY<STRUCT<text:STRING>>>,
  text STRING,
  user STRUCT<
    screen_name:STRING,
    name:STRING,
    friends_count:INT,
    followers_count:INT,
    statuses_count:INT,
    verified:BOOLEAN,
    utc_offset:INT,
    time_zone:STRING>,
  in_reply_to_screen_name STRING
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets';
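
For completeness, this is roughly how I make the SerDe class visible to the session before running the DDL; the local JAR path below is an assumption from my setup, not something the logs confirm:

-- register the JSON SerDe for this Hive session (path is illustrative)
ADD JAR /usr/lib/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;

The CREATE TABLE itself completes (the log below ends with OK), so I assume the class is being found.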

And when I execute the statement from the Hive editor, I get the following log:

14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO parse.ParseDriver: Parsing command: CREATE EXTERNAL TABLE tweets3 (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN,
  retweeted_status STRUCT<
    text:STRING,
    user:STRUCT<screen_name:STRING,name:STRING>,
    retweet_count:INT>,
  entities STRUCT<
    urls:ARRAY<STRUCT<expanded_url:STRING>>,
    user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,
    hashtags:ARRAY<STRUCT<text:STRING>>>,
  text STRING,
  user STRUCT<
    screen_name:STRING,
    name:STRING,
    friends_count:INT,
    followers_count:INT,
    statuses_count:INT,
    verified:BOOLEAN,
    utc_offset:INT,
    time_zone:STRING>,
  in_reply_to_screen_name STRING
) 
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets'
14/06/29 01:30:54 INFO parse.ParseDriver: Parse Completed
14/06/29 01:30:54 INFO log.PerfLogger: </PERFLOG method=parse start=1404030654781 end=1404030654788 duration=7 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
14/06/29 01:30:54 INFO parse.SemanticAnalyzer: Creating table tweets3 position=22
14/06/29 01:30:54 INFO ql.Driver: Semantic Analysis Completed
14/06/29 01:30:54 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1404030654788 end=1404030654791 duration=3 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
14/06/29 01:30:54 INFO log.PerfLogger: </PERFLOG method=compile start=1404030654781 end=1404030654791 duration=10 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO ql.Driver: Creating lock manager of type org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
14/06/29 01:30:54 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost.localdomain:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager$DummyWatcher@66b05d6
14/06/29 01:30:54 WARN ZooKeeperHiveLockManager: Unexpected ZK exception when creating parent node /hive_zookeeper_namespace_hive
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hive_zookeeper_namespace_hive
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
    at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.setContext(ZooKeeperHiveLockManager.java:121)
    at org.apache.hadoop.hive.ql.Driver.createLockManager(Driver.java:174)
    at org.apache.hadoop.hive.ql.Driver.checkConcurrency(Driver.java:154)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:926)
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:64)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:177)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO log.PerfLogger: </PERFLOG method=acquireReadWriteLocks start=1404030654913 end=1404030654914 duration=1 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO ql.Driver: Starting command: CREATE EXTERNAL TABLE tweets3 (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN,
  retweeted_status STRUCT<
    text:STRING,
    user:STRUCT<screen_name:STRING,name:STRING>,
    retweet_count:INT>,
  entities STRUCT<
    urls:ARRAY<STRUCT<expanded_url:STRING>>,
    user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,
    hashtags:ARRAY<STRUCT<text:STRING>>>,
  text STRING,
  user STRUCT<
    screen_name:STRING,
    name:STRING,
    friends_count:INT,
    followers_count:INT,
    statuses_count:INT,
    verified:BOOLEAN,
    utc_offset:INT,
    time_zone:STRING>,
  in_reply_to_screen_name STRING
    ) 
    PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets'
14/06/29 01:30:54 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1404030654810 end=1404030654914 duration=104 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO log.PerfLogger: <PERFLOG method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:54 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost.localdomain:9083
14/06/29 01:30:54 WARN security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user HDFS
org.apache.hadoop.util.Shell$ExitCodeException: id: HDFS: No such user
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
    at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
    at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1409)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:312)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1161)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2407)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2418)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:598)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3697)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:253)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1485)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1263)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:926)
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:64)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:177)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
14/06/29 01:30:54 WARN security.UserGroupInformation: No groups available for user HDFS
14/06/29 01:30:54 INFO hive.metastore: Connected to metastore.
14/06/29 01:30:54 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=9e8711ee-2e1d-474d-a2cc-082bd92b9ce7]: getOperationStatus()
14/06/29 01:30:55 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=9e8711ee-2e1d-474d-a2cc-082bd92b9ce7]: getOperationStatus()
14/06/29 01:30:55 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=9e8711ee-2e1d-474d-a2cc-082bd92b9ce7]: getOperationStatus()
14/06/29 01:30:56 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=9e8711ee-2e1d-474d-a2cc-082bd92b9ce7]: getOperationStatus()
14/06/29 01:30:56 INFO log.PerfLogger: </PERFLOG method=task.DDL.Stage-0 start=1404030654914 end=1404030656188 duration=1274 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:56 INFO log.PerfLogger: </PERFLOG method=runTasks start=1404030654914 end=1404030656188 duration=1274 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:56 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1404030654914 end=1404030656188 duration=1274 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:56 INFO ql.Driver: OK
14/06/29 01:30:56 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:56 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1404030656189 end=1404030656189 duration=0 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:56 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1404030654810 end=1404030656189 duration=1379 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 01:30:56 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=9e8711ee-2e1d-474d-a2cc-082bd92b9ce7]: getOperationStatus()

When I browse the warehouse in HDFS, I do not see any files; it looks like no data has been imported into the warehouse.

I am using PostgreSQL for the metastore.
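
In case it helps with diagnosis, these two standard Hive statements show where the table actually points and whether any partitions have been registered (a partitioned table returns no rows until partitions exist):

-- shows the table's LOCATION, SerDe, and other metadata
DESCRIBE FORMATTED tweets;

-- lists registered partitions for the table
SHOW PARTITIONS tweets;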

And when I try to import the data with this query:

LOAD DATA INPATH '/user/flume/tweets/FlumeData.1404026375345' INTO TABLE `default.tweets` PARTITION (datehour='1404026375345')

I get the following error message:

14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO parse.ParseDriver: Parsing command: LOAD DATA INPATH '/user/flume/tweets/FlumeData.1404026375345' INTO TABLE `default.tweets` PARTITION (datehour='1404026375345')
14/06/29 00:31:09 INFO parse.ParseDriver: Parse Completed
14/06/29 00:31:09 INFO log.PerfLogger: </PERFLOG method=parse start=1404027069010 end=1404027069030 duration=20 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO ql.Driver: Semantic Analysis Completed
14/06/29 00:31:09 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1404027069030 end=1404027069464 duration=434 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
14/06/29 00:31:09 INFO log.PerfLogger: </PERFLOG method=compile start=1404027069010 end=1404027069464 duration=454 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO ql.Driver: Creating lock manager of type org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
14/06/29 00:31:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost.localdomain:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager$DummyWatcher@78513cb2
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO log.PerfLogger: </PERFLOG method=acquireReadWriteLocks start=1404027069488 end=1404027069503 duration=15 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO ql.Driver: Starting command: LOAD DATA INPATH '/user/flume/tweets/FlumeData.1404026375345' INTO TABLE `default.tweets` PARTITION (datehour='1404026375345')
14/06/29 00:31:09 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1404027069472 end=1404027069504 duration=32 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:09 INFO exec.Task: Loading data to table default.tweets partition (datehour=1404026375345) from hdfs://localhost.localdomain:8020/user/flume/tweets/FlumeData.1404026375345
14/06/29 00:31:09 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost.localdomain:9083
14/06/29 00:31:09 INFO hive.metastore: Connected to metastore.
14/06/29 00:31:09 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=e0530ca4-dadf-4f95-8f8c-ae91b1027cc6]: getOperationStatus()
14/06/29 00:31:09 INFO exec.MoveTask: Partition is: {datehour=1404026375345}
14/06/29 00:31:10 ERROR exec.Task: Failed with exception copyFiles: error while checking/creating destination directory!!!
org.apache.hadoop.hive.ql.metadata.HiveException: copyFiles: error while checking/creating destination directory!!!
    at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:2235)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1227)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:407)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1485)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1263)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:926)
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:64)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:177)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hive, access=WRITE, inode="/user/flume/tweets":flume:flume:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5461)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5443)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5417)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3571)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3541)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3515)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:739)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2549)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2518)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:827)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:823)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:823)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:816)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
    at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:2229)
    ... 17 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hive, access=WRITE, inode="/user/flume/tweets":flume:flume:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5461)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5443)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5417)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3571)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3541)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3515)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:739)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)

    at org.apache.hadoop.ipc.Client.call(Client.java:1409)
    at org.apache.hadoop.ipc.Client.call(Client.java:1362)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:500)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2547)
    ... 25 more
14/06/29 00:31:10 INFO log.PerfLogger: </PERFLOG method=task.MOVE.Stage-0 start=1404027069504 end=1404027070056 duration=552 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:10 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
14/06/29 00:31:10 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1404027069503 end=1404027070058 duration=555 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:10 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:10 INFO ZooKeeperHiveLockManager:  about to release lock for default/tweets
14/06/29 00:31:10 INFO ZooKeeperHiveLockManager:  about to release lock for default
14/06/29 00:31:10 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1404027070058 end=1404027070067 duration=9 from=org.apache.hadoop.hive.ql.Driver>
14/06/29 00:31:10 ERROR operation.Operation: Error:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:146)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:64)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:177)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
14/06/29 00:31:10 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=e0530ca4-dadf-4f95-8f8c-ae91b1027cc6]: getOperationStatus()
14/06/29 00:31:10 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=e0530ca4-dadf-4f95-8f8c-ae91b1027cc6]: getOperationStatus()

Flume itself is working fine, and I can see all the tweets and data in HDFS under flume/tweets. So why doesn't Hive copy the data into the metastore warehouse on HDFS?
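
Based on the AccessControlException above (user=hive needs WRITE on /user/flume/tweets, which is owned by flume:flume with mode drwxr-xr-x), my guess is that the MoveTask cannot create the partition directory there. This is a sketch of the permission change I am considering, run from a shell as the HDFS superuser; the group assignment is an assumption about my setup:

# allow the 'hive' user to write into the Flume output directory
# (the error shows Hive acting as user=hive)
sudo -u hdfs hdfs dfs -chown -R flume:hive /user/flume/tweets
sudo -u hdfs hdfs dfs -chmod -R 775 /user/flume/tweets

If permissions are indeed the problem, this should let the MoveTask create the datehour=... directories under /user/flume/tweets.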
