
I have set up a single-node Hadoop cluster and configured it to work with Apache Hive. When I import a MySQL table (using Sqoop) with the following command:

sqoop import --connect jdbc:mysql://localhost/dwhadoop --table orders --username root --password 123456 --hive-import

it runs successfully, apart from a few exceptions thrown along the way. But when I then run

hive> show tables;

the orders table is not listed.

If I run the import command again, it gives me an error saying the orders dir already exists.

Please help me find a solution.

EDIT:

I did not create the orders table before the import. Do I have to create the table in Hive before running the import? When I import another table, Customers, I get the following exception:

[root@localhost root-647263876]# sqoop import --connect jdbc:mysql://localhost/dwhadoop  --table Customers --username root --password 123456 --hive-import
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.

12/08/05 07:30:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/08/05 07:30:25 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
12/08/05 07:30:25 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
12/08/05 07:30:26 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/08/05 07:30:26 INFO tool.CodeGenTool: Beginning code generation
12/08/05 07:30:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customers` AS t LIMIT 1
12/08/05 07:30:26 INFO orm.CompilationManager: HADOOP_HOME is /home/enigma/hadoop/libexec/..
Note: /tmp/sqoop-root/compile/e48d4803894ee63079f7194792d624ed/Customers.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/08/05 07:30:28 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/e48d4803894ee63079f7194792d624ed/Customers.jar
12/08/05 07:30:28 WARN manager.MySQLManager: It looks like you are importing from mysql.
12/08/05 07:30:28 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
12/08/05 07:30:28 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
12/08/05 07:30:28 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
12/08/05 07:30:28 INFO mapreduce.ImportJobBase: Beginning import of Customers
12/08/05 07:30:28 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/08/05 07:30:29 INFO mapred.JobClient: Running job: job_local_0001
12/08/05 07:30:29 INFO util.ProcessTree: setsid exited with exit code 0
12/08/05 07:30:29 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@11f41fd
12/08/05 07:30:29 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
12/08/05 07:30:30 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/08/05 07:30:30 INFO mapred.LocalJobRunner: 
12/08/05 07:30:30 INFO mapred.Task: Task attempt_local_0001_m_000000_0 is allowed to commit now
12/08/05 07:30:30 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000000_0' to Customers
12/08/05 07:30:30 INFO mapred.JobClient:  map 0% reduce 0%
12/08/05 07:30:32 INFO mapred.LocalJobRunner: 
12/08/05 07:30:32 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/08/05 07:30:33 INFO mapred.JobClient:  map 100% reduce 0%
12/08/05 07:30:33 INFO mapred.JobClient: Job complete: job_local_0001
12/08/05 07:30:33 INFO mapred.JobClient: Counters: 13
12/08/05 07:30:33 INFO mapred.JobClient:   File Output Format Counters 
12/08/05 07:30:33 INFO mapred.JobClient:     Bytes Written=45
12/08/05 07:30:33 INFO mapred.JobClient:   File Input Format Counters 
12/08/05 07:30:33 INFO mapred.JobClient:     Bytes Read=0
12/08/05 07:30:33 INFO mapred.JobClient:   FileSystemCounters
12/08/05 07:30:33 INFO mapred.JobClient:     FILE_BYTES_READ=3205
12/08/05 07:30:33 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=52579
12/08/05 07:30:33 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=45
12/08/05 07:30:33 INFO mapred.JobClient:   Map-Reduce Framework
12/08/05 07:30:33 INFO mapred.JobClient:     Map input records=3
12/08/05 07:30:33 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
12/08/05 07:30:33 INFO mapred.JobClient:     Spilled Records=0
12/08/05 07:30:33 INFO mapred.JobClient:     Total committed heap usage (bytes)=21643264
12/08/05 07:30:33 INFO mapred.JobClient:     CPU time spent (ms)=0
12/08/05 07:30:33 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
12/08/05 07:30:33 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/08/05 07:30:33 INFO mapred.JobClient:     Map output records=3
12/08/05 07:30:33 INFO mapreduce.ImportJobBase: Transferred 45 bytes in 5.359 seconds (8.3971 bytes/sec)
12/08/05 07:30:33 INFO mapreduce.ImportJobBase: Retrieved 3 records.
12/08/05 07:30:33 INFO hive.HiveImport: Loading uploaded data into Hive
12/08/05 07:30:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customers` AS t LIMIT 1
12/08/05 07:30:33 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Cannot run program "hive": error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    at java.lang.Runtime.exec(Runtime.java:615)
    at java.lang.Runtime.exec(Runtime.java:526)
    at org.apache.sqoop.util.Executor.exec(Executor.java:76)
    at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:344)
    at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:297)
    at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:239)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:393)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:454)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Caused by: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
    at java.lang.ProcessImpl.start(ProcessImpl.java:130)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1021)
    ... 15 more

But when I run the import again, I get:

Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.

12/08/05 07:33:48 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/08/05 07:33:48 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
12/08/05 07:33:48 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
12/08/05 07:33:48 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/08/05 07:33:48 INFO tool.CodeGenTool: Beginning code generation
12/08/05 07:33:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customers` AS t LIMIT 1
12/08/05 07:33:49 INFO orm.CompilationManager: HADOOP_HOME is /home/enigma/hadoop/libexec/..
Note: /tmp/sqoop-root/compile/9855cf7de9cf54c59095fb4bfd65a369/Customers.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/08/05 07:33:50 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-root/compile/9855cf7de9cf54c59095fb4bfd65a369/Customers.java to /app/hadoop/tmp/mapred/staging/root-647263876/./Customers.java
java.io.IOException: Destination '/app/hadoop/tmp/mapred/staging/root-647263876/./Customers.java' already exists
    at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
    at org.apache.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
    at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:368)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:454)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
12/08/05 07:33:50 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/9855cf7de9cf54c59095fb4bfd65a369/Customers.jar
12/08/05 07:33:51 WARN manager.MySQLManager: It looks like you are importing from mysql.
12/08/05 07:33:51 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
12/08/05 07:33:51 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
12/08/05 07:33:51 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
12/08/05 07:33:51 INFO mapreduce.ImportJobBase: Beginning import of Customers
12/08/05 07:33:51 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/08/05 07:33:52 INFO mapred.JobClient: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/root-195281052/.staging/job_local_0001
12/08/05 07:33:52 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory Customers already exists
12/08/05 07:33:52 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory Customers already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:889)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:119)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:179)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:413)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:97)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:381)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:454)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)

2 Answers


The main thing to note is that your original import fails because Sqoop tries to invoke hive, but it is not on your PATH. Fix that problem before going any further.
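A minimal sketch of the fix, assuming Hive is installed under /home/enigma/hive (that path is an assumption — adjust it to wherever Hive actually lives on your machine):

export HIVE_HOME=/home/enigma/hive   # assumption: substitute your real install path
export PATH=$PATH:$HIVE_HOME/bin
which hive                           # should now print the path to the hive executable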

Next, you need to find the Customers directory, delete it, and retry. Note that it is on the local filesystem, not HDFS, since your job ran with the local job runner.
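For example (your log shows job_local_0001, i.e. LocalJobRunner, so the directory should be under the working directory on local disk; the HDFS variant is included in case your setup differs):

rm -r Customers            # local filesystem
hadoop fs -rmr Customers   # or, if the output actually landed on HDFS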

From what I have seen, errors of the form "Customers.java already exists" are not fatal.
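If you want to clear that stale staging copy anyway, it can be removed by hand using the path from your error message:

rm /app/hadoop/tmp/mapred/staging/root-647263876/Customers.java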

answered 2012-08-06T01:36:04.007
  1. If you want Sqoop to create the table and load the data into it in Hive, you have to include --create-hive-table in your command (see the sketch after this list).
  2. When you import data into Hive, Sqoop first creates a temporary HDFS directory as a staging area before finally loading the data into the table. Make sure that directory does not already exist beforehand.
  3. It also looks as if Sqoop's working directory does not have sufficient permissions to modify the file system. Make sure the user running the job is the owner of the relevant files.
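Putting these together, a sketch of what the command might look like (connection details taken from the question; -P is used instead of --password, as the log itself suggests; by default Sqoop stages the import under /user/<username>/<table> on HDFS, so that is the directory to clear first):

hadoop fs -rmr /user/root/Customers   # remove any leftover staging/output directory

sqoop import \
  --connect jdbc:mysql://localhost/dwhadoop \
  --table Customers \
  --username root -P \
  --hive-import \
  --create-hive-table   # have Sqoop create the Hive table itself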
answered 2016-11-25T12:28:55.997