
I am running an Apache Spark cluster (2.2.0) in standalone mode. So far I have been using HDFS to store the Parquet files. I access the data through the Hive Metastore Service of Apache Hive 1.2, using the Thriftserver and Spark over JDBC.

Now I want to use S3 object storage instead of HDFS, so I added the following configuration to my hive-site.xml:

<property>
  <name>fs.s3a.access.key</name>
  <value>access_key</value>
  <description>Profitbricks Access Key</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>secret_key</value>
  <description>Profitbricks Secret Key</description>
</property>
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3-de-central.profitbricks.com</value>
  <description>ProfitBricks S3 Object Storage Endpoint</description>
</property>
<property>
  <name>fs.s3a.endpoint.http.port</name>
  <value>80</value>
  <description>ProfitBricks S3 Object Storage Endpoint HTTP Port</description>
</property>
<property>
  <name>fs.s3a.endpoint.https.port</name>
  <value>443</value>
  <description>ProfitBricks S3 Object Storage Endpoint HTTPS Port</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>s3a://dev.spark.my_bucket/parquet/</value>
  <description>Profitbricks S3 Object Storage Hive Warehouse Location</description>
</property>

The Hive metastore lives in a MySQL 5.7 database. I added the following jar files to the Hive lib folder:

  • aws-java-sdk-1.7.4.jar
  • hadoop-aws-2.7.3.jar

After dropping the old Hive metastore schema from MySQL, I start the metastore service with hive --service metastore & and get the following error:
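For reference, the reinitialization and startup steps above look roughly like this (a sketch, not the exact commands I ran; $HIVE_HOME is assumed to point at the Hive 1.2 installation, and schematool reads the MySQL JDBC settings from hive-site.xml):

```shell
# Recreate the metastore schema in MySQL (the old schema was dropped first).
$HIVE_HOME/bin/schematool -dbType mysql -initSchema

# Start the metastore service in the background and keep its log.
hive --service metastore > /tmp/metastore.log 2>&1 &
```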

java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper
        at com.amazonaws.util.json.Jackson.<clinit>(Jackson.java:27)
        at com.amazonaws.internal.config.InternalConfig.loadfrom(InternalConfig.java:182)
        at com.amazonaws.internal.config.InternalConfig.load(InternalConfig.java:199)
        at com.amazonaws.internal.config.InternalConfig$Factory.<clinit>(InternalConfig.java:232)
        at com.amazonaws.ServiceNameFactory.getServiceName(ServiceNameFactory.java:34)
        at com.amazonaws.AmazonWebServiceClient.computeServiceName(AmazonWebServiceClient.java:703)
        at com.amazonaws.AmazonWebServiceClient.getServiceNameIntern(AmazonWebServiceClient.java:676)
        at com.amazonaws.AmazonWebServiceClient.computeSignerByURI(AmazonWebServiceClient.java:278)
        at com.amazonaws.AmazonWebServiceClient.setEndpoint(AmazonWebServiceClient.java:160)
        at com.amazonaws.services.s3.AmazonS3Client.setEndpoint(AmazonS3Client.java:475)
        at com.amazonaws.services.s3.AmazonS3Client.init(AmazonS3Client.java:447)
        at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:391)
        at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:371)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:235)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
        at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:104)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
        at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
        at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:601)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5757)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5990)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5915)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.ObjectMapper
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

The missing class belongs to the Jackson library, so I copied the jackson-*.jar files from the spark-2.2.0-bin-hadoop2.7/jars/ folder:

  • jackson-annotations-2.6.5.jar
  • jackson-core-2.6.5.jar
  • jackson-core-asl-1.9.13.jar
  • jackson-databind-2.6.5.jar
  • jackson-jaxrs-1.9.13.jar
  • jackson-mapper-asl-1.9.13.jar
  • jackson-module-paranamer-2.6.5.jar
  • jackson-module-scala_2.11-2.6.5.jar
  • jackson-xc-1.9.13.jar
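The copy step above amounts to roughly the following (a sketch; $SPARK_HOME and $HIVE_HOME are assumed to point at the respective installation directories):

```shell
# Copy the Jackson jars shipped with Spark into Hive's classpath.
cp $SPARK_HOME/jars/jackson-*.jar $HIVE_HOME/lib/

# Verify what ended up there.
ls $HIVE_HOME/lib/jackson-*.jar
```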

After that, however, I got the following error:

2018-01-05 17:51:00,819 ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:main(5920)) - Metastore Thrift Server threw an exception...
java.lang.NumberFormatException: For input string: "100M"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Long.parseLong(Long.java:589)
        at java.lang.Long.parseLong(Long.java:631)
        at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1319)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:248)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
        at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:104)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
        at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
        at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:601)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5757)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5990)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5915)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)

I believe this error has something to do with a jar version incompatibility, but I have not been able to find the correct versions.
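One observation: "100M" looks like a size value with a unit suffix. If it comes from a newer core-default.xml (Hadoop 2.8+ ships fs.s3a.multipart.size as "100M" and parses it with Configuration.getLongBytes()), then the older S3AFileSystem from hadoop-aws 2.7.x, which still reads it with Configuration.getLong(), would fail with exactly this NumberFormatException. The equivalent plain byte count is:

```shell
# "100M" means 100 * 1024 * 1024 bytes; the older Configuration.getLong()
# only accepts the plain number, not the unit suffix.
echo $((100 * 1024 * 1024))   # prints 104857600
```

So one thing worth trying (I have not verified this) might be to set fs.s3a.multipart.size explicitly to the plain number 104857600 in hive-site.xml, or to make the hadoop-aws jar version match the Hadoop version actually on the metastore classpath.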

Can anyone help me out here?

