
Hello, I am getting this exception in the master log when I run HBase, and the HMaster does not stay running.

2012-05-20 11:54:38,206 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/23.21.190.123:2181
2012-05-20 11:54:38,236 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/23.21.190.123:2181, initiating session
2012-05-20 11:54:38,291 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/23.21.190.123:2181, sessionid = 0x1376a1960900000, negotiated timeout = 180000
2012-05-20 11:54:38,323 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=ip-10-28-213-145.ec2.internal:60000
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
2012-05-20 11:54:38,350 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-05-20 11:54:38,351 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-05-20 11:54:38,351 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2012-05-20 11:54:38,452 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=ip-10-28-213-145.ec2.internal:60000
2012-05-20 11:54:40,150 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 0 time(s).
2012-05-20 11:54:41,151 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 1 time(s).
2012-05-20 11:54:42,153 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 2 time(s).
2012-05-20 11:54:43,155 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 3 time(s).
2012-05-20 11:54:44,156 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 4 time(s).
2012-05-20 11:54:45,157 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 5 time(s).
2012-05-20 11:54:46,159 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 6 time(s).
2012-05-20 11:54:47,160 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 7 time(s).
2012-05-20 11:54:48,161 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 8 time(s).
2012-05-20 11:54:49,162 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/23.21.190.123:54310. Already tried 9 time(s).
2012-05-20 11:54:49,165 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to localhost/23.21.190.123:54310 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
    at org.apache.hadoop.ipc.Client.call(Client.java:743)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy5.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:215)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
    at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
    at org.apache.hadoop.ipc.Client.call(Client.java:720)
    ... 16 more
2012-05-20 11:54:49,168 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2012-05-20 11:54:49,168 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2012-05-20 11:54:49,168 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2012-05-20 11:54:49,169 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2012-05-20 11:54:49,169 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2012-05-20 11:54:49,169 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2012-05-20 11:54:49,169 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2012-05-20 11:54:49,169 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2012-05-20 11:54:49,170 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2012-05-20 11:54:49,170 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2012-05-20 11:54:49,170 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2012-05-20 11:54:49,170 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2012-05-20 11:54:49,170 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2012-05-20 11:54:49,171 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2012-05-20 11:54:49,194 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-05-20 11:54:49,217 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-05-20 11:54:49,217 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1376a1960900000 closed
2012-05-20 11:54:49,217 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting


I have searched a lot but could not find a solution. This is my /etc/hosts file:

127.0.0.1   localhost hbase-system
23.21.190.123   hbase.com.com hbase localhost


I also tried commenting out the localhost entry, i.e. # 127.0.0.1 localhost, but I still could not get the HMaster to run.
This is my hbase-site.xml:

<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:54310/hbase</value>
  </property>
</configuration>

And this is my Hadoop core-site.xml:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

If anyone has run into this problem, please help.
Thanks


3 Answers

  1. Add the Hadoop core jar from your Hadoop home to the HBase lib folder.
  2. Check the HBase version across the cluster; a version mismatch can cause errors.
  3. Use the IP address instead of localhost (a sketch of the changed settings follows this list).
  4. Check the SSH connectivity.
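
For item 3, a minimal sketch of what the changed settings might look like, reusing the IP address 23.21.190.123 and port 54310 that appear in the question (substitute the address of your own NameNode; whether pointing at the public IP is appropriate depends on your network setup):

<!-- core-site.xml: point fs.default.name at the machine's IP instead of localhost -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://23.21.190.123:54310</value>
</property>

<!-- hbase-site.xml: hbase.rootdir must use exactly the same host and port as fs.default.name -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://23.21.190.123:54310/hbase</value>
</property>

The key point is that the host in hbase.rootdir has to resolve to the address the NameNode is actually listening on, otherwise the master keeps retrying and aborts with the ConnectException shown in the log.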
Answered on 2012-06-07T08:41:08.383

If you are running HBase in pseudo-distributed mode, you don't need the IP; the loopback address works fine. Just make both /etc/hosts lines use 127.0.0.1. Also, add the following two properties to your hbase-site.xml file:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
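
For reference, a minimal pseudo-distributed hbase-site.xml along these lines might look as follows; this is only a sketch that combines the two properties above with the rootdir and port 54310 from the question's configuration:

<configuration>
  <!-- keep distributed mode as in the question's original configuration -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- must match the host/port of fs.default.name in core-site.xml -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:54310/hbase</value>
  </property>
  <!-- ZooKeeper on the loopback address, default client port -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>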

Apart from this, add the jar files "hadoop-core-*-.jar" from HADOOP_HOME and "commons-collections-3.2.1.jar" from the HADOOP_HOME/lib directory to the "HBASE_HOME/lib" directory.

Answered on 2012-05-22T09:14:36.883