
I'm running a YARN MR job (using 2 EC2 instances for the mapreduce) over a dataset of about 1,000 Avro records, and the map phase is behaving erratically; see the progress output below. Of course I checked the resourcemanager and nodemanager logs, but found nothing suspicious there, though those logs are far too verbose.

What is going on here?

        hive> select * from nikon where qs_cs_s_aid='VIEW' limit 10;

        Total MapReduce jobs = 1
        Launching Job 1 out of 1
        Number of reduce tasks is set to 0 since there's no reduce operator
        Starting Job = job_1352281315350_0020, Tracking URL = http://blabla.ec2.internal:8088/proxy/application_1352281315350_0020/
        Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=blabla.com:8032 -kill job_1352281315350_0020
        Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0

        2012-11-07 11:14:40,976 Stage-1 map = 0%,  reduce = 0%
        2012-11-07 11:15:06,136 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 10.38 sec
        2012-11-07 11:15:07,253 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:08,371 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:09,491 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:10,643 Stage-1 map = 2%,  reduce = 0%, Cumulative CPU 15.42 sec
        (...)
        2012-11-07 11:15:35,441 Stage-1 map = 28%,  reduce = 0%, Cumulative CPU 37.77 sec
        2012-11-07 11:15:36,486 Stage-1 map = 28%,  reduce = 0%, Cumulative CPU 37.77 sec

Here it restarts at 16%?

        2012-11-07 11:15:37,692 Stage-1 map = 16%,  reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:38,815 Stage-1 map = 16%,  reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:39,865 Stage-1 map = 16%,  reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:41,064 Stage-1 map = 18%,  reduce = 0%, Cumulative CPU 22.4 sec
        2012-11-07 11:15:42,181 Stage-1 map = 18%,  reduce = 0%, Cumulative CPU 22.4 sec
        2012-11-07 11:15:43,299 Stage-1 map = 18%,  reduce = 0%, Cumulative CPU 22.4 sec

Here it restarts at 0%?

        2012-11-07 11:15:44,418 Stage-1 map = 0%,  reduce = 0%
        2012-11-07 11:16:02,076 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 6.86 sec
        2012-11-07 11:16:03,193 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 6.86 sec
        2012-11-07 11:16:04,259 Stage-1 map = 2%,  reduce = 0%, Cumulative CPU 8.45 sec
        (...)
        2012-11-07 11:16:31,291 Stage-1 map = 22%,  reduce = 0%, Cumulative CPU 35.34 sec
        2012-11-07 11:16:32,414 Stage-1 map = 26%,  reduce = 0%, Cumulative CPU 37.93 sec

Here it restarts at 11%?

        2012-11-07 11:16:33,459 Stage-1 map = 11%,  reduce = 0%, Cumulative CPU 19.53 sec
        2012-11-07 11:16:34,507 Stage-1 map = 11%,  reduce = 0%, Cumulative CPU 19.53 sec
        2012-11-07 11:16:35,731 Stage-1 map = 13%,  reduce = 0%, Cumulative CPU 21.47 sec
        (...)
        2012-11-07 11:16:46,839 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 24.14 sec

Here it restarts at 0%?

        2012-11-07 11:16:47,939 Stage-1 map = 0%,  reduce = 0%
        2012-11-07 11:16:56,653 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 7.54 sec
        2012-11-07 11:16:57,814 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 7.54 sec
        (...)

Needless to say, after a while the job crashes with an error: java.io.IOException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: -56

1 Answer

This looks like Hadoop retrying map tasks on failure (by default it retries each one 3 times, on a different host each time), which makes the job more fault-tolerant.
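For reference, the number of attempts is governed by the mapreduce.map.maxattempts property (4 attempts by default, i.e. 3 retries after the first failure). A minimal sketch, assuming you want the real exception to surface immediately while debugging rather than being masked by retries, is to lower it from the Hive session:

        -- inside the Hive CLI: fail the job after the first map attempt
        -- so the underlying exception shows up right away
        set mapreduce.map.maxattempts=1;

        -- then re-run the query that triggers the problem
        select * from nikon where qs_cs_s_aid='VIEW' limit 10;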

This helps when the failure is caused by a transient problem on a particular host (which happens more often than you might think). In your case, however, something in your Hive query really is throwing an ArrayIndexOutOfBoundsException. I would check the logs of the failed tasks and try to debug why.
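If YARN log aggregation is enabled on the cluster (yarn.log-aggregation-enable=true, an assumption about your setup), you can pull all the container logs for the run with the yarn CLI, using the application id visible in your tracking URL; otherwise the per-attempt logs are reachable from that same tracking URL in the ResourceManager web UI. A sketch:

        # fetch the aggregated container logs for the failed application
        yarn logs -applicationId application_1352281315350_0020 | less

        # or go straight to the stack trace of the failing attempts
        yarn logs -applicationId application_1352281315350_0020 \
            | grep -A 20 "ArrayIndexOutOfBoundsException"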

answered 2012-11-07 at 16:58