
I'm still a bit confused about how Ceph's CRUSH map works and was hoping someone could shed some light. Here is my osd tree:

core@store101 ~ $ ceph osd tree
ID  WEIGHT  TYPE NAME                                UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 -1 6.00000 root default                                                               
 -2 3.00000     datacenter dc1                                                         
 -4 3.00000         rack rack_dc1                                                      
-10 1.00000             host store101                                   
  4 1.00000                 osd.4                         up  1.00000          1.00000 
 -7 1.00000             host store102                                   
  1 1.00000                 osd.1                         up  1.00000          1.00000 
 -9 1.00000             host store103                                   
  3 1.00000                 osd.3                         up  1.00000          1.00000 
 -3 3.00000     datacenter dc2                                                         
 -5 3.00000         rack rack_dc2                                                      
 -6 1.00000             host store104                                   
  0 1.00000                 osd.0                         up  1.00000          1.00000 
 -8 1.00000             host store105                                   
  2 1.00000                 osd.2                         up  1.00000          1.00000 
-11 1.00000             host store106                                   
  5 1.00000                 osd.5                         up  1.00000          1.00000 

I'm simply trying to make sure that, when the replication size is 2 or more, not all replicas of an object end up in the same datacenter. The rule I had (pulled from the internet) is:

rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step choose firstn 2 type rack
        step chooseleaf firstn 0 type host
        step emit
}
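
For reference, a rule's mappings can be simulated offline with crushtool's test mode instead of waiting for PGs to peer. A minimal sketch, assuming the compiled map is saved to crush.bin (placeholder file name) and the rule above compiles as ruleset 0:

ceph osd getcrushmap -o crush.bin                                    # fetch the compiled CRUSH map from the cluster
crushtool -i crush.bin --test --rule 0 --num-rep 2 --show-mappings   # simulate 2-replica placements and print each mapping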

However, when I dump the placement groups I immediately see PGs with both OSDs from the same datacenter, OSDs 5 and 0:

core@store101 ~ $ ceph pg dump | grep 5,0
1.73    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.939197  0'0 96:113  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854945  0'0 2015-07-09 12:05:01.854945
1.70    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.947403  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854941  0'0 2015-07-09 12:05:01.854941
1.6f    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.947056  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854940  0'0 2015-07-09 12:05:01.854940
1.6c    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.938591  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854939  0'0 2015-07-09 12:05:01.854939
1.66    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.937803  0'0 96:107  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854936  0'0 2015-07-09 12:05:01.854936
1.67    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.929323  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854937  0'0 2015-07-09 12:05:01.854937
1.65    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.928200  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854936  0'0 2015-07-09 12:05:01.854936
1.63    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.927642  0'0 96:107  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854935  0'0 2015-07-09 12:05:01.854935
1.3f    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.924738  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854920  0'0 2015-07-09 12:05:01.854920
1.36    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.917833  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854916  0'0 2015-07-09 12:05:01.854916
1.33    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.911484  0'0 96:104  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854915  0'0 2015-07-09 12:05:01.854915
1.2b    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.878280  0'0 96:58   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854911  0'0 2015-07-09 12:05:01.854911
1.5 0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.942620  0'0 96:98   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854892  0'0 2015-07-09 12:05:01.854892

How can I make sure that at least one replica always ends up in the other DC?


1 Answer


I changed a ceph CRUSH map just yesterday to do the same thing across racks. Here is what it looks like:

ID  WEIGHT    TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 181.99979 root default
-12  90.99989     rack rack1
 -2  15.46999         host ceph0
  1   3.64000             osd.1        up  1.00000          1.00000
  0   3.64000             osd.0        up  1.00000          1.00000
  8   2.73000             osd.8        up  1.00000          1.00000
  9   2.73000             osd.9        up  1.00000          1.00000
 19   2.73000             osd.19       up  1.00000          1.00000
...
-13  90.99989     rack rack2
 -3  15.46999         host ceph2
  2   3.64000             osd.2        up  1.00000          1.00000
  3   3.64000             osd.3        up  1.00000          1.00000
 10   2.73000             osd.10       up  1.00000          1.00000
 11   2.73000             osd.11       up  1.00000          1.00000
 18   2.73000             osd.18       up  1.00000          1.00000
...

rack rack1 {
        id -12          # do not change unnecessarily
        # weight 91.000
        alg straw
        hash 0  # rjenkins1
        item ceph0 weight 15.470
        ...
}
rack rack2 {
        id -13          # do not change unnecessarily
        # weight 91.000
        alg straw
        hash 0  # rjenkins1
        item ceph2 weight 15.470
        ...
}
root default {
        id -1           # do not change unnecessarily
        # weight 182.000
        alg straw
        hash 0  # rjenkins1
        item rack1 weight 91.000
        item rack2 weight 91.000
}
rule racky {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}

Please show your "root default" section.
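
In case you don't have it handy, the decompiled map can be obtained like this (a sketch; crush.bin and crush.txt are just placeholder file names):

ceph osd getcrushmap -o crush.bin    # fetch the compiled map from the cluster
crushtool -d crush.bin -o crush.txt  # decompile it to editable text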

And try this:

rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type datacenter
        step emit
}
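
With chooseleaf firstn 0 type datacenter, CRUSH picks one distinct datacenter per replica and descends to an OSD inside each, so a size-2 pool can no longer place both copies in dc1 or dc2 (note that with only two datacenters this rule can emit at most two OSDs). Applying it would look roughly like this, as a sketch, assuming the edited text map is in crush.txt and the pool is named rbd (placeholder pool name):

crushtool -c crush.txt -o crush.new     # recompile the edited map
ceph osd setcrushmap -i crush.new       # inject it into the cluster
ceph osd pool set rbd crush_ruleset 0   # point the pool at ruleset 0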

Answered 2015-11-30T17:26:00.703