[ceph-users] Crushmap and failure domains at rack level (ideally data-center level in the future)

Bastiaan Visser bvisser at flexyz.com
Tue Oct 23 17:46:51 PDT 2018


Something must be wrong: since you have min_size 3, the pool should stop serving I/O once you take out the first rack, probably even once you take out the first host. 

What is the output of ceph osd pool get <poolname> min_size ? 

I guess it will be 2, since you did not hit a problem while taking out one rack. As soon as you take out an extra host after that, some PGs will have fewer than 2 replicas available, so I/O to those PGs blocks and recovery should start. Once all PGs are back to 2 replicas, the pool serves I/O again. 
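
For reference, this is how you can check (and, if appropriate, adjust) those pool settings with the standard ceph CLI; the pool name "mypool" below is a placeholder:

```shell
# Minimum number of replicas that must be available
# for a PG to accept I/O
ceph osd pool get mypool min_size

# Replica count, for comparison
ceph osd pool get mypool size

# With size=3, min_size=2 keeps the pool serving I/O after
# losing one replica; min_size=3 blocks I/O on any loss
ceph osd pool set mypool min_size 2
```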


From: "Waterbly, Dan" <dan.waterbly at sos.wa.gov> 
To: "ceph-users" <ceph-users at lists.ceph.com> 
Sent: Tuesday, October 23, 2018 6:44:52 PM 
Subject: [ceph-users] Crushmap and failure domains at rack level (ideally data-center level in the future) 



Hello, 



I want to create a crushmap rule that lets me lose two racks of hosts and still be able to operate. I have tried the rule below, but it only allows me to operate (rados gateway) with one rack down and two racks up. If I lose any host in the two remaining racks, my rados gateway stops responding. Here is my crushmap and rule. If anyone can point out what I am doing wrong, it would be greatly appreciated. I'm very new to ceph, so please forgive any incorrect terminology I have used. 



# begin crush map 
tunable choose_local_tries 0 
tunable choose_local_fallback_tries 0 
tunable choose_total_tries 50 
tunable chooseleaf_descend_once 1 
tunable chooseleaf_vary_r 1 
tunable chooseleaf_stable 1 
tunable straw_calc_version 1 
tunable allowed_bucket_algs 54 



# devices 
device 0 osd.0 class hdd 
device 1 osd.1 class hdd 
device 2 osd.2 class hdd 
device 3 osd.3 class hdd 
device 4 osd.4 class hdd 
device 5 osd.5 class hdd 
device 6 osd.6 class hdd 
device 7 osd.7 class hdd 
device 8 osd.8 class hdd 
device 9 osd.9 class hdd 
device 10 osd.10 class hdd 
device 11 osd.11 class hdd 
device 12 osd.12 class hdd 
device 13 osd.13 class hdd 
device 14 osd.14 class hdd 
device 15 osd.15 class hdd 
device 16 osd.16 class hdd 
device 17 osd.17 class hdd 
device 18 osd.18 class hdd 
device 19 osd.19 class hdd 
device 20 osd.20 class hdd 
device 21 osd.21 class hdd 
device 22 osd.22 class hdd 
device 23 osd.23 class hdd 
device 24 osd.24 class hdd 
device 25 osd.25 class hdd 
device 26 osd.26 class hdd 
device 27 osd.27 class hdd 
device 28 osd.28 class hdd 
device 29 osd.29 class hdd 
device 30 osd.30 class hdd 
device 31 osd.31 class hdd 
device 32 osd.32 class hdd 
device 33 osd.33 class hdd 
device 34 osd.34 class hdd 
device 35 osd.35 class hdd 
device 36 osd.36 class hdd 
device 37 osd.37 class hdd 
device 38 osd.38 class hdd 
device 39 osd.39 class hdd 
device 40 osd.40 class hdd 
device 41 osd.41 class hdd 
device 42 osd.42 class hdd 
device 43 osd.43 class hdd 
device 44 osd.44 class hdd 
device 45 osd.45 class hdd 
device 46 osd.46 class hdd 
device 47 osd.47 class hdd 
device 48 osd.48 class hdd 
device 49 osd.49 class hdd 
device 50 osd.50 class hdd 
device 51 osd.51 class hdd 
device 52 osd.52 class hdd 
device 53 osd.53 class hdd 
device 54 osd.54 class hdd 
device 55 osd.55 class hdd 
device 56 osd.56 class hdd 
device 57 osd.57 class hdd 
device 58 osd.58 class hdd 
device 59 osd.59 class hdd 
device 60 osd.60 class hdd 
device 61 osd.61 class hdd 
device 62 osd.62 class hdd 
device 63 osd.63 class hdd 
device 64 osd.64 class hdd 
device 65 osd.65 class hdd 
device 66 osd.66 class hdd 
device 67 osd.67 class hdd 
device 68 osd.68 class hdd 
device 69 osd.69 class hdd 
device 70 osd.70 class hdd 
device 71 osd.71 class hdd 
device 72 osd.72 class hdd 
device 73 osd.73 class hdd 
device 74 osd.74 class hdd 
device 75 osd.75 class hdd 
device 76 osd.76 class hdd 
device 77 osd.77 class hdd 
device 78 osd.78 class hdd 
device 79 osd.79 class hdd 
device 80 osd.80 class hdd 
device 81 osd.81 class hdd 
device 82 osd.82 class hdd 
device 83 osd.83 class hdd 
device 84 osd.84 class hdd 
device 85 osd.85 class hdd 
device 86 osd.86 class hdd 
device 87 osd.87 class hdd 
device 88 osd.88 class hdd 
device 89 osd.89 class hdd 
device 90 osd.90 class hdd 
device 91 osd.91 class hdd 
device 92 osd.92 class hdd 
device 93 osd.93 class hdd 
device 94 osd.94 class hdd 
device 95 osd.95 class hdd 
device 96 osd.96 class hdd 
device 97 osd.97 class hdd 
device 98 osd.98 class hdd 
device 99 osd.99 class hdd 
device 100 osd.100 class hdd 
device 101 osd.101 class hdd 
device 102 osd.102 class hdd 
device 103 osd.103 class hdd 
device 104 osd.104 class hdd 
device 105 osd.105 class hdd 
device 106 osd.106 class hdd 
device 107 osd.107 class hdd 
device 108 osd.108 class hdd 
device 109 osd.109 class hdd 
device 110 osd.110 class hdd 
device 111 osd.111 class hdd 
device 112 osd.112 class hdd 
device 113 osd.113 class hdd 
device 114 osd.114 class hdd 
device 115 osd.115 class hdd 
device 116 osd.116 class hdd 
device 117 osd.117 class hdd 
device 118 osd.118 class hdd 
device 119 osd.119 class hdd 
device 120 osd.120 class hdd 
device 121 osd.121 class hdd 
device 122 osd.122 class hdd 
device 123 osd.123 class hdd 
device 124 osd.124 class hdd 
device 125 osd.125 class hdd 
device 126 osd.126 class hdd 
device 127 osd.127 class hdd 
device 128 osd.128 class hdd 
device 129 osd.129 class hdd 
device 130 osd.130 class hdd 
device 131 osd.131 class hdd 
device 132 osd.132 class hdd 
device 133 osd.133 class hdd 
device 134 osd.134 class hdd 
device 135 osd.135 class hdd 
device 136 osd.136 class hdd 
device 137 osd.137 class hdd 
device 138 osd.138 class hdd 
device 139 osd.139 class hdd 
device 140 osd.140 class hdd 
device 141 osd.141 class hdd 
device 142 osd.142 class hdd 
device 143 osd.143 class hdd 



# types 
type 0 osd 
type 1 host 
type 2 chassis 
type 3 rack 
type 4 row 
type 5 pdu 
type 6 pod 
type 7 room 
type 8 datacenter 
type 9 region 
type 10 root 



# buckets 
host cephstorage02 { 
	id -3		# do not change unnecessarily 
	id -4 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.6 weight 7.277 
	item osd.10 weight 7.277 
	item osd.25 weight 7.277 
	item osd.34 weight 7.277 
	item osd.44 weight 7.277 
	item osd.55 weight 7.277 
	item osd.64 weight 7.277 
	item osd.75 weight 7.277 
	item osd.81 weight 7.277 
	item osd.95 weight 7.277 
	item osd.102 weight 7.277 
	item osd.108 weight 7.277 
	item osd.121 weight 7.277 
	item osd.126 weight 7.277 
	item osd.134 weight 7.277 
	item osd.138 weight 7.277 
} 

host cephstorage05 { 
	id -5		# do not change unnecessarily 
	id -6 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.3 weight 7.277 
	item osd.11 weight 7.277 
	item osd.23 weight 7.277 
	item osd.29 weight 7.277 
	item osd.40 weight 7.277 
	item osd.42 weight 7.277 
	item osd.51 weight 7.277 
	item osd.67 weight 7.277 
	item osd.79 weight 7.277 
	item osd.85 weight 7.277 
	item osd.90 weight 7.277 
	item osd.104 weight 7.277 
	item osd.113 weight 7.277 
	item osd.128 weight 7.277 
	item osd.140 weight 7.277 
	item osd.143 weight 7.277 
} 

host cephstorage06 { 
	id -7		# do not change unnecessarily 
	id -8 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.13 weight 7.277 
	item osd.18 weight 7.277 
	item osd.30 weight 7.277 
	item osd.38 weight 7.277 
	item osd.46 weight 7.277 
	item osd.48 weight 7.277 
	item osd.56 weight 7.277 
	item osd.59 weight 7.277 
	item osd.68 weight 7.277 
	item osd.80 weight 7.277 
	item osd.91 weight 7.277 
	item osd.101 weight 7.277 
	item osd.112 weight 7.277 
	item osd.120 weight 7.277 
	item osd.130 weight 7.277 
	item osd.141 weight 7.277 
} 

host cephstorage03 { 
	id -9		# do not change unnecessarily 
	id -10 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.2 weight 7.277 
	item osd.8 weight 7.277 
	item osd.21 weight 7.277 
	item osd.28 weight 7.277 
	item osd.36 weight 7.277 
	item osd.50 weight 7.277 
	item osd.62 weight 7.277 
	item osd.69 weight 7.277 
	item osd.72 weight 7.277 
	item osd.77 weight 7.277 
	item osd.83 weight 7.277 
	item osd.87 weight 7.277 
	item osd.103 weight 7.277 
	item osd.118 weight 7.277 
	item osd.129 weight 7.277 
	item osd.136 weight 7.277 
} 

host cephstorage04 { 
	id -11		# do not change unnecessarily 
	id -12 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.4 weight 7.277 
	item osd.14 weight 7.277 
	item osd.20 weight 7.277 
	item osd.27 weight 7.277 
	item osd.37 weight 7.277 
	item osd.45 weight 7.277 
	item osd.58 weight 7.277 
	item osd.65 weight 7.277 
	item osd.71 weight 7.277 
	item osd.82 weight 7.277 
	item osd.94 weight 7.277 
	item osd.98 weight 7.277 
	item osd.107 weight 7.277 
	item osd.119 weight 7.277 
	item osd.122 weight 7.277 
	item osd.131 weight 7.277 
} 

host cephstorage07 { 
	id -13		# do not change unnecessarily 
	id -14 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.0 weight 7.277 
	item osd.12 weight 7.277 
	item osd.22 weight 7.277 
	item osd.33 weight 7.277 
	item osd.43 weight 7.277 
	item osd.52 weight 7.277 
	item osd.57 weight 7.277 
	item osd.63 weight 7.277 
	item osd.70 weight 7.277 
	item osd.78 weight 7.277 
	item osd.88 weight 7.277 
	item osd.100 weight 7.277 
	item osd.110 weight 7.277 
	item osd.115 weight 7.277 
	item osd.123 weight 7.277 
	item osd.137 weight 7.277 
} 

host cephstorage01 { 
	id -15		# do not change unnecessarily 
	id -16 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.5 weight 7.277 
	item osd.15 weight 7.277 
	item osd.26 weight 7.277 
	item osd.35 weight 7.277 
	item osd.41 weight 7.277 
	item osd.54 weight 7.277 
	item osd.66 weight 7.277 
	item osd.74 weight 7.277 
	item osd.93 weight 7.277 
	item osd.99 weight 7.277 
	item osd.109 weight 7.277 
	item osd.114 weight 7.277 
	item osd.124 weight 7.277 
	item osd.132 weight 7.277 
	item osd.139 weight 7.277 
	item osd.142 weight 7.277 
} 

host cephstorage08 { 
	id -17		# do not change unnecessarily 
	id -18 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.1 weight 7.277 
	item osd.9 weight 7.277 
	item osd.16 weight 7.277 
	item osd.19 weight 7.277 
	item osd.32 weight 7.277 
	item osd.47 weight 7.277 
	item osd.49 weight 7.277 
	item osd.60 weight 7.277 
	item osd.73 weight 7.277 
	item osd.84 weight 7.277 
	item osd.89 weight 7.277 
	item osd.96 weight 7.277 
	item osd.105 weight 7.277 
	item osd.117 weight 7.277 
	item osd.127 weight 7.277 
	item osd.135 weight 7.277 
} 

host cephstorage09 { 
	id -19		# do not change unnecessarily 
	id -20 class hdd		# do not change unnecessarily 
	# weight 116.437 
	alg straw2 
	hash 0	# rjenkins1 
	item osd.7 weight 7.277 
	item osd.17 weight 7.277 
	item osd.24 weight 7.277 
	item osd.31 weight 7.277 
	item osd.39 weight 7.277 
	item osd.53 weight 7.277 
	item osd.61 weight 7.277 
	item osd.76 weight 7.277 
	item osd.86 weight 7.277 
	item osd.92 weight 7.277 
	item osd.97 weight 7.277 
	item osd.106 weight 7.277 
	item osd.111 weight 7.277 
	item osd.116 weight 7.277 
	item osd.125 weight 7.277 
	item osd.133 weight 7.277 
} 



rack rack01 { 
	id -101 
	alg straw2 
	hash 0 
	item cephstorage01 weight 116.437 
	item cephstorage02 weight 116.437 
	item cephstorage03 weight 116.437 
} 
rack rack02 { 
	id -102 
	alg straw2 
	hash 0 
	item cephstorage04 weight 116.437 
	item cephstorage05 weight 116.437 
	item cephstorage06 weight 116.437 
} 
rack rack03 { 
	id -103 
	alg straw2 
	hash 0 
	item cephstorage07 weight 116.437 
	item cephstorage08 weight 116.437 
	item cephstorage09 weight 116.437 
} 



datacenter dc-1 { 
	id -201 
	alg straw2 
	item rack01 weight 349.311 
	item rack02 weight 349.311 
	item rack03 weight 349.311 
} 
region us-west { 
	id -301 
	alg straw2 
	item dc-1 weight 1047.933 
} 
root default { 
	id -1		# do not change unnecessarily 
	id -2 class hdd		# do not change unnecessarily 
	# weight 1047.931 
	alg straw2 
	item wa-east weight 1047.933 
} 



# rules 
rule replicated_rule { 
	id 0 
	type replicated 
	min_size 3 
	max_size 3 
	step take default 
	step choose firstn 0 type datacenter 
	step chooseleaf firstn 0 type rack 
	step emit 
} 
# end crush map 
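
As an aside, one way to sanity-check how a rule places replicas, without touching the live cluster, is crushtool's test mode (the filename crushmap.bin below is a placeholder):

```shell
# Dump the cluster's compiled crushmap to a file
ceph osd getcrushmap -o crushmap.bin

# Simulate placements for rule 0 with 3 replicas and show
# which OSDs each sample input maps to
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings

# List only the mappings that came back incomplete
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-bad-mappings
```

Marking OSDs out in the decompiled map (or testing with --weight adjustments) lets you preview what happens when a rack goes down before it does.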



Dan Waterbly | Senior Application Developer | 509.235.7500 x225 | dan.waterbly at sos.wa.gov 


WASHINGTON STATE ARCHIVES | DIGITAL ARCHIVES 



_______________________________________________ 
ceph-users mailing list 
ceph-users at lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 