[ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

Massimo Sgaravatto massimo.sgaravatto at gmail.com
Wed Mar 13 22:42:25 PDT 2019


[root@c-mon-01 /]# ceph osd df tree
ID  CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE VAR  PGS TYPE NAME
 -1       1.95190        - 1.95TiB 88.4GiB 1.87TiB    0    0   - root default
 -2             0        -      0B      0B      0B    0    0   -     rack Rack15-PianoAlto
 -3       0.39038        -  400GiB 18.9GiB  381GiB 4.74 1.07   -         host c-osd-1
  0   hdd 0.09760  1.00000  100GiB 5.42GiB 94.5GiB 5.43 1.23  77             osd.0
  1   hdd 0.09760  1.00000  100GiB 2.65GiB 97.3GiB 2.65 0.60  71             osd.1
  2   hdd 0.09760  1.00000  100GiB 3.68GiB 96.3GiB 3.68 0.83  66             osd.2
  3   hdd 0.09760  1.00000  100GiB 7.19GiB 92.8GiB 7.19 1.63  62             osd.3
 -4       0.39038        -  400GiB 17.0GiB  383GiB 4.24 0.96   -         host c-osd-2
  4   hdd 0.09760  1.00000  100GiB 7.54GiB 92.4GiB 7.55 1.71  70             osd.4
  5   hdd 0.09760  1.00000  100GiB 3.36GiB 96.6GiB 3.36 0.76  71             osd.5
  6   hdd 0.09760  1.00000  100GiB 54.1MiB 99.9GiB 0.05 0.01  67             osd.6
  7   hdd 0.09760  1.00000  100GiB 6.01GiB 93.9GiB 6.01 1.36  59             osd.7
 -5       0.39038        -  400GiB 20.7GiB  379GiB 5.17 1.17   -         host c-osd-3
  8   hdd 0.09760  1.00000  100GiB 6.70GiB 93.2GiB 6.71 1.52  63             osd.8
  9   hdd 0.09760  1.00000  100GiB 4.93GiB 95.0GiB 4.94 1.12  70             osd.9
 10   hdd 0.09760  1.00000  100GiB 4.11GiB 95.8GiB 4.11 0.93  71             osd.10
 11   hdd 0.09760  1.00000  100GiB 4.92GiB 95.0GiB 4.92 1.11  59             osd.11
-11       0.78076        -  800GiB 31.8GiB  768GiB 3.98 0.90   -         host c-osd-5
 12   hdd 0.09760  1.00000  100GiB 4.39GiB 95.6GiB 4.39 0.99  47             osd.12
 13   hdd 0.09760  1.00000  100GiB 4.48GiB 95.5GiB 4.48 1.01  41             osd.13
 14   hdd 0.09760  1.00000  100GiB 3.69GiB 96.3GiB 3.69 0.84  45             osd.14
 15   hdd 0.09760  1.00000  100GiB 3.63GiB 96.4GiB 3.63 0.82  39             osd.15
 16   hdd 0.09760  1.00000  100GiB 3.48GiB 96.5GiB 3.48 0.79  47             osd.16
 17   hdd 0.09760  1.00000  100GiB 4.35GiB 95.6GiB 4.35 0.98  44             osd.17
 18   hdd 0.09760  1.00000  100GiB 3.57GiB 96.4GiB 3.57 0.81  46             osd.18
 19   hdd 0.09760  1.00000  100GiB 4.23GiB 95.8GiB 4.23 0.96  37             osd.19
                     TOTAL 1.95TiB 88.4GiB 1.87TiB 4.42

MIN/MAX VAR: 0.01/1.71  STDDEV: 1.64
[root@c-mon-01 /]#
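For what it's worth, the empty "(compat)" column for osd.12-19 in the crush tree below suggests the compat weight-set predates those OSDs, so the balancer in crush-compat mode has nothing to adjust for them. A possible cleanup sketch, assuming the stock `ceph osd crush weight-set` subcommands of Luminous and later (check against your release before running; the recovery steps themselves are an assumption, not something confirmed in this thread):

```shell
# List existing weight-sets; the "(compat)" column in
# `ceph osd crush tree` comes from the compat weight-set shown here
ceph osd crush weight-set ls

# Option A: remove the compat weight-set entirely; the balancer in
# crush-compat mode should recreate one covering all current OSDs
ceph osd crush weight-set rm-compat

# Option B: keep it and seed the missing entries by hand
# (osd.12..osd.19 here, each at its CRUSH weight of 0.09760)
for id in $(seq 12 19); do
    ceph osd crush weight-set reweight-compat "osd.${id}" 0.09760
done
```

With option A, re-enabling the balancer (`ceph balancer on`) should repopulate the compat weights on its next optimization pass.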


On Thu, Mar 14, 2019 at 6:30 AM Konstantin Shalygin <k0ste at k0ste.ru> wrote:

> I have a cluster where for some OSD the weight-set is defined, while for
> other OSDs it is not [*].
>
> The OSDs with weight-set defined are Filestore OSDs created years ago using
> "ceph-disk prepare"
>
> The OSDs where the weight-set is not defined are Bluestore OSDs installed
> recently using "ceph-volume".
>
>
> I think this problem explains why I am no longer able to use the
> balancer (after having added these new OSDs).
>
> Any "clean" procedure to fix this mess is appreciated!
>
> Thanks, Massimo
>
>
>
> [*]
>
> [root@c-mon-01 /]# ceph osd crush tree
> ID  CLASS WEIGHT  (compat) TYPE NAME
>  -1       1.95190          root default
>  -2             0        0     rack Rack15-PianoAlto
>  -3       0.39038  0.38974     host c-osd-1
>   0   hdd 0.09760  0.09988         osd.0
>   1   hdd 0.09760  0.10040         osd.1
>   2   hdd 0.09760  0.08945         osd.2
>   3   hdd 0.09760  0.10001         osd.3
>  -4       0.39038  0.39076     host c-osd-2
>   4   hdd 0.09760  0.10275         osd.4
>   5   hdd 0.09760  0.10081         osd.5
>   6   hdd 0.09760  0.10135         osd.6
>   7   hdd 0.09760  0.08585         osd.7
>  -5       0.39038  0.39055     host c-osd-3
>   8   hdd 0.09760  0.10622         osd.8
>   9   hdd 0.09760  0.09148         osd.9
>  10   hdd 0.09760  0.10164         osd.10
>  11   hdd 0.09760  0.09122         osd.11
> -11       0.78076  0.78076     host c-osd-5
>  12   hdd 0.09760                  osd.12
>  13   hdd 0.09760                  osd.13
>  14   hdd 0.09760                  osd.14
>  15   hdd 0.09760                  osd.15
>  16   hdd 0.09760                  osd.16
>  17   hdd 0.09760                  osd.17
>  18   hdd 0.09760                  osd.18
>  19   hdd 0.09760                  osd.19
> [root@c-mon-01 /]#
>
>
> Please, show your `ceph osd df tree`.
>
>
>
> k
>

