[ceph-users] osd reweight = pgs stuck unclean

John Petrini jpetrini at coredial.com
Wed Nov 7 06:05:02 PST 2018


Hello,

I've got a small development cluster that shows some strange behavior
that I'm trying to understand.

If I reduce the weight of an OSD, for example with "ceph osd reweight
<osd-id> 0.9", Ceph starts moving data, but recovery then stalls and a
few PGs remain stuck unclean. If I set all the weights back to 1, the
cluster goes healthy again.
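For reference, the sequence looks roughly like this (osd.11 is just an
arbitrary example; the same happens with other OSDs):

    ceph osd reweight 11 0.9    # backfill starts, then stalls
    ceph health detail          # shows a few PGs stuck unclean
    ceph pg dump_stuck unclean  # lists the affected PGs
    ceph osd reweight 11 1      # cluster returns to HEALTH_OK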

This cluster is running an older version, 0.94.6 (Hammer).

Here's the OSD tree:

ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 8.24982 root default
-2 2.74994     host node-10
11 0.54999         osd.11        up  1.00000          1.00000
 3 0.54999         osd.3         up  1.00000          1.00000
12 0.54999         osd.12        up  1.00000          1.00000
 0 0.54999         osd.0         up  1.00000          1.00000
 6 0.54999         osd.6         up  1.00000          1.00000
-3 2.74994     host node-11
 8 0.54999         osd.8         up  1.00000          1.00000
15 0.54999         osd.15        up  1.00000          1.00000
 9 0.54999         osd.9         up  1.00000          1.00000
 2 0.54999         osd.2         up  1.00000          1.00000
13 0.54999         osd.13        up  1.00000          1.00000
-4 2.74994     host node-3
 4 0.54999         osd.4         up  1.00000          1.00000
 5 0.54999         osd.5         up  1.00000          1.00000
 7 0.54999         osd.7         up  1.00000          1.00000
 1 0.54999         osd.1         up  1.00000          1.00000
10 0.54999         osd.10        up  1.00000          1.00000
