[ceph-users] Reboot 1 OSD server, now ceph says 60% misplaced?
treed at ultraviolet.org
Sun Nov 19 03:41:03 PST 2017
On Sun, Nov 19, 2017 at 02:41:56AM PST, Gregory Farnum spake thusly:
> Okay, so the hosts look okay (although very uneven numbers of OSDs).
> But the sizes are pretty wonky. Are the disks really that mismatched
> in size? I note that many of them in host10 are set to 1.0, but most
> of the others are some fraction less than that.
Yes, they are that mismatched. This is a very mix-and-match cluster we
built out of what we had lying around. I know that isn't ideal.
Possibly due to the large mismatch in disk sizes (although I had always
expected CRUSH to manage it better given the default weighting
proportional to size) we used to run into situations where the small
disks would fill up even when the large disks were barely at 50%. So
back in June we ran bc-ceph-reweight-by-utilization.py fairly frequently
for a few days until things were happy and stable and it stayed that way
until tonight's incident.
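(For anyone who hits the same imbalance, the built-in utilization-based
reweighting boils down to roughly the following; 120 is just the default
overload threshold, and I assume the bc-ceph-reweight-by-utilization.py
script does a finer-grained version of the same idea:

    # dry run: report which OSDs would have their reweight adjusted and by how much
    ceph osd test-reweight-by-utilization 120
    # apply it (120 = overload threshold, percent of average utilization)
    ceph osd reweight-by-utilization 120
    # then check the WEIGHT vs REWEIGHT columns to see where things landed
    ceph osd df tree
)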
I'm pretty sure you are right: the weights got reset to defaults, causing
lots of movement. I had forgotten that ceph osd reweight is not a
persistent setting. So it looks like once things settle I need to adjust
the crush weights appropriately and set the reweights back to 1 to make
this stick.
That explains it. Thanks!
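For the record, the distinction that bit me looks roughly like this
(osd.12 and the weight value are just placeholders):

    # crush weight is persistent (stored in the crush map; by convention = disk size in TiB)
    ceph osd crush reweight osd.12 1.81940
    # the reweight override (0..1) is what gets reset to 1.0 when an OSD is
    # marked out and then back in, e.g. across a reboot
    ceph osd reweight 12 1.0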
Digital signature attached for your safety.