[ceph-users] Degraded objects after: ceph osd in $osd
stefan at bit.nl
Sun Nov 25 11:53:57 PST 2018
Another interesting and unexpected thing we observed during cluster
expansion is the following. After we added extra disks to the cluster,
while the "norebalance" flag was set, we marked the new OSDs "in". As soon as
we did that, a couple of hundred objects became degraded. During
that time no OSD crashed or restarted. Every "ceph osd crush add $osd
weight host=$storage-node" caused additional degraded objects.
I don't expect objects to become degraded when extra OSDs are added.
Misplaced, yes. Degraded, no.
Does anyone have an explanation for this?
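For reference, the expansion sequence described above was roughly the following (a sketch reconstructed from this post; $osd, <weight>, and $storage-node are placeholders, not values from a real cluster):

```shell
# 1. Prevent data movement while the new disks are being added.
ceph osd set norebalance

# 2. Mark a new OSD "in" -- this is the point where the first
#    degraded objects were observed.
ceph osd in $osd

# 3. Place the OSD in the CRUSH map with its weight; each such step
#    caused additional degraded objects.
ceph osd crush add $osd <weight> host=$storage-node

# 4. Once all new OSDs are placed, allow rebalancing again.
ceph osd unset norebalance
```

These commands require a running Ceph cluster and admin credentials; they are shown only to make the observed sequence explicit.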
| BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351
| GPG: 0xD14839C6 +31 318 648 688 / info at bit.nl