[ceph-users] Reboot 1 OSD server, now ceph says 60% misplaced?
gfarnum at redhat.com
Sun Nov 19 01:53:47 PST 2017
On Sun, Nov 19, 2017 at 8:43 PM Tracy Reed <treed at ultraviolet.org> wrote:
> One of my 9 ceph osd nodes just spontaneously rebooted.
> This particular osd server only holds 4% of total storage.
> Why, after it has come back up and rejoined the cluster, does ceph
> health say that 60% of my objects are misplaced? I'm wondering if I
> have something set up wrong in my cluster. This cluster has been
> operating well for the most part for about a year but I have noticed
> this sort of behavior before. This is going to take many hours to
> recover. Ceph 10.2.3.
> Thanks for any insights you may be able to provide!
Can you include the results of "ceph osd dump" and your crush map?
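For reference, that information can be gathered with something like the following (a sketch, assuming you run it on a host with an admin keyring; `crushtool` ships with Ceph):

```shell
# Dump the full OSD map: epochs, pool settings, per-OSD state
ceph osd dump

# Show the CRUSH hierarchy as Ceph currently sees it --
# useful for spotting OSDs that ended up under the wrong host/bucket
ceph osd tree

# Export the binary CRUSH map and decompile it to readable text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
```

Comparing `crushmap.txt` from before and after the reboot (if you have an old copy) would show whether the rebooted OSDs changed position in the hierarchy.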
It sounds rather as if your OSDs moved themselves in the crush map when
they rebooted. I'm not aware of any reason that should happen in Jewel
(although some people saw something similar on upgrading to Luminous
with certain crush configurations).
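If the OSDs did move, one common cause is the OSD start-up hook re-running the crush-location logic and re-inserting each OSD under whatever host/rack it detects, clobbering a hand-built layout. Ceph has a standard option to disable that behavior; a minimal ceph.conf sketch (the option name is real, the rest of the file is assumed):

```ini
[osd]
# Do not let OSDs update their own position in the CRUSH map on start;
# with this set, CRUSH placement is managed only by explicit admin commands.
osd crush update on start = false
```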
> Tracy Reed