[ceph-users] OSD_ORPHAN issues after jewel->luminous upgrade

Gregory Farnum gfarnum at redhat.com
Thu Dec 7 14:25:22 PST 2017


Can you dump your osd map and post it in a tracker ticket? Or, if you're
not comfortable with that, you can upload it with ceph-post-file so that
only developers will be able to see it.
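Something along these lines should capture both the osdmap and the crush
map (the /tmp paths are just examples, not required locations):

    # export the current osdmap and crush map in binary form
    ceph osd getmap -o /tmp/osdmap.bin
    ceph osd getcrushmap -o /tmp/crushmap.bin

    # optional: decompile the crush map to text for a readable copy
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

    # upload the binaries; only developers can retrieve them
    ceph-post-file /tmp/osdmap.bin /tmp/crushmap.bin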
-Greg

On Thu, Dec 7, 2017 at 10:36 AM Graham Allan <gta at umn.edu> wrote:

> Just updated a fairly long-lived (originally firefly) cluster from jewel
> to luminous 12.2.2.
>
> One of the issues I see is a new health warning:
>
> OSD_ORPHAN 3 osds exist in the crush map but not in the osdmap
>      osd.2 exists in crush map but not in osdmap
>      osd.14 exists in crush map but not in osdmap
>      osd.19 exists in crush map but not in osdmap
>
> Seemed reasonable enough; these low-numbered OSDs were on
> long-decommissioned hardware. I thought I had removed them completely,
> though, and it seems I had:
>
> > # ceph osd crush ls osd.2
> > Error ENOENT: node 'osd.2' does not exist
> > # ceph osd crush remove osd.2
> > device 'osd.2' does not appear in the crush map
>
> so I wonder where it's getting this warning from, and if it's erroneous,
> how can I clear it?
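>
> I guess the device list in the raw crush map could also be inspected
> directly; something like this should show any stale entries (paths are
> just examples):
>
> >     # export and decompile the crush map, then check its device list
> >     ceph osd getcrushmap -o /tmp/crushmap.bin
> >     crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
> >     grep '^device' /tmp/crushmap.txt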
>
> Graham
> --
> Graham Allan
> Minnesota Supercomputing Institute - gta at umn.edu
>