[ceph-users] Fastest way to find raw device from OSD-ID? (osd -> lvm lv -> lvm pv -> disk)

Paul Emmerich paul.emmerich at croit.io
Mon Oct 8 14:51:16 PDT 2018


Yeah, it's usually hanging in some low-level LVM tool (typically lvs).
They unfortunately like to get stuck indefinitely on some hardware
failures, and there isn't really anything that can be done about that.
But we've found that it's far more reliable to call lvs ourselves
instead of relying on ceph-volume lvm list when trying to detect OSDs
in a server; I'm not sure what else ceph-volume does that sometimes just hangs.
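For what it's worth, a minimal sketch of that direct lvs approach. The LV path and
tag values below are made-up sample data and the awk parsing is my own illustration,
not ceph-volume's code; on a live node you would run
lvs --noheadings -o lv_path,lv_tags,devices instead of echoing a canned line
(the devices column also shows the underlying PV, i.e. the raw disk):

```shell
# Map an OSD id to its LV by parsing the ceph.osd_id entry out of lv_tags.
# On a real node, replace the echo with:
#   lvs --noheadings -o lv_path,lv_tags,devices
# The sample line below is hypothetical, for illustration only.
sample='/dev/ceph-vg/osd-block-0 ceph.osd_fsid=11111111-2222-3333-4444-555555555555,ceph.osd_id=3'
echo "$sample" | awk '{
  n = split($2, tags, ",")                # lv_tags is a comma-separated list
  for (i = 1; i <= n; i++)
    if (tags[i] ~ /^ceph\.osd_id=/) {
      sub(/^ceph\.osd_id=/, "", tags[i])  # keep only the id value
      print "osd." tags[i], "->", $1
    }
}'
```

With the sample line this prints "osd.3 -> /dev/ceph-vg/osd-block-0"; from the LV
path you can then follow pvs/lvs -o +devices down to the physical disk.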

One thing I've learned from working with *a lot* of different hardware
from basically all vendors: every command can hang when you have a disk
that died in some bad way. Sometimes there are workarounds, such as
invoking the necessary tools once per disk instead of once for
all disks.
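That per-disk approach can be sketched roughly as below. The helper name
probe_disk and the 10-second limit are my assumptions, not an existing tool;
the idea is just to wrap each probe in coreutils timeout so one dead disk
can't stall the scan of all the others:

```shell
# Run a diagnostic command against one device with a hard time limit, so a
# disk that hangs on I/O can't block probing of the remaining disks.
# probe_disk and the 10s limit are hypothetical; substitute the real tool
# (smartctl, lvs, pvs, ...) for the command passed in "$@".
probe_disk() {
  dev=$1; shift
  if ! timeout 10 "$@" "$dev"; then
    echo "probe of $dev failed or timed out" >&2
    return 1
  fi
}

# Example: probe every sd* device individually instead of all at once.
for dev in /dev/sd[a-z]; do
  [ -e "$dev" ] || continue   # glob may not match anything on this machine
  probe_disk "$dev" ls -l     # harmless stand-in for e.g. smartctl -a
done
```

A failing or hung probe then affects only that one device's invocation
instead of a single command that touches every disk.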


Paul

On Mon, Oct 8, 2018 at 11:34 PM Alfredo Deza <adeza at redhat.com> wrote:
>
> On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich <paul.emmerich at croit.io> wrote:
> >
> > ceph-volume unfortunately doesn't handle completely hanging IOs too
> > well compared to ceph-disk.
>
> Not sure I follow, would you mind expanding on what you mean by
> "ceph-volume unfortunately doesn't handle completely hanging IOs" ?
>
> ceph-volume just provisions the OSD, nothing else. If LVM is hanging,
> there is nothing we could do there, just like ceph-disk wouldn't be
> able to do anything if the partitioning tool hung.
>
>
>
> > It needs to read actual data from each
> > disk and it'll just hang completely if any of the disks doesn't
> > respond.
> >
> > The low-level command to get the information from LVM is:
> >
> > lvs -o lv_tags
> >
> > this allows you to map a LV to an OSD id.
> >
> >
> > Paul
> > On Mon, Oct 8, 2018 at 12:09 PM Kevin Olbrich <ko at sv01.de> wrote:
> > >
> > > Hi!
> > >
> > > Yes, thank you. At least on one node this works; the other node just freezes, but that might be caused by a bad disk that I'm trying to find.
> > >
> > > Kevin
> > >
> > > On Mon, Oct 8, 2018 at 12:07 PM Wido den Hollander <wido at 42on.com> wrote:
> > >>
> > >> Hi,
> > >>
> > >> $ ceph-volume lvm list
> > >>
> > >> Does that work for you?
> > >>
> > >> Wido
> > >>
> > >> On 10/08/2018 12:01 PM, Kevin Olbrich wrote:
> > >> > Hi!
> > >> >
> > >> > Is there an easy way to find raw disks (e.g. sdd/sdd1) by OSD id?
> > >> > Before I migrated from filestore with simple-mode to bluestore with lvm,
> > >> > I was able to find the raw disk with "df".
> > >> > Now, I need to go from LVM LV to PV to disk every time I need to
> > >> > check/smartctl a disk.
> > >> >
> > >> > Kevin
> > >> >
> > >> >
> > >> > _______________________________________________
> > >> > ceph-users mailing list
> > >> > ceph-users at lists.ceph.com
> > >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >> >
> > >
> >



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

