[ceph-users] Fastest way to find raw device from OSD-ID? (osd -> lvm lv -> lvm pv -> disk)

Paul Emmerich paul.emmerich at croit.io
Mon Oct 8 14:03:46 PDT 2018


ceph-volume unfortunately doesn't handle hanging IO as gracefully as
ceph-disk did: it needs to read actual data from each disk and will
hang completely if any one of the disks doesn't respond.

The low-level command to get the information from LVM is:

lvs -o lv_tags

This lets you map each LV to an OSD id.
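As a rough sketch of how to go the other way (OSD id -> LV -> raw disk) with that output: the tag names below (ceph.osd_id, ceph.type) are the ones ceph-volume sets on bluestore LVs, but the sample output, VG/LV names and the exact field layout here are made up for illustration -- run the real `lvs` on your node instead of the echo.

```shell
OSD_ID=3

# Stand-in for: lvs --noheadings -o lv_path,lv_tags
# (hypothetical sample output in ceph-volume's tag format)
lvs_output='  /dev/ceph-vg/osd-block-aaa ceph.osd_id=3,ceph.type=block
  /dev/ceph-vg/osd-block-bbb ceph.osd_id=12,ceph.type=block'

# Pick the LV whose tag list contains exactly ceph.osd_id=$OSD_ID
# (the (^|,) ... (,|$) anchors avoid matching osd_id=30 for id 3)
LV_PATH=$(echo "$lvs_output" \
  | awk -v id="$OSD_ID" '$2 ~ ("(^|,)ceph.osd_id=" id "(,|$)") {print $1}')
echo "$LV_PATH"

# On a real node you would then resolve LV -> PV -> raw disk with e.g.:
#   lvs --noheadings -o devices "$LV_PATH"
```

From there the device column gives you the disk to point smartctl at. Note this still runs `lvs`, so on a node with a completely hanging disk it can block just like ceph-volume does.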


Paul
Am Mo., 8. Okt. 2018 um 12:09 Uhr schrieb Kevin Olbrich <ko at sv01.de>:
>
> Hi!
>
> Yes, thank you. At least on one node this works; the other node just freezes, but that might be caused by a bad disk that I'm trying to find.
>
> Kevin
>
> Am Mo., 8. Okt. 2018 um 12:07 Uhr schrieb Wido den Hollander <wido at 42on.com>:
>>
>> Hi,
>>
>> $ ceph-volume lvm list
>>
>> Does that work for you?
>>
>> Wido
>>
>> On 10/08/2018 12:01 PM, Kevin Olbrich wrote:
>> > Hi!
>> >
>> > Is there an easy way to find raw disks (e.g. sdd/sdd1) by OSD id?
>> > Before I migrated from filestore with simple-mode to bluestore with lvm,
>> > I was able to find the raw disk with "df".
>> > Now, I need to go from LVM LV to PV to disk every time I need to
>> > check/smartctl a disk.
>> >
>> > Kevin
>> >
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users at lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>



--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

