[ceph-users] ceph-volume: migration and disk partition support

Stefan Kooman stefan at bit.nl
Tue Oct 10 00:28:31 PDT 2017


Hi,

Quoting Alfredo Deza (adeza at redhat.com):
> Hi,
> 
> Now that ceph-volume is part of the Luminous release, we've been able
> to provide filestore support for LVM-based OSDs. We are making use of
> LVM's powerful mechanisms to store metadata which allows the process
> to no longer rely on UDEV and GPT labels (unlike ceph-disk).
> 
> Bluestore support should be the next step for `ceph-volume lvm`, and
> while that is planned we are thinking of ways to improve the current
> caveats (like OSDs not coming up) for clusters that have deployed OSDs
> with ceph-disk.
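
(Side note: as far as I understand it, that metadata ends up as LVM tags
on the logical volume itself, so it can be inspected with plain LVM
tooling. The exact tag names below are from memory, so take them as an
illustration rather than gospel:

  $ sudo lvs -o lv_name,lv_tags ceph-vg/osd-data
  LV        LV Tags
  osd-data  ceph.osd_id=0,ceph.osd_fsid=...,ceph.type=data,ceph.journal_device=/dev/sdc1

Nothing in there depends on UDEV events or GPT partition GUIDs, which is
presumably the whole point.)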

I'm a bit confused after reading this, so just to make things clear:
would bluestore be put on top of an LVM volume (in an ideal world)?
Does bluestore in Ceph Luminous have support for LVM, i.e. is there
code in bluestore itself to support LVM? Or is it _just_ `ceph-volume
lvm` that gains support for bluestore?
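
To make the question concrete: today one would do something like

  $ ceph-volume lvm create --filestore --data ceph-vg/osd-data --journal /dev/sdc1

(volume group and LV names are just placeholders), and I would expect
the bluestore variant to end up looking like

  $ ceph-volume lvm create --bluestore --data ceph-vg/osd-data

but the --bluestore flag is just my guess at the eventual syntax, not
something I have tested.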

> --- New clusters ---
> The `ceph-volume lvm` deployment is straightforward (currently
> supported in ceph-ansible), but there isn't support for plain disks
> (with partitions) currently, like there is with ceph-disk.
> 
> Is there a pressing interest in supporting plain disks with
> partitions? Or is only supporting LVM-based OSDs fine?

We're still in a greenfield situation, so users with an installed base
will have to comment on this. If the assumption that bluestore will be
put on top of LVM holds, it would make things simpler in our own Ceph
Ansible playbook.
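
Concretely, I would expect such a playbook to boil down to something
like this per disk (device and volume names are just placeholders):

  $ sudo pvcreate /dev/sdb
  $ sudo vgcreate ceph-sdb /dev/sdb
  $ sudo lvcreate -l 100%FREE -n osd-data ceph-sdb
  $ sudo ceph-volume lvm create --filestore --data ceph-sdb/osd-data --journal /dev/sdc1

i.e. plain LVM plus a single ceph-volume call, with no partition table
or GPT GUID handling on our side.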

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info at bit.nl
