[ceph-users] ceph-volume: migration and disk partition support

Dan van der Ster dan at vanderster.com
Tue Oct 10 01:14:32 PDT 2017

On Fri, Oct 6, 2017 at 6:56 PM, Alfredo Deza <adeza at redhat.com> wrote:
> Hi,
> Now that ceph-volume is part of the Luminous release, we've been able
> to provide filestore support for LVM-based OSDs. We are making use of
> LVM's powerful mechanisms to store metadata, which allows the process
> to no longer rely on UDEV and GPT labels (unlike ceph-disk).
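> As an illustration (the tag names here are indicative, not a stable
> interface), that metadata lives in LVM tags on the logical volume
> itself, so it can be inspected with standard LVM tooling:
>     # show the ceph-related tags that ceph-volume sets on an OSD LV
>     lvs -o lv_name,vg_name,lv_tags
>     # e.g. ceph.osd_id=0,ceph.osd_fsid=<uuid>,ceph.type=data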
> Bluestore support should be the next step for `ceph-volume lvm`, and
> while that is planned we are thinking of ways to improve the current
> caveats (like OSDs not coming up) for clusters that have deployed OSDs
> with ceph-disk.
> --- New clusters ---
> The `ceph-volume lvm` deployment is straightforward (currently
> supported in ceph-ansible), but there isn't support for plain disks
> (with partitions) currently, like there is with ceph-disk.
> Is there a pressing interest in supporting plain disks with
> partitions? Or is supporting only LVM-based OSDs fine?
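> For reference, a filestore OSD on LVM is created with something
> along these lines (the volume group and LV names are examples):
>     # data and journal as pre-created logical volumes
>     ceph-volume lvm create --filestore \
>         --data vg0/osd-data-0 --journal vg0/osd-journal-0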
> --- Existing clusters ---
> Migration to ceph-volume, even with plain disk support, means
> re-creating the OSD from scratch, which would end up moving data.
> There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
> without starting from scratch.
> A temporary workaround would be to provide a way for existing OSDs to
> be brought up without UDEV and ceph-disk, by creating logic in
> ceph-volume that could load them with systemd directly. This wouldn't
> make them LVM-based, nor would it mean there is direct support for
> them; just a temporary workaround to make them start without UDEV and
> ceph-disk.
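> To sketch the idea, the workaround would capture enough metadata
> from an existing ceph-disk OSD to start it with systemd alone; a
> hypothetical interface (names are not final) could look like:
>     # persist the OSD's metadata so UDEV is no longer needed
>     ceph-volume simple scan /var/lib/ceph/osd/ceph-0
>     # enable a systemd unit that starts the OSD directly
>     ceph-volume simple activate 0 <osd-fsid>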
> I'm interested in what current users might look for here: is it fine
> to provide this workaround if the issues are that problematic? Or is
> it OK to plan a migration towards ceph-volume OSDs?

Without fully understanding the technical details and plans, it's
hard for me to answer this.

In general, I wouldn't plan to recreate all OSDs. In our case, we
don't currently plan to recreate FileStore OSDs as Bluestore after the
Luminous upgrade, as that would be too much work. *New* OSDs will be
created the *new* way (is that ceph-disk bluestore? ceph-volume lvm
bluestore??). It wouldn't be nice if we created new OSDs today with
ceph-disk bluestore, then had to recreate all of those with
ceph-volume bluestore in a few months.
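For concreteness, these are the two deployment paths in question (the
ceph-volume bluestore form is the planned one, per the above, so its
exact flags are an assumption on my part):

    # today: ceph-disk, GPT partitions, UDEV-based activation
    ceph-disk prepare --bluestore /dev/sdb
    # planned: ceph-volume, LVM-based, no UDEV
    ceph-volume lvm create --bluestore --data /dev/sdb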

Disks/servers have a ~5 year lifetime, and we want to format OSDs
exactly once. I'd hope those OSDs remain bootable for the upcoming
releases.
(ceph-disk activation works reliably enough here -- just don't remove
the existing functionality and we'll be happy).

-- dan

> -Alfredo
