[ceph-users] ceph-disk is now deprecated

Alfredo Deza adeza at redhat.com
Tue Nov 28 04:32:11 PST 2017

On Tue, Nov 28, 2017 at 3:39 AM, Piotr Dałek <piotr.dalek at corp.ovh.com> wrote:
> On 17-11-28 09:12 AM, Wido den Hollander wrote:
>>> Op 27 november 2017 om 14:36 schreef Alfredo Deza <adeza at redhat.com>:
>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>> deprecation information has been added, which will try to raise
>>> awareness.
>> As much as I like ceph-volume and the work being done, is it really a good
>> idea to use a minor release to deprecate a tool?
>> Can't we just introduce ceph-volume and deprecate ceph-disk at the release
>> of M? Because when you upgrade to 12.2.2, existing integrations suddenly
>> get deprecation warnings thrown at them even though they haven't moved
>> to a new major version.
>> As ceph-deploy doesn't support ceph-volume yet either, I don't think it's
>> a good idea to deprecate ceph-disk right now.
>> How do others feel about this?
> Same, although we don't have a *big* problem with this (we haven't upgraded
> to Luminous yet, so we can skip to next point release and move to
> ceph-volume together with Luminous). It's still a problem, though - now we
> have more of our infrastructure to migrate and test, meaning even more
> delays in production upgrades.

I understand that this would involve a significant effort to fully
port over and drop ceph-disk entirely, and I don't think that dropping
ceph-disk in Mimic is set in stone (yet).

We could treat Luminous as a "soft" deprecation where ceph-disk will
still receive bug-fixes, and then in Mimic, it would be frozen - with
no updates whatsoever.

At some point a migration will have to happen for older clusters,
which is why we've added support in ceph-volume for taking over
existing OSDs. An upgrade to Luminous doesn't mean ceph-disk will
stop working; the only thing that has been added to ceph-disk is a
deprecation warning.
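For a node with OSDs originally created by ceph-disk, the takeover path looks roughly like the sketch below. This is only an outline as I understand it; device paths are examples, and the exact flags may differ between point releases:

```shell
# Assumes a Luminous (>= 12.2.2) node with OSDs created by ceph-disk.
# /dev/sdb1 and /dev/sdc are placeholder device paths.

# Scan an existing ceph-disk OSD data partition; this records a JSON
# description of the OSD under /etc/ceph/osd/ so ceph-volume can manage it.
ceph-volume simple scan /dev/sdb1

# Activate all scanned OSDs, so startup no longer depends on ceph-disk's
# udev/GPT-based activation.
ceph-volume simple activate --all

# New OSDs would be created through the lvm subcommand instead:
ceph-volume lvm create --data /dev/sdc
```

After the "simple" takeover the OSDs keep their existing data; only the activation mechanism changes.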

> --
> Piotr Dałek
> piotr.dalek at corp.ovh.com
> https://www.ovh.com/us/
