[ceph-users] features required for live migration

Jason Dillaman jdillama at redhat.com
Tue Nov 14 07:56:30 PST 2017


From the documentation [1]:

shareable
If present, this indicates the device is expected to be shared between
domains (assuming the hypervisor and OS support this), which means that
caching should be deactivated for that device.

Basically, it's the use-case for putting a clustered file system (or
similar) on top of the block device. For the vast majority of cases, you
shouldn't enable this in libvirt.

[1] https://libvirt.org/formatdomain.html#elementsDisks
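
For illustration, this is roughly where that element would sit in a libvirt
domain definition. The pool/image name, monitor host, and secret UUID below
are placeholders, not values from this thread; note the stanza deliberately
omits <shareable/>, per the advice above:

```xml
<!-- Hypothetical RBD-backed disk stanza for a libvirt domain.
     Pool/image, host, and secret UUID are placeholders. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='libvirt-pool/my-vm-disk'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
  <!-- A clustered-filesystem use case would add <shareable/> here;
       for ordinary VM disks on RBD, leave it out. -->
</disk>
```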

On Tue, Nov 14, 2017 at 10:49 AM, Oscar Segarra <oscar.segarra at gmail.com>
wrote:

> Hi Jason,
>
> The big use-case for sharing a block device is if you set up a clustered
> file system on top of it, and I'd argue that you'd probably be better off
> using CephFS.
> --> Nice to know!
>
> Thanks a lot for your clarifications. In this case I was referring to the
> shareable flag that one can see in KVM. I'd like to know the suggested
> configuration for rbd images and live migration.
>
> [image: Inline image 1]
>
> Thanks a lot.
>
> 2017-11-14 16:36 GMT+01:00 Jason Dillaman <jdillama at redhat.com>:
>
>> On Tue, Nov 14, 2017 at 10:25 AM, Oscar Segarra <oscar.segarra at gmail.com>
>> wrote:
>> > In my environment, I have an up-to-date CentOS 7 install; therefore,
>> > all features should work as expected.
>> >
>> > Regarding the other question, do you suggest making the virtual disk
>> > "shareable" in rbd?
>>
>> Assuming you are referring to the "--image-shared" option when creating
>> an image, the answer is no. That is just a short-cut to disable all
>> features that depend on the exclusive lock. The big use-case for
>> sharing a block device is if you set up a clustered file system on top
>> of it, and I'd argue that you'd probably be better off using CephFS.
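
To make that short-cut concrete, here is a rough Python sketch of which
feature bits "--image-shared" strips out. The bit values match the
RBD_FEATURE_* constants, but the function itself is an illustration, not
librbd's actual code:

```python
# Hypothetical sketch of rbd feature bits and the effect of
# "rbd create --image-shared" (bit values match RBD_FEATURE_*).
LAYERING       = 1 << 0   # 1
STRIPING_V2    = 1 << 1   # 2
EXCLUSIVE_LOCK = 1 << 2   # 4
OBJECT_MAP     = 1 << 3   # 8
FAST_DIFF      = 1 << 4   # 16
DEEP_FLATTEN   = 1 << 5   # 32
JOURNALING     = 1 << 6   # 64

# Features that only work while one client holds the exclusive lock.
LOCK_DEPENDENT = EXCLUSIVE_LOCK | OBJECT_MAP | FAST_DIFF | JOURNALING

def image_shared(features: int) -> int:
    """Approximate '--image-shared': clear every feature that
    depends on the exclusive lock."""
    return features & ~LOCK_DEPENDENT

# A typical default feature set: layering, exclusive-lock,
# object-map, fast-diff, deep-flatten (= 61).
default = LAYERING | EXCLUSIVE_LOCK | OBJECT_MAP | FAST_DIFF | DEEP_FLATTEN
print(image_shared(default))  # 33 = layering | deep-flatten
```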
>>
>> > Thanks a lot
>> >
>> > 2017-11-14 15:58 GMT+01:00 Jason Dillaman <jdillama at redhat.com>:
>> >>
>> >> Concur -- there aren't any RBD image features that should prevent live
>> >> migration when using a compatible version of librbd. If, however, you
>> >> had two hosts where librbd versions were out-of-sync and they didn't
>> >> support the same features, you could hit an issue if a VM with fancy
>> >> new features was live migrated to a host where those features aren't
>> >> supported since the destination host wouldn't be able to open the
>> >> image.
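
The mismatch described above boils down to a bitmask check: the destination
host can open the image only if its librbd understands every feature bit the
image has enabled. A small hypothetical sketch (feature values as in the
RBD_FEATURE_* constants; this is not librbd's actual code):

```python
# Hypothetical sketch of the feature-compatibility check behind a
# failed live migration to a host with an older librbd.
LAYERING, EXCLUSIVE_LOCK, OBJECT_MAP, FAST_DIFF = 1, 4, 8, 16

def can_open(image_features: int, librbd_supported: int) -> bool:
    """True if the destination librbd supports all enabled features."""
    return (image_features & ~librbd_supported) == 0

old_librbd = LAYERING | EXCLUSIVE_LOCK                      # older host
new_image = LAYERING | EXCLUSIVE_LOCK | OBJECT_MAP | FAST_DIFF

print(can_open(new_image, old_librbd))  # False -> migration would fail
```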
>> >>
>> >> On Tue, Nov 14, 2017 at 7:55 AM, Cassiano Pilipavicius
>> >> <cassiano at tips.com.br> wrote:
>> >> > Hi Oscar, exclusive-locking should not interfere with live
>> >> > migration. I have a small virtualization cluster backed by ceph/rbd
>> >> > and I can migrate all the VMs whose RBD images have exclusive-lock
>> >> > enabled without any issue.
>> >> >
>> >> >
>> >> >
>> >> > On 11/14/2017 9:47 AM, Oscar Segarra wrote:
>> >> >
>> >> > Hi Konstantin,
>> >> >
>> >> > Thanks a lot for your advice...
>> >> >
>> >> > I'm especially interested in the "exclusive locking" feature. Can
>> >> > enabling this feature affect live/offline migration? In this
>> >> > scenario (online/offline migration) I don't know whether the two
>> >> > hosts (source and destination) need access to the same rbd image at
>> >> > the same time.
>> >> >
>> >> > It looks like enabling exclusive locking lets you enable some other
>> >> > interesting features, such as "object map" and/or "fast diff", for
>> >> > backups.
>> >> >
>> >> > Thanks a lot!
>> >> >
>> >> > 2017-11-14 12:26 GMT+01:00 Konstantin Shalygin <k0ste at k0ste.ru>:
>> >> >>
>> >> >> On 11/14/2017 06:19 PM, Oscar Segarra wrote:
>> >> >>
>> >> >> What I'm trying to do is read the documentation in order to
>> >> >> understand how the features work and what they are for.
>> >> >>
>> >> >> http://tracker.ceph.com/issues/15000
>> >> >>
>> >> >>
>> >> >> I would also be happy to read about which features have downsides.
>> >> >>
>> >> >>
>> >> >> The problem is that documentation is not detailed enough.
>> >> >>
>> >> >> The trial-and-error method you suggest is, I think, not a good
>> >> >> procedure, because I want to avoid corruption in the future due to
>> >> >> a bad configuration.
>> >> >>
>> >> >>
>> >> >> So my recommendation: if you can wait, maybe you will receive new
>> >> >> information about the features from somewhere. Otherwise, you can
>> >> >> set minimal features (like '3'), which is enough for virtualization
>> >> >> (snapshots, clones).
>> >> >>
>> >> >> And start your project.
>> >> >>
>> >> >> --
>> >> >> Best regards,
>> >> >> Konstantin Shalygin
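
For the record, the numeric value mentioned above is just a feature bitmask
(as passed via "rbd_default_features" or "--image-feature"). A small
hypothetical decoder, with bit values matching the RBD_FEATURE_* constants,
shows what '3' turns on:

```python
# Hypothetical decoder for the numeric rbd feature bitmask.
FEATURE_BITS = {
    1:  "layering",        # required for snapshots and clones
    2:  "striping",
    4:  "exclusive-lock",
    8:  "object-map",
    16: "fast-diff",
    32: "deep-flatten",
    64: "journaling",
}

def decode(mask: int) -> list[str]:
    """Return the feature names enabled in a numeric feature mask."""
    return [name for bit, name in FEATURE_BITS.items() if mask & bit]

print(decode(3))   # ['layering', 'striping'] -- the minimal set above
print(decode(61))  # a typical default feature set
```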
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > _______________________________________________
>> >> > ceph-users mailing list
>> >> > ceph-users at lists.ceph.com
>> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Jason
>> >
>> >
>>
>>
>>
>> --
>> Jason
>>
>
>


-- 
Jason