[ceph-users] VM Data corruption shortly after Luminous Upgrade

Jason Dillaman jdillama at redhat.com
Wed Nov 8 07:53:24 PST 2017


Are your QEMU VMs using a different CephX user than client.admin? If so,
can you double-check your caps to ensure that the QEMU user is permitted
to blacklist other clients? See step 6 in the upgrade instructions [1].
The fact that "rbd resize" fixed things hints that your VMs had
hard-crashed with the exclusive lock still held, and that QEMU wasn't
able to break the stale lock when the VMs were restarted. Running the
resize as client.admin would have broken that lock, which would explain
why the images booted afterwards.

[1]
http://docs.ceph.com/docs/master/release-notes/#upgrade-from-jewel-or-kraken
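
Checking and fixing the caps is quick. A sketch (the user name
"client.qemu" and the OSD cap below are placeholders; keep whatever OSD
caps your user already has):

  # show the current caps for the user QEMU/librbd connects as
  ceph auth get client.qemu

  # allow the "osd blacklist" mon command per step 6 of the notes
  ceph auth caps client.qemu \
      mon 'allow r, allow command "osd blacklist"' \
      osd 'allow rwx pool=vms'

If the mon cap is only "allow r", a restarted client cannot blacklist
the dead lock owner and the image stays wedged.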


On Wed, Nov 8, 2017 at 10:29 AM, James Forde <jimf at mninc.net> wrote:

> Title probably should have read “Ceph Data corruption shortly after
> Luminous Upgrade”
>
>
>
> Problem seems to have been sorted out. Still not sure what caused the
> original problem; upgrade latency? mgr errors?
>
> After I resolved the boot problem I attempted to reproduce the error,
> but was unsuccessful, which is good: HEALTH_OK.
>
>
>
> Anyway, for future users running into Windows "Unmountable Boot Volume" or
> CentOS7 booting to emergency mode, HERE IS THE SOLUTION.
>
>
>
> Get the rbd image size, increase it by 1GB, and restart the VM. That's it.
> All VMs booted right up after I increased the rbd image by 1024MB. It takes
> just a couple of seconds.
>
> $ rbd info vmtest
> rbd image 'vmtest':
>         size 20480 MB
>
> $ rbd resize --image vmtest --size 21504
>
> $ rbd info vmtest
> rbd image 'vmtest':
>         size 21504 MB
>
> Good luck
>
>
>
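As an aside, for anyone who hits this again: instead of resizing the
image, you can usually confirm and clear the stale lock directly. A
sketch (the lock id and locker below are made-up placeholders; use the
real values printed by "rbd lock ls"):

  # see whether a crashed client still holds the exclusive lock
  rbd lock ls vmtest

  # remove the stale lock using the id and locker from the output
  rbd lock rm vmtest "auto 93489538474480" client.24153

Once the lock is gone, the VM should boot and reacquire it normally.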


-- 
Jason