[ceph-users] VM Data corruption shortly after Luminous Upgrade

James Forde jimf at mninc.net
Wed Nov 8 14:36:34 PST 2017

Wow, Thanks for the heads-up Jason. That explains a lot. I followed the instructions here http://ceph.com/releases/v12-2-0-luminous-released/ which apparently left out that step. I have now executed that command.

Is there a new master list of the CLI commands?

From: Jason Dillaman [mailto:jdillama at redhat.com]
Sent: Wednesday, November 8, 2017 9:53 AM
To: James Forde <jimf at mninc.net>
Cc: ceph-users at lists.ceph.com
Subject: Re: [ceph-users] VM Data corruption shortly after Luminous Upgrade

Are your QEMU VMs using a different CephX user than client.admin? If so, can you double-check your caps to ensure that the QEMU user can blacklist? See step 6 in the upgrade instructions [1]. The fact that "rbd resize" fixed something hints that your VMs had hard-crashed with the exclusive lock left in the locked position and QEMU wasn't able to break the lock when the VMs were restarted.
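For reference, the caps check described above can be done from the admin node roughly like this (the user name `client.qemu` and pool name `vms` are examples; substitute your own):

```shell
# Show the caps currently granted to the QEMU CephX user:
ceph auth get client.qemu

# On Luminous, the 'profile rbd' caps include permission to
# blacklist dead clients, which is what lets QEMU break a stale
# exclusive lock after a hard crash:
ceph auth caps client.qemu mon 'profile rbd' osd 'profile rbd pool=vms'
```

This is a sketch of step 6 from the upgrade notes, not a verbatim copy; check the linked release notes for the exact caps recommended for your setup.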

[1] http://docs.ceph.com/docs/master/release-notes/#upgrade-from-jewel-or-kraken

On Wed, Nov 8, 2017 at 10:29 AM, James Forde <jimf at mninc.net<mailto:jimf at mninc.net>> wrote:
Title probably should have read “Ceph Data corruption shortly after Luminous Upgrade”

Problem seems to have been sorted out. Still not sure what caused the original problem; upgrade latency? mgr errors?
After I resolved the boot problem I attempted to reproduce the error, but was unsuccessful, which is good. HEALTH_OK

Anyway, to future users running into Windows "Unmountable Boot Volume", or CentOS7 boot to emergency mode, HERE IS SOLUTION.

Get the rbd image size and increase it by 1GB, then restart the VM. That's it. All VMs booted right up after increasing the rbd image by 1024MB. Takes just a couple of seconds.

rbd info vmtest
rbd image 'vmtest':
        size 20480 MB

rbd resize --image vmtest --size 21504

rbd info vmtest
rbd image 'vmtest':
        size 21504 MB
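Given Jason's explanation, the resize most likely worked because it forced the client to take over the stale exclusive lock. An alternative, more direct approach would be to inspect and remove the stale lock by hand (pool name `vms` is an example; the lock id and locker come from the list output):

```shell
# List any lingering exclusive locks on the image:
rbd lock list vms/vmtest

# Remove a stale lock using the id and locker shown by the list command:
rbd lock remove vms/vmtest <lock-id> <locker>
```

Note this is a sketch based on the standard rbd lock subcommands, not something I tested against this failure; fixing the CephX caps as described above is the proper long-term fix.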

Good luck

ceph-users mailing list
ceph-users at lists.ceph.com<mailto:ceph-users at lists.ceph.com>
