[Ceph-community] Glance and RBD export checksum mismatch

Christian Wuerdig christian.wuerdig at gmail.com
Fri Apr 19 15:11:36 PDT 2019


ceph-users at lists.ceph.com is the better list to ask questions like these.

On Wed, 10 Apr 2019 at 17:39, Brayan Perera <brayan.perera at gmail.com> wrote:

> Dear All,
>
> Ceph Version: 12.2.5-2.ge988fb6.el7
>
> We are facing an issue with Glance when its backend is set to Ceph:
> when we try to create an instance or volume from an image, it throws a
> checksum error.
> When we run rbd export on the image and pipe the result through md5sum,
> the value matches the checksum stored in Glance.
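> For reference, the export check was along these lines, with the
> 'images' pool and 'snap' snapshot names taken from the script below:
>
> [root@storage01moc ~]# rbd export images/ffed4088-74e1-4f22-86cb-35e7e97c377c@snap - | md5sum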
>
> When we run the following script, it reproduces the same erroneous
> checksum that Glance computes.
>
> We have used below images for testing.
> 1. Failing image (checksum mismatch): ffed4088-74e1-4f22-86cb-35e7e97c377c
> 2. Passing image (checksum identical): c048f0f9-973d-4285-9397-939251c80a84
>
> Output from storage node:
>
> 1. Failing image: ffed4088-74e1-4f22-86cb-35e7e97c377c
> checksum from glance database: 34da2198ec7941174349712c6d2096d8
> [root@storage01moc ~]# python test_rbd_format.py ffed4088-74e1-4f22-86cb-35e7e97c377c admin
> Image size: 681181184
> checksum from ceph: b82d85ae5160a7b74f52be6b5871f596
> Remarks: checksum is different
>
> 2. Passing image: c048f0f9-973d-4285-9397-939251c80a84
> checksum from glance database: 4f977f748c9ac2989cff32732ef740ed
> [root@storage01moc ~]# python test_rbd_format.py c048f0f9-973d-4285-9397-939251c80a84 admin
> Image size: 1411121152
> checksum from ceph: 4f977f748c9ac2989cff32732ef740ed
> Remarks: checksum is identical
>
> We are wondering whether this issue comes from the Ceph Python
> libraries or from Ceph itself.
>
> Please note that we do not have ceph pool tiering configured.
>
> Please let us know whether anyone has faced a similar issue, and
> whether there are any known fixes for it.
>
> test_rbd_format.py
> ===================================================
> import hashlib
> import sys
>
> import rados
> import rbd
> import six
>
> image_id = sys.argv[1]
> try:
>     rados_id = sys.argv[2]
> except IndexError:
>     rados_id = 'openstack'
>
>
> class ImageIterator(object):
>     """
>     Reads data from an RBD image, one chunk at a time.
>     """
>
>     # chunk_size is in bytes; 8 MiB is an assumed sensible default.
>     # The original passed the string '8' here, which under Python 2's
>     # mixed-type comparison rules silently degenerates into reading the
>     # whole remaining image in a single call.
>     def __init__(self, conn, pool, name, snapshot, store,
>                  chunk_size=8 * 1024 * 1024):
>         self.pool = pool
>         self.conn = conn
>         self.name = name
>         self.snapshot = snapshot
>         self.chunk_size = chunk_size
>         self.store = store  # unused; kept to mirror the glance_store signature
>
>     def __iter__(self):
>         try:
>             with self.conn.open_ioctx(self.pool) as ioctx:
>                 with rbd.Image(ioctx, self.name,
>                                snapshot=self.snapshot) as image:
>                     size = image.stat()['size']
>                     bytes_left = size
>                     while bytes_left > 0:
>                         length = min(self.chunk_size, bytes_left)
>                         data = image.read(size - bytes_left, length)
>                         bytes_left -= len(data)
>                         yield data
>                     # Just fall off the end when done: raising
>                     # StopIteration inside a generator is an error
>                     # under PEP 479.
>         except rbd.ImageNotFound:
>             raise RuntimeError('RBD image %s does not exist' % self.name)
>
>
> conn = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id=rados_id)
> conn.connect()
>
> with conn.open_ioctx('images') as ioctx:
>     with rbd.Image(ioctx, image_id, snapshot='snap') as image:
>         print("Image size: %s" % image.stat()['size'])
>
> md5sum = hashlib.md5()
> for chunk in ImageIterator(conn, 'images', image_id, 'snap', 'rbd'):
>     if isinstance(chunk, six.string_types):
>         chunk = six.b(chunk)
>     md5sum.update(chunk)
> print("checksum from ceph: " + md5sum.hexdigest())
>
> conn.shutdown()
> ===================================================
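>
> As a further cross-check (a minimal sketch, assuming the same 'images'
> pool, 'snap' snapshot and 'admin' rados user as above), one can compare
> a single whole-image read against a chunked read, to see whether the
> mismatch comes from the chunked read path in the Python binding:
>
> import hashlib
> import sys
>
> import rados
> import rbd
>
> image_id = sys.argv[1]
> conn = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='admin')
> conn.connect()
> with conn.open_ioctx('images') as ioctx:
>     with rbd.Image(ioctx, image_id, snapshot='snap') as image:
>         size = image.stat()['size']
>         # One read covering the whole image (fine while images fit in RAM).
>         whole = hashlib.md5(image.read(0, size)).hexdigest()
>         # The same bytes read again in 8 MiB chunks.
>         chunk = 8 * 1024 * 1024
>         chunked = hashlib.md5()
>         for off in range(0, size, chunk):
>             chunked.update(image.read(off, min(chunk, size - off)))
>         print("whole-image read: %s" % whole)
>         print("chunked read:     %s" % chunked.hexdigest())
> conn.shutdown()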
>
>
> Thank You !
> --
> Best Regards,
> Brayan Perera
> _______________________________________________
> Ceph-community mailing list
> Ceph-community at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com
>