[ceph-users] Ceph cluster uses substantially more disk space after rebalancing

vitalif at yourcmc.ru
Fri Nov 2 07:55:21 PDT 2018


> If you simply multiply the number of objects by the rbd object size
> you get 7611672*4M ~= 29T, and that is what you should see in the USED
> field, and 29/2*3 = 43.5T of raw space.
> Unfortunately I have no idea why they consume less; probably because
> not all objects are fully written.
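
For reference, that arithmetic works out like this (assuming a k=2, m=1 EC
profile, which the 29/2*3 figure implies):

$ echo "7611672 * 4 / 1024^2" | bc -l   # objects * 4 MiB object size -> ~29.04 TiB
$ echo "29 * 3 / 2" | bc -l             # raw space for a 2+1 EC pool -> 43.5 TiB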

It seems some objects correspond to snapshots, and BlueStore is smart enough 
to use copy-on-write (virtual clones) for them, so they aren't provisioned 
at all...

...UNTIL REBALANCE.
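
One way to look at those virtual clones directly is rados listsnaps on one of 
the data objects (the object name below is only an example built from the 
block_name_prefix shown in the rbd info output further down):

$ rados -p ecpool_hdd listsnaps rbd_data.15.3d3e1d6b8b4567.0000000000000000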

> What ceph version?

Mimic 13.2.2

> Can you show osd config, "ceph daemon osd.0 config show"?

See the attachment. It mostly contains defaults; only the following 
variables are overridden in /etc/ceph/ceph.conf:

[osd]
rbd_op_threads = 4
osd_op_queue = mclock_opclass
osd_max_backfills = 2
bluestore_prefer_deferred_size_ssd = 1
bdev_enable_discard = true
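
To double-check which of those overrides actually took effect on a running 
OSD, the admin socket can be queried per option (osd.0 is just an example 
here, and the JSON below is roughly what it prints):

$ ceph daemon osd.0 config get osd_op_queue
{
    "osd_op_queue": "mclock_opclass"
}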

> Can you show some "rbd info ecpool_hdd/rbd_name"?

[root at sill-01 ~]# rbd info rpool_hdd/rms-201807-golden
rbd image 'rms-201807-golden':
         size 14 TiB in 3670016 objects
         order 22 (4 MiB objects)
         id: 3d3e1d6b8b4567
         data_pool: ecpool_hdd
         block_name_prefix: rbd_data.15.3d3e1d6b8b4567
         format: 2
         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
         op_features:
         flags:
         create_timestamp: Tue Aug  7 13:00:10 2018
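
Since the image has object-map and fast-diff enabled, "rbd du" should be cheap 
to run and shows how much of the 14 TiB is actually provisioned, per snapshot; 
that makes it easy to compare against what the pool reports before and after 
the rebalance:

$ rbd du rpool_hdd/rms-201807-golden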
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: config.json
URL: <http://lists.ceph.com/pipermail/ceph-users-ceph.com/attachments/20181102/2ed7e89e/attachment.ksh>

