[ceph-users] Raw space usage in Ceph with Bluestore

Igor Fedotov ifedotov at suse.de
Wed Nov 28 14:05:44 PST 2018


Hi Jody,

yes, this is a known issue.

Indeed, currently 'ceph df detail' reports raw space usage in the GLOBAL 
section and 'logical' usage in the POOLS one, and the logical numbers have 
some flaws.

There is a pending PR targeting Nautilus to fix that:

https://github.com/ceph/ceph/pull/19454

If you want to do the analysis at exactly the per-pool level, this PR is 
the only means to do so, AFAIK.


If per-cluster stats are fine, then you can also inspect the corresponding 
OSD performance counters and sum them over all OSDs to get per-cluster info.

This is the most precise, though quite inconvenient, method for low-level 
per-OSD space analysis.
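
These counters can be dumped per OSD via the admin socket on the 
corresponding OSD host, for example (assuming the default admin socket 
setup):

   ceph daemon osd.0 perf dump

which produces output like: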

  "bluestore": {
...
        "bluestore_allocated": 655360,  # space allocated by BlueStore for the specific OSD
        "bluestore_stored": 34768,      # amount of data stored in BlueStore for the specific OSD
...

Please note that aggregate numbers built from these counters include all 
of the replication/EC overhead, and that the difference between 
bluestore_stored and bluestore_allocated is due to allocation overhead 
and/or applied compression.
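
To aggregate over the whole cluster, something like the following should 
work (just a rough sketch, assuming jq is installed, that it is run on 
every OSD host, and that the admin sockets are in the default 
/var/run/ceph location):

   for sock in /var/run/ceph/ceph-osd.*.asok; do
       ceph daemon "$sock" perf dump
   done | jq -s '{
       # sum the per-OSD counters; the result is raw usage and hence
       # includes the replication/EC overhead
       total_allocated: map(.bluestore.bluestore_allocated) | add,
       total_stored:    map(.bluestore.bluestore_stored)    | add
   }'

If OSDs are spread over several hosts, the per-host sums still have to be 
added up afterwards.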


Thanks,

Igor


On 11/29/2018 12:27 AM, Glider, Jody wrote:
>
> Hello,
>
> I’m trying to find a way to determine real/physical/raw storage 
> capacity usage when storing a similar set of objects in different 
> pools, for example a 3-way replicated pool vs. a 4+2 erasure coded 
> pool, and in particular how this ratio changes from small (where 
> Bluestore block size matters more) to large object sizes.
>
> I find that 'ceph df detail' and 'rados df' don't report truly raw 
> storage, I guess because they each perceive 'raw' storage from their 
> own perspective only. If I write a set of objects to each pool, rados df 
> shows the space used as the sum of the logical sizes of the objects, 
> while ceph df detail shows the raw used storage as the object size * the 
> redundancy factor (e.g. 3 for 3-way replication and 1.5 for 4+2 erasure 
> code).
>
> Any suggestions?
>
> Jody Glider, Principal Storage Architect
>
> Cloud Architecture and Engineering, SAP Labs LLC
>
> 3412 Hillview Ave (PAL 02 23.357), Palo Alto, CA 94304
>
> E j.glider at sap.com, T +1 650-320-3306, M +1 650-441-0241
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
