[ceph-users] Raw space usage in Ceph with Bluestore

Paul Emmerich paul.emmerich at croit.io
Wed Nov 28 13:58:46 PST 2018


You can get all the details from the admin socket of the OSDs:

ceph daemon osd.X perf dump

(must be run on the server the OSD is running on)

Examples of relevant metrics are bluestore_allocated/bluestore_stored and
the bluefs block of counters for metadata.
Running ceph daemon osd.X perf schema prints descriptions of the
individual metrics.
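For instance, the allocation overhead can be computed from the dump's JSON. This is only a sketch, not from the original mail: the inline sample below is made up, and a real dump (obtained with ceph daemon osd.X perf dump) contains many more counters.

```python
import json

# Hypothetical excerpt of a perf dump; a real one has many more sections.
# On a live OSD host you would obtain the JSON with:
#   ceph daemon osd.0 perf dump
sample = """
{
  "bluestore": {
    "bluestore_allocated": 6442450944,
    "bluestore_stored": 4294967296
  }
}
"""

bs = json.loads(sample)["bluestore"]
allocated = bs["bluestore_allocated"]  # bytes allocated on the block device
stored = bs["bluestore_stored"]        # bytes of logical object data stored

# Allocation overhead within this one OSD (replication/EC overhead is on top).
overhead = allocated / stored
print(f"allocated={allocated} stored={stored} overhead={overhead:.2f}x")
```

Summing bluestore_allocated across all OSDs gives the really-raw usage the question below asks about.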



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Wed, Nov 28, 2018 at 22:28, Glider, Jody <j.glider at sap.com> wrote:
>
> Hello,
>
> I’m trying to find a way to determine the real/physical/raw storage capacity used when storing a similar set of objects in different pools, for example a 3-way replicated pool versus a 4+2 erasure-coded pool, and in particular how this ratio changes from small object sizes (where BlueStore’s minimum allocation size matters more) to large ones.
>
> I find that ceph df detail and rados df don’t report the really-raw storage consumed, I guess because each perceives ‘raw’ storage from its own perspective only. If I write a set of objects to each pool, rados df shows the space used as the sum of the logical sizes of the objects, while ceph df detail shows the raw used storage as the object size times the redundancy factor (e.g. 3 for 3-way replication and 1.5 for 4+2 erasure coding).
>
> Any suggestions?
>
> Jody Glider, Principal Storage Architect
>
> Cloud Architecture and Engineering, SAP Labs LLC
>
> 3412 Hillview Ave (PAL 02 23.357), Palo Alto, CA 94304
>
> E   j.glider at sap.com, T   +1 650-320-3306, M   +1 650-441-0241
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
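The ratios described in the question can be checked with a back-of-the-envelope model. raw_usage below is a hypothetical helper, not a Ceph API: it only models per-chunk rounding up to BlueStore's allocation unit (assumed 4 KiB here; real defaults vary by release and device class) and ignores metadata, compression, and replication of the EC coding chunks' metadata.

```python
def raw_usage(obj_size, copies=1, k=1, m=0, min_alloc=4096):
    """Rough raw bytes consumed by one object: every stored chunk is
    rounded up to BlueStore's min_alloc_size."""
    if m:  # erasure coding: k data chunks + m coding chunks
        chunk = -(-obj_size // k)                    # ceil(obj_size / k)
        chunk = -(-chunk // min_alloc) * min_alloc   # round up to alloc unit
        return chunk * (k + m)
    padded = -(-obj_size // min_alloc) * min_alloc
    return padded * copies

one_mib = 1 << 20
# Large objects: overhead approaches the nominal redundancy factor.
print(raw_usage(one_mib, copies=3) / one_mib)  # 3.0 for 3-way replication
print(raw_usage(one_mib, k=4, m=2) / one_mib)  # 1.5 for 4+2 EC
# Small objects: per-chunk rounding dominates, and EC can cost more than 3x.
print(raw_usage(4096, copies=3) / 4096)        # 3.0
print(raw_usage(4096, k=4, m=2) / 4096)        # 6.0
```

This matches the observation in the question: the 3x and 1.5x factors only hold for objects much larger than the allocation unit, which is why comparing bluestore_allocated against bluestore_stored across object sizes is informative.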

