[ceph-users] CEPH Cluster Usage Discrepancy

Sergey Malinin hell at newmail.com
Sun Oct 21 09:29:42 PDT 2018


It is just a block size and it has no impact on data safety, except that the OSDs need to be redeployed in order for them to create bluefs with the given block size.
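
For illustration only, here is a rough back-of-the-envelope sketch in Python. It assumes the 64KB discussed below refers to BlueStore's minimum allocation unit (bluestore_min_alloc_size, which defaulted to 64KiB for HDD OSDs in this era), so each small object is padded up to one allocation unit per replica; the 4KiB variant is a hypothetical comparison, not a recommendation:

    # Estimate raw space consumed when every object is padded up to the
    # minimum allocation size and then replicated.
    def raw_usage(objects, object_size, min_alloc=64 * 1024, replicas=3):
        per_object = max(object_size, min_alloc)  # small objects still occupy a full unit
        return objects * per_object * replicas

    # ~24.5M 1,000-byte objects, 3x replication, 64KiB allocation unit:
    print(raw_usage(24_550_943, 1_000) / 2**40)                      # ~4.4 TiB
    # The same objects with a hypothetical 4KiB allocation unit:
    print(raw_usage(24_550_943, 1_000, min_alloc=4 * 1024) / 2**40)  # ~0.27 TiB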


> On 21.10.2018, at 19:04, Waterbly, Dan <dan.waterbly at sos.wa.gov> wrote:
> 
> Thanks Sergey!
> 
> Do you know where I can find details on the repercussions of adjusting this value? Performance (reads/writes), for once, is not critical for us; data durability and disaster recovery are our focus.
> 
> -Dan
> 
> 
> 
> On Sun, Oct 21, 2018 at 8:37 AM -0700, "Sergey Malinin" <hell at newmail.com> wrote:
> 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024589.html
> 
> 
>> On 21.10.2018, at 16:12, Waterbly, Dan <dan.waterbly at sos.wa.gov> wrote:
>> 
>> Awesome! Thanks Serkan!
>> 
>> Do you know where the 64KB comes from? Can that be tuned down for a cluster holding smaller objects?
>> 
>> 
>> 
>> On Sat, Oct 20, 2018 at 10:49 PM -0700, "Serkan Çoban" <cobanserkan at gmail.com> wrote:
>> 
>> You have 24M objects, not 2.4M.
>> Each object will eat 64KB of storage, so 24M objects use about 1.5TB of storage.
>> Add 3x replication to that and it is about 4.5TB.
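>> (Checking the arithmetic with the exact count from ceph df below: 24,550,943 objects x 64KiB is roughly 1.46TiB, and x3 replication gives roughly 4.4TiB, which is in the same ballpark as the 4.65TiB RAW USED.)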
>> 
>> On Sat, Oct 20, 2018 at 11:47 PM Waterbly, Dan  wrote:
>> >
>> > Hi Jakub,
>> >
>> > No, my setup seems to be the same as yours. Our system is mainly for archiving loads of data. The data has to be stored forever and remain readable, although reads will be seldom, considering the number of objects we will store versus the number that will ever be requested.
>> >
>> > It just seems really odd that the metadata overhead for the 25M objects is so high.
>> >
>> > We have 144 OSDs on 9 storage nodes. Perhaps it makes perfect sense, but I’d like to know why we are seeing what we are and how it all adds up.
>> >
>> > Thanks!
>> > Dan
>> >
>> >
>> >
>> >
>> > On Sat, Oct 20, 2018 at 12:36 PM -0700, "Jakub Jaszewski"  wrote:
>> >
>> >> Hi Dan,
>> >>
>> >> Did you configure block.wal/block.db as separate devices/partitions (osd_scenario: non-collocated or lvm for clusters installed using ceph-ansible playbooks)?
>> >>
>> >> I run Ceph version 13.2.1 with non-collocated block.db and have the same situation - the sum of the block.db partitions' sizes is displayed as RAW USED in ceph df.
>> >> Perhaps it is not the case for collocated block.db/wal.
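>> >> (Purely as an illustration, with hypothetical numbers rather than figures from Dan's cluster: if each of the 144 OSDs carried a block.db partition of roughly 32GiB, the sum would be 144 x 32GiB = 4.5TiB, on the order of the 4.65TiB RAW USED reported below.)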
>> >>
>> >> Jakub
>> >>
>> >> On Sat, Oct 20, 2018 at 8:34 PM Waterbly, Dan  wrote:
>> >>>
>> >>> I get that, but isn’t 4TiB to track 2.45M objects excessive? These numbers seem very high to me.
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> On Sat, Oct 20, 2018 at 10:27 AM -0700, "Serkan Çoban"  wrote:
>> >>>
>> >>>> 4.65TiB includes the size of the wal and db partitions too.
>> >>>> On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan  wrote:
>> >>>> >
>> >>>> > Hello,
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > I have inserted 2.45M 1,000-byte objects into my cluster (radosgw, 3x replication).
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > I am confused by the usage ceph df is reporting and am hoping someone can shed some light on this. Here is what I see when I run ceph df:
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > GLOBAL:
>> >>>> >     SIZE        AVAIL       RAW USED     %RAW USED
>> >>>> >     1.02PiB     1.02PiB     4.65TiB      0.44
>> >>>> >
>> >>>> > POOLS:
>> >>>> >     NAME                    ID     USED        %USED     MAX AVAIL     OBJECTS
>> >>>> >     .rgw.root               1      3.30KiB     0         330TiB        17
>> >>>> >     .rgw.buckets.data       2      22.9GiB     0         330TiB        24550943
>> >>>> >     default.rgw.control     3      0B          0         330TiB        8
>> >>>> >     default.rgw.meta        4      373B        0         330TiB        3
>> >>>> >     default.rgw.log         5      0B          0         330TiB        0
>> >>>> >     .rgw.control            6      0B          0         330TiB        8
>> >>>> >     .rgw.meta               7      2.18KiB     0         330TiB        12
>> >>>> >     .rgw.log                8      0B          0         330TiB        194
>> >>>> >     .rgw.buckets.index     9      0B          0         330TiB        2560
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > Why does my bucket pool report usage of 22.9GiB while my cluster as a whole reports 4.65TiB? There is nothing else on this cluster as it was just installed and configured.
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > Thank you for your help with this.
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > -Dan
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > Dan Waterbly | Senior Application Developer | 509.235.7500 x225 | dan.waterbly at sos.wa.gov
>> >>>> >
>> >>>> > WASHINGTON STATE ARCHIVES | DIGITAL ARCHIVES
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>
> 


