[ceph-users] OMAP size on disk

Brent Kennedy bkennedy at cfl.rr.com
Thu Mar 21 07:18:28 PDT 2019

They released Luminous 12.2.11 and now my large-object count is starting to
go down ( after running the rm command suggested in the release notes ).
Seems dynamic sharding will clean up after itself now too!  So case closed!
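For anyone else landing on this thread: the release-notes command referenced
above is, as best I can tell, the stale-instances cleanup added in 12.2.11.
A sketch (verify the exact syntax against your own release notes):

```shell
# List bucket index instances left behind by dynamic resharding
radosgw-admin reshard stale-instances list

# Remove the stale instances (the cleanup step added in 12.2.11)
radosgw-admin reshard stale-instances rm
```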


-----Original Message-----
From: ceph-users <ceph-users-bounces at lists.ceph.com> On Behalf Of Brent
Sent: Thursday, October 11, 2018 2:47 PM
To: 'Matt Benjamin' <mbenjami at redhat.com>
Cc: 'Ceph Users' <ceph-users at lists.ceph.com>
Subject: Re: [ceph-users] OMAP size on disk

Does anyone have a good blog entry or explanation of bucket sharding
requirements/commands?  Perhaps a howto as well?

I upgraded our cluster to Luminous and now I have a warning about 5 large
objects.  The official blog says that sharding is turned on by default, but
we upgraded, so I can't quite tell if our existing buckets had sharding
turned on during the upgrade or if that is something I need to do after (the
blog doesn't state that).  Also, when I looked into the sharding commands,
they wanted a shard count, and if resharding is automated, why would I need
to provide that?  Not to mention I don't know what value to start with...
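For reference, here is a sketch of the commands I was looking at (from the
radosgw-admin man page; the bucket name and shard count below are just
example values, not recommendations):

```shell
# Show which buckets are over the per-shard object threshold
radosgw-admin bucket limit check

# Check the current shard count and object count for a bucket
radosgw-admin bucket stats --bucket=mybucket

# Manual reshard: the admin picks num-shards, commonly sized
# around (object count / 100000), rounded up
radosgw-admin bucket reshard --bucket=mybucket --num-shards=23

# List in-progress / queued resharding activity
radosgw-admin reshard list
```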

I found this:  https://tracker.ceph.com/issues/24457 which talks about the
issue, and comment #14 says he worked through it, but the information seems
to be outside of my google-fu.


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf Of
Matt Benjamin
Sent: Tuesday, October 9, 2018 7:28 AM
To: Luis Periquito <periquito at gmail.com>
Cc: Ceph Users <ceph-users at lists.ceph.com>
Subject: Re: [ceph-users] OMAP size on disk

Hi Luis,

There are currently open issues with space reclamation after dynamic bucket
index resharding, esp. http://tracker.ceph.com/issues/34307

Changes are being worked on to address this, and to permit administratively
reclaiming space.


On Tue, Oct 9, 2018 at 5:50 AM, Luis Periquito <periquito at gmail.com> wrote:
> Hi all,
> I have several clusters, all running Luminous (12.2.7) providing an S3
> interface. All of them have dynamic resharding enabled and working.
> One of the newer clusters is starting to give warnings on the used 
> space for the OMAP directory. The default.rgw.buckets.index pool is 
> replicated with 3x copies of the data.
> I created a new crush ruleset to only use a few well known SSDs, and 
> the OMAP directory size changed as expected: if I set the OSD as out 
> and then tell it to compact, the size of the OMAP will shrink. If I set 
> the OSD as in, the OMAP will grow to its previous state. And while the 
> backfill is going we get loads of key recoveries.
> Total physical space for OMAP in the OSDs that have them is ~1TB, so 
> given a 3x replica ~330G before replication.
> The data size for the default.rgw.buckets.data is just under 300G.
> There is one bucket who has ~1.7M objects and 22 shards.
> After deleting that bucket the size of the database didn't change - 
> even after running gc process and telling the OSD to compact its 
> database.
> This is not happening in older clusters, i.e. clusters created with Hammer.
> Could this be a bug?
> I looked at getting all the OMAP keys and sizes
> (https://ceph.com/geen-categorie/get-omap-keyvalue-size/) and they add 
> up to close to the value I expected them to take, looking at the 
> physical storage.
> Any ideas where to look next?
> thanks for all the help.
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
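For others following along, the compact/inspect steps Luis describes can be
done roughly like so (the OSD id and the index object name are placeholders,
not values from his cluster):

```shell
# Ask an OSD to compact its key/value store (RocksDB/LevelDB)
ceph tell osd.12 compact

# List the omap keys held by one bucket index shard object
rados -p default.rgw.buckets.index listomapkeys .dir.<bucket-instance-id>
```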


Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103


tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
ceph-users mailing list
ceph-users at lists.ceph.com
