[ceph-users] OMAP size on disk
mbenjami at redhat.com
Tue Oct 9 04:28:06 PDT 2018
There are currently open issues with space reclamation after dynamic
bucket index resharding, esp. http://tracker.ceph.com/issues/34307
Changes are being worked on to address this, and to permit
administratively reclaiming space.
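Until that lands, one rough way to check whether leftover index shards from
old reshards are what is holding the space is to compare the .dir.* objects
in the index pool against the bucket instances RGW still has metadata for.
A sketch along those lines, assuming default zone/pool naming and the usual
.dir.<instance_id>[.<shard>] object names (inspect only; don't delete
anything based on this alone):

#!/usr/bin/env python
# Rough sketch: flag bucket index shard objects whose bucket instance no
# longer exists in RGW metadata. The pool name and the
# .dir.<instance>[.<shard>] naming are assumptions based on default RGW
# layouts; verify before acting on the output.
import json
import subprocess

INDEX_POOL = 'default.rgw.buckets.index'

# Bucket instances RGW still knows about, listed as "<bucket>:<instance_id>".
meta = subprocess.check_output(
    ['radosgw-admin', 'metadata', 'list', 'bucket.instance']).decode()
live = set(entry.split(':', 1)[-1] for entry in json.loads(meta))

# All objects in the index pool; index shards are named .dir.<instance>[.<n>].
objs = subprocess.check_output(
    ['rados', '-p', INDEX_POOL, 'ls']).decode().splitlines()

for obj in objs:
    if not obj.startswith('.dir.'):
        continue
    instance = obj[len('.dir.'):]
    if instance in live:
        continue
    base, _, tail = instance.rpartition('.')
    if tail.isdigit() and base in live:   # sharded: trailing ".<shard>"
        continue
    print('possibly stale index object: %s' % obj)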
On Tue, Oct 9, 2018 at 5:50 AM, Luis Periquito <periquito at gmail.com> wrote:
> Hi all,
> I have several clusters, all running Luminous (12.2.7) providing an S3
> interface. All of them have dynamic resharding enabled and it is working.
> One of the newer clusters is starting to give warnings on the used
> space for the OMAP directory. The default.rgw.buckets.index pool is
> replicated with 3x copies of the data.
> I created a new crush ruleset to only use a few well-known SSDs, and
> the OMAP directory size changed as expected: if I set an OSD as out
> and then tell it to compact, the size of the OMAP will shrink. If I set
> the OSD back in, the OMAP will grow back to its previous size. And while
> the backfill is running we see loads of key recoveries.
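For what it's worth, a minimal sketch of that compact-and-measure step,
assuming "ceph tell osd.<id> compact" is available on your release (on some
versions the same "compact" command is only exposed on the OSD admin
socket), with placeholder OSD ids:

#!/usr/bin/env python
# Minimal sketch: trigger a compaction on the OSDs carrying the index pool
# and report how much space "ceph osd df" says they use before and after.
import json
import subprocess

INDEX_OSDS = [3, 7, 11]  # placeholder: the SSD OSDs carrying the index pool

def osd_kb_used(osd_id):
    """Return the kb_used reported by 'ceph osd df' for one OSD."""
    out = subprocess.check_output(['ceph', 'osd', 'df', '--format=json']).decode()
    for node in json.loads(out)['nodes']:
        if node['id'] == osd_id:
            return node['kb_used']
    return None

for osd in INDEX_OSDS:
    before = osd_kb_used(osd)
    subprocess.check_call(['ceph', 'tell', 'osd.%d' % osd, 'compact'])
    after = osd_kb_used(osd)
    print('osd.%d: %s kB -> %s kB after compaction' % (osd, before, after))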
> Total physical space used by OMAP on the OSDs that hold it is ~1TB, so
> given the 3x replication that's ~330G of unique data.
> The data size for the default.rgw.buckets.data is just under 300G.
> There is one bucket which has ~1.7M objects and 22 shards.
> After deleting that bucket the size of the database didn't change -
> even after running the gc process and telling the OSD to compact its
> database.
> This is not happening in older clusters, i.e. clusters originally created with Hammer.
> Could this be a bug?
> I looked at getting all the OMAP keys and sizes
> (https://ceph.com/geen-categorie/get-omap-keyvalue-size/) and they add
> up to close to the value I expected them to take, looking at the
> physical space used.
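If it helps, the same tally can be scripted directly against the pool with
python-rados instead of the shell loop from that post; a rough sketch,
assuming the python-rados bindings and the pool name above:

#!/usr/bin/env python
# Rough sketch: sum omap key+value bytes per object in the bucket index pool,
# to compare against the physical size of the OSD omap directories.
# Assumes python-rados and a readable /etc/ceph/ceph.conf.
import rados

INDEX_POOL = 'default.rgw.buckets.index'
MAX_KEYS = 500000   # fetched in one go; shards with more keys need paging

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(INDEX_POOL)

total = 0
for obj in ioctx.list_objects():
    obj_bytes = 0
    with rados.ReadOpCtx() as read_op:
        # Queue an omap read of the key/value pairs, then execute it.
        kv_iter, _ = ioctx.get_omap_vals(read_op, "", "", MAX_KEYS)
        ioctx.operate_read_op(read_op, obj.key)
        for key, val in kv_iter:
            obj_bytes += len(key) + len(val)
    total += obj_bytes
    print('%s  %d bytes' % (obj.key, obj_bytes))

print('total omap payload: %d bytes (before 3x replication)' % total)
ioctx.close()
cluster.shutdown()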
> Any ideas where to look next?
> thanks for all the help.
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103