[ceph-users] Bluestore OSD_DATA, WAL & DB

Nigel Williams nigel.williams at tpac.org.au
Thu Nov 2 16:09:55 PDT 2017

On 3 November 2017 at 07:45, Martin Overgaard Hansen <moh at multihouse.dk> wrote:
> I want to bring this subject back in the light and hope someone can provide
> insight regarding the issue, thanks.

Thanks Martin, I was going to do the same.

Is it possible to make the DB partition (on the fastest device) too
big? In other words, is there a point where, for a given set of OSDs
(number + size), the DB partition is oversized and simply wasting
resources? I recall a comment by someone proposing to split up a
single large (fast) SSD into 100GB partitions, one per OSD.
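As an illustration of that split-the-SSD idea, the arithmetic is just the device size divided by the number of OSDs sharing it (the figures below are examples, not recommendations):

```python
# Illustrative arithmetic for carving one fast SSD into per-OSD DB
# partitions; device size and OSD count are assumed example values.
ssd_bytes = 800 * 10**9        # e.g. an 800 GB NVMe device
num_osds = 8                   # HDD-backed OSDs sharing it
db_bytes_per_osd = ssd_bytes // num_osds
print(db_bytes_per_osd / 10**9)  # 100.0 GB per OSD DB partition
```

Whether 100GB per OSD is too much (or too little) is exactly the open question above.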

The answer is presumably some intersection of pool type (RBD /
RADOS / CephFS), object change (update?) intensity, size of the OSD,
and so on.

An idea occurred to me: by monitoring for the logged spill message
(the event when the DB partition spills over onto the slow OSD
device), OSDs could be (lazily) destroyed and recreated with a new DB
partition enlarged by, say, 10% each time.
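A rough sketch of the detection step, assuming the BlueFS counters exposed by `ceph daemon osd.N perf dump` (counter names `db_total_bytes` / `slow_used_bytes` as reported by BlueStore in Luminous; verify against your release):

```python
import json

def db_spilled(perf_dump_json):
    """Return (spilled, suggested_db_bytes): whether BlueFS has placed
    DB data on the slow device, and the current DB size grown by 10%.
    The JSON shape is assumed from `perf dump` output -- check locally."""
    bluefs = json.loads(perf_dump_json)["bluefs"]
    spilled = bluefs["slow_used_bytes"] > 0
    suggested = int(bluefs["db_total_bytes"] * 1.10)
    return spilled, suggested

# Example input mimicking one OSD's perf dump (values invented):
sample = json.dumps({"bluefs": {
    "db_total_bytes": 10 * 2**30,    # 10 GiB DB partition
    "db_used_bytes": 9 * 2**30,
    "slow_used_bytes": 512 * 2**20,  # DB data already spilled to the OSD
}})
print(db_spilled(sample))  # (True, 11811160064)
```

The destroy-and-recreate step itself would still be the usual manual ceph-volume/ceph-disk workflow; this only flags which OSDs need it and by how much to grow the DB.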
