[ceph-users] Bluestore WAL/DB decisions
M.Roos at f1-outsourcing.eu
Fri Mar 29 01:42:58 PDT 2019
For now I have everything on the HDDs, plus some pools on SSDs only for
workloads that need more speed. That looked like the simplest way to
start, and I do not seem to need the IOPS yet to change this setup.
However, I am curious what kind of performance increase you get from
moving the DB/WAL to SSD with spinners. So if you are able to, please
publish some test results from the same environment before and after
your change.
From: Erik McCormick [mailto:emccormick at cirrusseven.com]
Sent: 29 March 2019 06:22
Subject: [ceph-users] Bluestore WAL/DB decisions
Having dug through the documentation and read mailing list threads
until my eyes rolled back in my head, I am still left with a conundrum:
do I separate the DB / WAL or not?
I had a bunch of nodes running filestore with 8 x 8TB spinning OSDs and
2 x 240 GB SSDs. I had put the OS on the first SSD, and then split the
journals on the remaining SSD space.
My initial, minimal understanding of Bluestore was that one should put
the DB and WAL on an SSD, and if it filled up it would just spill over
onto the OSD itself, where it would otherwise have lived anyway.
So now I start digging and see that the minimum recommended size is 4%
of OSD size. For me that's ~2.6 TB of SSD. Clearly I do not have that
available to me.
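As a back-of-the-envelope check of that 4% guideline (a sketch; the
8 TB-per-OSD and 8-OSDs-per-node figures are taken from the node
description above):

```shell
# Hypothetical sizing check: 4% of each OSD, per the documented guideline.
osd_size_gb=8000     # one 8 TB spinner, in GB
num_osds=8           # spinning OSDs per node

db_per_osd=$(( osd_size_gb * 4 / 100 ))   # DB space per OSD at 4%
db_total=$(( db_per_osd * num_osds ))     # total SSD needed per node

echo "${db_per_osd} GB per OSD, ${db_total} GB per node"
```

That works out to 320 GB per OSD and ~2.6 TB per node, far beyond what
two 240 GB SSDs can provide.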
I've also read that it's not so much the data size that matters as the
number of objects and their size. Just looking at my current usage and
extrapolating that to my maximum capacity, I get to ~1.44 million
objects / OSD.
So the question is, do I:
1) Put everything on the OSD and forget the SSDs exist.
2) Put just the WAL on the SSDs
3) Put the DB (and therefore the WAL) on SSD, ignore the size
recommendations, and just give each OSD as much space as I can. Maybe
48GB / OSD?
4) Some scenario I haven't considered.
Is the penalty for a too-small DB on an SSD partition so severe that
it's not worth doing?
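For reference, option 3 would look roughly like this with ceph-volume
(a sketch only; the device names /dev/sdb and /dev/sda2 are
placeholders, and the ~48 GB DB partition size is the guess from the
list above, not a recommendation):

```shell
# Hypothetical example of option 3: DB (and implicitly the WAL) on SSD.
# /dev/sdb  - one of the 8 TB spinners (OSD data)
# /dev/sda2 - a ~48 GB partition carved from a 240 GB SSD
# Placing block.db on the SSD puts the WAL there too; no separate
# --block.wal device is needed unless you want them split further.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sda2
```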
ceph-users mailing list
ceph-users at lists.ceph.com