[ceph-users] Some questions concerning filestore --> bluestore migration
ceph at elchaka.de
Thu Oct 4 22:25:51 PDT 2018
On 4 October 2018 at 02:38:35 MESZ, solarflow99 <solarflow99 at gmail.com> wrote:
>I use the same configuration you have, and I plan on using bluestore.
>SSDs are only 240GB and it worked with filestore all this time, so I think
>bluestore should be fine too.
>On Wed, Oct 3, 2018 at 4:25 AM Massimo Sgaravatto <
>massimo.sgaravatto at gmail.com> wrote:
>> I have a ceph cluster, running luminous, composed of 5 OSD nodes,
>> using filestore.
>> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA
>> disks + 2x200GB SSD disks (then I have 2 other disks in RAID for the OS),
>> and 10 Gbps networking. So each SSD disk is used as the journal for 5 OSDs.
>> With this configuration everything is running smoothly ...
>> We are now buying some new storage nodes, and I am trying to buy hardware
>> which is bluestore compliant. So the idea is to consider a configuration
>> something like:
>> - 10 SATA disks (8TB / 10TB / 12TB each. TBD)
>> - 2 processors (~10 cores each)
>> - 64 GB of RAM
>> - 2 SSD to be used for WAL+DB
>> - 10 Gbps
>> As far as the size of the SSD disks is concerned, I read on this mailing list
>> that it is suggested to have at least 10GB of SSD disk per 10TB of SATA disk.
>> So, the questions:
>> 1) Does this hardware configuration seem reasonable ?
>> 2) Are there problems with living (forever, or until filestore is no longer
>> supported) with some OSDs using filestore (the old ones) and some OSDs using
>> bluestore (the new ones)?
>> 3) Would you suggest updating the old OSDs to bluestore as well, even though the
>> available SSDs are too small (they don't satisfy the "10GB of SSD per 10TB
>> of SATA disk" rule)?
AFAIR the DB size should be 4% of the OSD in question.
For example, if the block device is 1 TB, then block.db shouldn't be smaller than 40 GB.
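
A minimal sketch of that arithmetic, applied to the disk sizes and the 10-HDD / 2-SSD
per-node layout proposed above (the ~4% figure follows the BlueStore sizing guidance
quoted here; the layout numbers are assumptions taken from the thread):

# Rough block.db sizing per the ~4% rule, for the proposed node layout:
# 10 SATA OSDs per node, 2 SSDs shared for WAL+DB (5 DB partitions per SSD).
# Disk sizes and layout are assumptions based on the configuration above.

HDD_SIZES_TB = [8, 10, 12]   # candidate SATA disk sizes from the email
DB_FRACTION = 0.04           # ~4% of the data device
OSDS_PER_NODE = 10
SSDS_PER_NODE = 2

for hdd_tb in HDD_SIZES_TB:
    db_per_osd_gb = hdd_tb * 1000 * DB_FRACTION
    db_per_ssd_gb = db_per_osd_gb * (OSDS_PER_NODE / SSDS_PER_NODE)
    print(f"{hdd_tb} TB OSD -> block.db >= {db_per_osd_gb:.0f} GB per OSD, "
          f"~{db_per_ssd_gb:.0f} GB of DB space per SSD (5 OSDs per SSD)")

# Approximate output:
# 8 TB OSD  -> block.db >= 320 GB per OSD, ~1600 GB of DB space per SSD
# 10 TB OSD -> block.db >= 400 GB per OSD, ~2000 GB of DB space per SSD
# 12 TB OSD -> block.db >= 480 GB per OSD, ~2400 GB of DB space per SSD

By that rule the existing 2x200GB SSDs are far too small for 6TB OSDs, which is
presumably what question 3 is getting at.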
>> Thanks, Massimo