[ceph-users] Some questions concerning filestore --> bluestore migration

solarflow99 solarflow99 at gmail.com
Fri Oct 5 11:38:59 PDT 2018


oh my.. yes, 2TB enterprise class SSDs, that's a much higher requirement
than filestore needed.  That would be cost prohibitive for any lower end
ceph cluster.
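
For anyone checking the math, that figure falls straight out of the 4%
guideline quoted further down, applied to the proposed 10x10TB nodes:

    400 GB block.db per OSD  =  10 TB data disk x 4%
    2 TB per SSD             =  5 OSDs sharing one SSD x 400 GB

(The "10GB of SSD disk per 10TB of SATA disk" rule of thumb from the
original mail below would only call for ~50 GB per SSD, which is why
the small journal SSDs were never a problem under filestore.)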



On Thu, Oct 4, 2018 at 11:19 PM Massimo Sgaravatto <
massimo.sgaravatto at gmail.com> wrote:

> Argg !!
> With 10x10TB SATA disks and 2 SSD disks this would mean 2 TB for each SSD !
> If this is really required I am afraid I will keep using filestore ...
>
> Cheers, Massimo
>
> On Fri, Oct 5, 2018 at 7:26 AM <ceph at elchaka.de> wrote:
>
>> Hello
>>
>> On 4 October 2018 02:38:35 MESZ, solarflow99 <
>> solarflow99 at gmail.com> wrote:
>> > I use the same configuration you have, and I plan on using
>> > bluestore. My SSDs are only 240GB and it worked with filestore all
>> > this time; I suspect bluestore should be fine too.
>> >
>> >
>> >On Wed, Oct 3, 2018 at 4:25 AM Massimo Sgaravatto <
>> >massimo.sgaravatto at gmail.com> wrote:
>> >
>> >> Hi
>> >>
>> >> I have a ceph cluster, running luminous, composed of 5 OSD nodes,
>> >> which is using filestore.
>> >> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB
>> >> SATA disks + 2x200GB SSD disks (then I have 2 other disks in RAID
>> >> for the OS), and 10 Gbps networking.
>> >> So each SSD disk is used as the journal for 5 OSDs. With this
>> >> configuration everything is running smoothly ...
>> >>
>> >>
>> >> We are now buying some new storage nodes, and I am trying to buy
>> >> something that is bluestore compliant. So the idea is to consider a
>> >> configuration something like:
>> >>
>> >> - 10 SATA disks (8TB / 10TB / 12TB each, TBD)
>> >> - 2 processors (~10 cores each)
>> >> - 64 GB of RAM
>> >> - 2 SSDs to be used for WAL+DB
>> >> - 10 Gbps networking
>> >>
>> >> As for the size of the SSD disks, I read on this mailing list that
>> >> it is suggested to have at least 10GB of SSD disk per 10TB of SATA
>> >> disk.
>> >>
>> >>
>> >> So, the questions:
>> >>
>> >> 1) Does this hardware configuration seem reasonable ?
>> >>
>> >> 2) Are there problems with running (forever, or until filestore is
>> >> deprecated) some OSDs using filestore (the old ones) alongside some
>> >> OSDs using bluestore (the new ones) ?
>> >>
>> >> 3) Would you suggest updating the old OSDs to bluestore as well,
>> >> even if the available SSDs are too small (they don't satisfy the
>> >> "10GB of SSD disk per 10TB of SATA disk" rule) ?
>>
>> AFAIR the DB size should be 4% of the OSD in question.
>>
>> So, for example, if the block size is 1TB, then block.db shouldn't be
>> less than 40GB.
>>
>> See:
>> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
>>
>> Hth
>> - Mehmet
>>
>> >>
>> >> Thanks, Massimo
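
Putting Mehmet's sizing advice into practice: with SSDs in that range,
the DB is placed explicitly when the OSD is created. A rough sketch
with ceph-volume (the device names are placeholders, assuming one
~400GB SSD partition per 10TB OSD):

    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/sdk1

If --block.db is omitted, bluestore simply keeps the DB on the data
disk, which works but gives up the SSD acceleration for metadata.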
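
Regarding question 3 further up the thread: the luminous docs also
describe a per-OSD replacement procedure for converting existing
filestore OSDs in place
(http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/).
Roughly, for each OSD in turn (the ID and device are placeholders):

    ID=<osd id to convert>
    DEVICE=<its data disk>
    ceph osd out $ID
    while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done
    systemctl stop ceph-osd@$ID
    umount /var/lib/ceph/osd/ceph-$ID
    ceph-volume lvm zap $DEVICE
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID

Whether the existing 200GB SSDs are then worth reusing for block.db,
given that they fall short of the 4% guideline, is exactly the open
question here.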