[ceph-users] Some questions concerning filestore --> bluestore migration

solarflow99 solarflow99 at gmail.com
Wed Oct 3 17:38:35 PDT 2018


I use the same configuration you have, and I plan on using bluestore. My
SSDs are only 240GB and they have worked with filestore all this time, so I
suspect bluestore should be fine too.
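
In case it helps, on luminous a bluestore OSD with its DB on an SSD can be
created with ceph-volume, along these lines (the device names below are just
placeholders for one SATA data disk and one SSD partition):

  # data on the SATA disk, RocksDB (block.db) on an SSD partition
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sda1

If --block.db is omitted the DB simply stays on the data device, and a
separate --block.wal only makes sense if you have a device faster than the
one holding the DB.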


On Wed, Oct 3, 2018 at 4:25 AM Massimo Sgaravatto <
massimo.sgaravatto at gmail.com> wrote:

> Hi
>
> I have a ceph cluster, running luminous, composed of 5 OSD nodes, which is
> using filestore.
> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA disks
> + 2x200GB SSD disks (plus 2 other disks in RAID for the OS), and 10 Gbps
> networking. So each SSD disk is used as the journal for 5 OSDs. With this
> configuration everything is running smoothly ...
>
>
> We are now buying some new storage nodes, and I am trying to buy hardware
> that is suitable for bluestore. So the idea is to consider a configuration
> like:
>
> - 10 SATA disks (8TB / 10TB / 12TB each, TBD)
> - 2 processors (~10 cores each)
> - 64 GB of RAM
> - 2 SSDs to be used for WAL+DB
> - 10 Gbps networking
>
> Regarding the size of the SSD disks, I read on this mailing list that it is
> suggested to have at least 10GB of SSD disk per 10TB of SATA disk.
>
>
> So, the questions:
>
> 1) Does this hardware configuration seem reasonable?
>
> 2) Are there any problems with running (forever, or until filestore is
> deprecated) some OSDs on filestore (the old ones) and some OSDs on
> bluestore (the new ones)?
>
> 3) Would you suggest updating the old OSDs to bluestore as well, even if
> the available SSDs are too small (they don't satisfy the "10GB of SSD disk
> per 10TB of SATA disk" rule)?
>
> Thanks, Massimo
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
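
(For question 2: filestore and bluestore OSDs can coexist in the same
cluster, and you can always check which objectstore a given OSD is running
with something like the following, using OSD 0 just as an example:

  # prints the osd_objectstore field ("filestore" or "bluestore") for OSD 0
  ceph osd metadata 0 | grep osd_objectstore

which makes it easy to keep track of which OSDs still have to be converted.)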