[ceph-users] Some questions concerning filestore --> bluestore migration
massimo.sgaravatto at gmail.com
Thu Oct 4 23:18:50 PDT 2018
With 10x10TB SATA disks and 2 SSD disks this would mean 2 TB for each SSD !
If this is really required I am afraid I will keep using filestore ...
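For what it's worth, the arithmetic behind that 2 TB figure, a sketch based on the ~4% block.db rule of thumb quoted below (the numbers come from this thread; real DB sizing is workload-dependent):

```python
# Apply the ~4% block.db rule of thumb to the proposed new nodes
# (numbers from this thread; actual requirements depend on workload).
osd_size_tb = 10      # each SATA disk
num_osds = 10         # SATA disks per node
num_ssds = 2          # SSDs per node for WAL+DB
db_fraction = 0.04    # "DB should be 4% of the OSD" rule of thumb

db_per_osd_gb = osd_size_tb * 1000 * db_fraction          # 400 GB per OSD
per_ssd_tb = db_per_osd_gb * (num_osds // num_ssds) / 1000  # 5 OSDs per SSD
print(db_per_osd_gb, per_ssd_tb)  # 400.0 GB per OSD, 2.0 TB per SSD
```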
On Fri, Oct 5, 2018 at 7:26 AM <ceph at elchaka.de> wrote:
> Am 4. Oktober 2018 02:38:35 MESZ schrieb solarflow99 <
> solarflow99 at gmail.com>:
> >I use the same configuration you have, and I plan on using bluestore.
> >SSDs are only 240GB and it worked with filestore all this time, so I
> >think bluestore should be fine too.
> >On Wed, Oct 3, 2018 at 4:25 AM Massimo Sgaravatto <
> >massimo.sgaravatto at gmail.com> wrote:
> >> Hi
> >> I have a ceph cluster, running luminous, composed of 5 OSD nodes,
> >which is
> >> using filestore.
> >> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA
> >> disks + 2x200GB SSD disks (then I have 2 other disks in RAID for the OS).
> >> So each SSD disk is used for the journal of 5 OSDs. With this
> >> configuration everything is running smoothly ...
> >> We are now buying some new storage nodes, and I am trying to buy
> >> hardware which is bluestore compliant. So the idea is to consider a
> >> configuration something like:
> >> - 10 SATA disks (8TB / 10TB / 12TB each. TBD)
> >> - 2 processors (~10 cores each)
> >> - 64 GB of RAM
> >> - 2 SSD to be used for WAL+DB
> >> - 10 Gbps network
> >> For what concerns the size of the SSD disks, I read in this mailing list
> >> that it is suggested to have at least 10GB of SSD disk per TB of SATA disk.
> >> So, the questions:
> >> 1) Does this hardware configuration seem reasonable ?
> >> 2) Are there problems to live (forever, or until filestore is deprecated)
> >> with some OSDs using filestore (the old ones) and some OSDs using bluestore
> >> (the new ones) ?
> >> 3) Would you suggest to update to bluestore also the old OSDs, even
> >if the
> >> available SSDs are too small (they don't satisfy the "10GB of SSD per TB
> >> of SATA disk" rule) ?
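To make question 3 concrete, here is a quick check of the old nodes against that 10GB-per-TB guideline (a sketch using only the numbers from this thread; the rule itself is just a mailing-list rule of thumb):

```python
# Old nodes: each 200GB SSD serves 5 x 6TB filestore OSDs.
# Check against the "10GB of SSD per TB of SATA disk" rule of thumb.
osds_per_ssd = 5
osd_size_tb = 6
ssd_size_gb = 200
gb_per_tb = 10

needed_gb = osds_per_ssd * osd_size_tb * gb_per_tb  # 300 GB needed per SSD
print(needed_gb, ssd_size_gb >= needed_gb)          # 300 False -> too small
```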
> AFAIR the DB size should be 4% of the OSD in question.
> For example, if the block size is 1TB, then block.db shouldn’t be less
> than 40GB
> - Mehmet
> >> Thanks, Massimo
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users at lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com