[ceph-users] hardware heterogeneous in same pool

Janne Johansson icepic.dz at gmail.com
Thu Oct 4 00:50:02 PDT 2018

Den tors 4 okt. 2018 kl 00:09 skrev Bruno Carvalho <brunowcs at gmail.com>:

> Hi Cephers, I would like to know how you are growing your clusters:
> using dissimilar hardware in the same pool, or creating a pool for each
> different hardware group.
> What problems would I have using different hardware (CPU,
> memory, disk) in the same pool?

I don't think CPU and RAM (and other hardware-related things like HBA
controller card brand) matter a lot; more is always nicer, but as long as
you don't add worse machines, like Jonathan wrote, you should not see any
degradation.

What you might want to look out for is whether the new disks are very
uneven compared to the old setup: if you used to have servers with 10x2TB
drives and suddenly add one with 2x10TB, things might become very
unbalanced, since those differences are not handled seamlessly by the
CRUSH map.
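As a rough back-of-the-envelope illustration (CRUSH weight defaults to the
disk size, so data placement follows capacity; the sizes below are just the
example numbers from above):

```shell
# Why 2x10TB next to hosts of 10x2TB drives ends up lopsided:
# each 10TB OSD is asked to hold proportionally more data, so reads
# and writes concentrate on only two spindles on the new host.
SMALL=2   # TB per OSD on the old hosts
BIG=10    # TB per OSD on the new host
RATIO=$(awk "BEGIN { print $BIG / $SMALL }")
echo "each ${BIG}TB OSD takes ${RATIO}x the data of a ${SMALL}TB OSD"
```

Same total capacity per host, but very different per-disk load.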

Apart from that, the only issue for us is "add drives, quickly set crush
reweight to 0.0 before all existing OSD hosts shoot massive amounts of I/O
at them, then script a slower raise of the crush weight up to what it
should end up at", to lessen the impact on our 24/7 operations.
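A minimal sketch of such a ramp-up script, assuming a new OSD added with
crush weight 0.0; the OSD id, target weight, step size and sleep interval
are made-up examples, and DRY_RUN=1 just prints the commands so you can
inspect them first:

```shell
DRY_RUN=${DRY_RUN:-1}
OSD=42         # hypothetical id of the newly added OSD
TARGET=3.6     # hypothetical final crush weight (usually disk size in TiB)
STEP=0.4       # raise per iteration
PAUSE=600      # seconds between raises, so recovery I/O settles

# In dry-run mode, echo the command instead of executing it.
run() {
    if [ "$DRY_RUN" -eq 1 ]; then echo "$@"; else "$@"; fi
}

CUR=0
while awk "BEGIN { exit !($CUR < $TARGET) }"; do
    CUR=$(awk "BEGIN { printf \"%.1f\", $CUR + $STEP }")
    run ceph osd crush reweight "osd.$OSD" "$CUR"
    run sleep "$PAUSE"   # let backfill calm down before the next raise
done
```

Run with DRY_RUN=0 once the printed sequence of `ceph osd crush reweight`
calls looks right for your cluster.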

If you have weekends where no one accesses the cluster, or night-time
low-I/O usage patterns, just upping the weight at the right hour might
suffice.

Lastly, for SSD/NVMe setups with good networking, this is almost moot; they
converge so fast it's almost unfair. Expanding flash-only clusters is a
real joy.

May the most significant bit of your life be positive.
