[ceph-users] New OSD with weight 0, rebalance still happen...

Matthew H matthew.heler at hotmail.com
Fri Nov 23 08:04:36 PST 2018


You need to set the following configuration option under [osd] in your ceph.conf file for your new OSDs.

osd_crush_initial_weight = 0

This will ensure your new OSDs come up with a CRUSH weight of 0, preventing the automatic rebalance you are seeing.
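For reference, a minimal ceph.conf sketch (only the option above matters; the rest of your file stays as it is):

        [osd]
        # newly created OSDs get a CRUSH weight of 0 instead of one
        # derived from their size, so no PGs map to them until you
        # reweight them yourself
        osd_crush_initial_weight = 0

If the OSDs were already created with a non-zero CRUSH weight, you can zero them manually, e.g. "ceph osd crush reweight osd.12 0" (osd.12 being a placeholder ID).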

Good luck,

From: ceph-users <ceph-users-bounces at lists.ceph.com> on behalf of Marco Gaiarin <gaio at sv.lnf.it>
Sent: Thursday, November 22, 2018 3:22 AM
To: ceph-users at ceph.com
Subject: [ceph-users] New OSD with weight 0, rebalance still happen...

Ceph still surprises me: whenever I'm sure I've fully understood it,
something 'strange' (to my knowledge) happens.

I need to move a server out of my Ceph Hammer cluster (3 nodes, 4 OSDs
per node), and for various reasons I cannot simply move the disks.
So I added a new node, and yesterday I set up the 4 new OSDs.
My plan was to add the 4 OSDs with weight 0, and then slowly lower the
weight of the old OSDs while increasing the weight of the new ones.
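A rough sketch of that gradual reweighting (OSD IDs and step sizes are
placeholders, not values from this cluster):

        # lower one of the old OSDs a little
        ceph osd crush reweight osd.3 1.5
        # raise one of the new OSDs by a similar amount
        ceph osd crush reweight osd.12 0.5
        # repeat in small steps, waiting for recovery to finish
        # (check with "ceph -s") between steps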

Beforehand I ran:

        ceph osd set noin

and then added the OSDs, and (as expected) the new OSDs started with weight 0.

But despite the weight being zero, a rebalance happened, and the
amount of data being rebalanced seems 'weighted' by the size of the
new disk (e.g. I had roughly 18TB of space, I added a 2TB disk, and
roughly 10% of the data started to rebalance).
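For what it's worth, 'weight' here could mean either the CRUSH weight
or the in/out reweight; one way to see both per OSD (command only,
output omitted):

        # the WEIGHT column shows the CRUSH weight; the REWEIGHT column
        # shows the in/out reweight that the noin flag acts on
        ceph osd tree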

Why? Thanks.

dott. Marco Gaiarin                                     GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

                Donate your 5 PER MILLE to LA NOSTRA FAMIGLIA!
        (tax code 00307430132, category ONLUS or RICERCA SANITARIA)