[ceph-users] tunable question

lists lists at merit.unu.edu
Tue Oct 3 05:38:12 PDT 2017


Hi,

What would make the decision easier: if we knew that we could easily revert
 > "ceph osd crush tunables optimal"
once it has begun rebalancing data?

Meaning: if we notice that the impact is too high, or that it will take too
long, could we simply say
 > "ceph osd crush tunables hammer"
again, and the cluster would calm down?
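For reference, the operations in question are plain ceph CLI commands, and
the profile can be inspected before and after the change. A minimal sketch
(standard ceph tooling; note that reverting triggers its own rebalancing of
whatever data has already moved):

```shell
# Show the currently active CRUSH tunables profile
ceph osd crush show-tunables

# Switch to the optimal profile -- this starts data movement
ceph osd crush tunables optimal

# If the impact turns out to be too high, the profile can be set back;
# data that already migrated will then move back again
ceph osd crush tunables hammer
```

These commands require a running cluster and the appropriate admin keyring;
the revert is not free, since it undoes the placement changes already made.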

MJ

On 2-10-2017 9:41, Manuel Lausch wrote:
> Hi,
> 
> We have similar issues.
> After upgradeing from hammer to jewel the tunable "choose leave stabel"
> was introduces. If we activate it nearly all data will be moved. The
> cluster has 2400 OSD on 40 nodes over two datacenters and is filled with
> 2,5 PB Data.
> 
> We tried to enable it, but the backfill traffic is too high to be
> handled without impacting other services on the network.
> 
> Does someone know if it is necessary to enable this tunable? And could
> it be a problem in the future if we want to upgrade to newer versions
> without it enabled?
> 
> Regards,
> Manuel Lausch
> 
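One common way to limit the backfill impact described above is to throttle
recovery while the tunable change settles. A minimal sketch using standard
OSD options (the values shown are illustrative, not tuned recommendations
for this cluster):

```shell
# Throttle backfill/recovery concurrency on all OSDs at runtime
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# Or pause data movement entirely while deciding how to proceed
ceph osd set nobackfill
ceph osd set norecover

# ...and resume it later
ceph osd unset nobackfill
ceph osd unset norecover
```

Throttling stretches the rebalance out over a longer period in exchange for
less pressure on the network and on client I/O.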


More information about the ceph-users mailing list