[ceph-users] tunable question

Jake Young jak3kaj at gmail.com
Tue Oct 3 06:21:40 PDT 2017


On Tue, Oct 3, 2017 at 8:38 AM lists <lists at merit.unu.edu> wrote:

> Hi,
>
> What would make the decision easier: if we knew that we could easily
> revert the
>  > "ceph osd crush tunables optimal"
> once it has begun rebalancing data?
>
> Meaning: if we notice that impact is too high, or it will take too long,
> that we could simply again say
>  > "ceph osd crush tunables hammer"
> and the cluster would calm down again?


Yes, you can revert the tunables back, but it will then move all the data
back to where it was, so be prepared for that.
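If you want to see exactly what you are switching between, the active
profile and individual tunable values can be inspected before and after
the change, e.g.:

```shell
# Show the currently active CRUSH tunables (profile plus individual values)
ceph osd crush show-tunables

# Switch to the optimal profile (this is what triggers the rebalancing)
ceph osd crush tunables optimal

# Revert to the hammer profile if the impact is too high
# (note: this moves the same data back again)
ceph osd crush tunables hammer
```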

Verify you have the following values in ceph.conf. Note that these are the
defaults in Jewel, so if they aren’t defined, you’re probably good:
osd_max_backfills=1
osd_recovery_threads=1
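To confirm what a running OSD is actually using (rather than what ceph.conf
says), you can query its admin socket on the OSD host; osd.0 here is just an
example id:

```shell
# Query a running OSD's effective config values via its admin socket
ceph daemon osd.0 config get osd_max_backfills
ceph daemon osd.0 config get osd_recovery_threads
```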

You can try to set these at runtime (using ceph tell ... injectargs) if you
notice a large impact on your client performance:
osd_recovery_op_priority=1
osd_recovery_max_active=1
osd_recovery_threads=1
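The injection itself can be done cluster-wide in one command, along these
lines:

```shell
# Lower recovery priority and concurrency on all OSDs at runtime,
# without restarting them (reverts on OSD restart unless put in ceph.conf)
ceph tell osd.* injectargs \
  '--osd-recovery-op-priority 1 --osd-recovery-max-active 1 --osd-recovery-threads 1'
```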

I recall making this tunables change when we went from hammer to jewel last
year. It took over 24 hours to rebalance 122TB on our 110-OSD cluster.

Jake


>
> MJ
>
> On 2-10-2017 9:41, Manuel Lausch wrote:
> > Hi,
> >
> > We have similar issues.
> > After upgrading from hammer to jewel, the tunable "chooseleaf_stable"
> > was introduced. If we activate it, nearly all data will be moved. The
> > cluster has 2400 OSDs on 40 nodes across two datacenters and is filled
> > with 2.5 PB of data.
> >
> > We tried to enable it, but the backfill traffic is too high to be
> > handled without impacting other services on the network.
> >
> > Does someone know if it is necessary to enable this tunable? And could
> > it be a problem in the future if we want to upgrade to newer versions
> > without it enabled?
> >
> > Regards,
> > Manuel Lausch
> >
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

