[ceph-users] CRUSH puzzle: step weighted-take

Gregory Farnum gfarnum at redhat.com
Mon Oct 1 11:09:31 PDT 2018


On Fri, Sep 28, 2018 at 12:03 AM Dan van der Ster <dan at vanderster.com> wrote:
>
> On Thu, Sep 27, 2018 at 9:57 PM Maged Mokhtar <mmokhtar at petasan.org> wrote:
> >
> >
> >
> > On 27/09/18 17:18, Dan van der Ster wrote:
> > > Dear Ceph friends,
> > >
> > > I have a CRUSH data migration puzzle and wondered if someone could
> > > think of a clever solution.
> > >
> > > Consider an osd tree like this:
> > >
> > >    -2       4428.02979     room 0513-R-0050
> > >   -72        911.81897         rack RA01
> > >    -4        917.27899         rack RA05
> > >    -6        917.25500         rack RA09
> > >    -9        786.23901         rack RA13
> > >   -14        895.43903         rack RA17
> > >   -65       1161.16003     room 0513-R-0060
> > >   -71        578.76001         ipservice S513-A-IP38
> > >   -70        287.56000             rack BA09
> > >   -80        291.20001             rack BA10
> > >   -76        582.40002         ipservice S513-A-IP63
> > >   -75        291.20001             rack BA11
> > >   -78        291.20001             rack BA12
> > >
> > > In the beginning, for reasons that are not important, we created two pools:
> > >    * poolA chooses room=0513-R-0050 then replicates 3x across the racks.
> > >    * poolB chooses room=0513-R-0060, replicates 2x across the
> > > ipservices, then puts a 3rd replica in room 0513-R-0050.
> > >
> > > For clarity, here is the crush rule for poolB:
> > >          type replicated
> > >          min_size 1
> > >          max_size 10
> > >          step take 0513-R-0060
> > >          step chooseleaf firstn 2 type ipservice
> > >          step emit
> > >          step take 0513-R-0050
> > >          step chooseleaf firstn -2 type rack
> > >          step emit
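> > >
> > > (For comparison, poolA's rule is just the usual single-room
> > > replication across racks, roughly the following; rule name and id
> > > omitted:
> > >          type replicated
> > >          min_size 1
> > >          max_size 10
> > >          step take 0513-R-0050
> > >          step chooseleaf firstn 0 type rack
> > >          step emit
> > > )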
> > >
> > > Now to the puzzle.
> > > For reasons that are not important, we now want to change the rule for
> > > poolB to put all three replicas in room 0513-R-0060.
> > > And we need to do this in a way which is totally non-disruptive
> > > (latency-wise) to the users of either pool. (These are both *very*
> > > active RBD pools).
> > >
> > > I see two obvious ways to proceed:
> > >    (1) simply change the rule for poolB to put the third replica on any
> > > osd in room 0513-R-0060 (a sketch of such a rule is below). I'm afraid
> > > though that this would involve way too many concurrent backfills,
> > > cluster-wide, even with osd_max_backfills=1.
> > >    (2) change poolB size to 2, then change the crush rule to that from
> > > (1), then reset poolB size to 3. This would put data availability at
> > > risk while the pool is size=2, and also risks that every osd in room
> > > 0513-R-0050 would be too busy deleting for some indeterminate period
> > > (10s of minutes, I expect).
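> > >
> > > (For concreteness, the end-state rule for (1) might look roughly like
> > > the following, keeping the 2x-across-ipservices step and putting the
> > > third copy on any osd in the same room; I'd want to verify with
> > > crushtool --test that the third step can't collide with the first two:
> > >          type replicated
> > >          min_size 1
> > >          max_size 10
> > >          step take 0513-R-0060
> > >          step chooseleaf firstn 2 type ipservice
> > >          step emit
> > >          step take 0513-R-0060
> > >          step choose firstn -2 type osd
> > >          step emit
> > > )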
> > >
> > > So I would probably exclude those two approaches.
> > >
> > > Conceptually what I'd like to be able to do is a gradual migration,
> > > which if I may invent some syntax on the fly...
> > >
> > > Instead of
> > >         step take 0513-R-0050
> > > do
> > >         step weighted-take 99 0513-R-0050 1 0513-R-0060
> > >
> > > That is, 99% of the time take room 0513-R-0050 for the 3rd copies, 1%
> > > of the time take room 0513-R-0060.
> > > With a mechanism like that, we could gradually adjust those "step
> > > weighted-take" lines until 100% of the 3rd copies were in 0513-R-0060.
> > >
> > > I have a feeling that something equivalent to that is already possible
> > > with weight-sets or some other clever CRUSH trickery.
> > > Any ideas?
> > >
> > > Best Regards,
> > >
> > > Dan
> > Would it be possible in your case to create a parent datacenter bucket
> > to hold both rooms and assign their relative weights there, then for
> > the third replica do a step take to this parent bucket (rough sketch
> > below)? It's not elegant, but it may do the trick.
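> >
> > A rough sketch of what I mean, at the level of the decompiled crush
> > map (the bucket name, id and weights below are made up; you would
> > start room 0513-R-0060 low and raise it gradually):
> >
> >      datacenter site-0513 {
> >              id -100
> >              alg straw2
> >              hash 0  # rjenkins1
> >              item 0513-R-0050 weight 99.000
> >              item 0513-R-0060 weight 1.000
> >      }
> >
> > and then for the third replica in the poolB rule:
> >
> >          step take site-0513
> >          step chooseleaf firstn -2 type rack
> >          step emit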
>
> Hey, that might work! Both rooms are already in the default root:
>
>   -1       5589.18994 root default
>   -2       4428.02979     room 0513-R-0050
>  -65       1161.16003     room 0513-R-0060
>  -71        578.76001         ipservice S513-A-IP38
>  -76        582.40002         ipservice S513-A-IP63
>
> so I'll play with a test pool and weight down room 0513-R-0060 to see
> if this can work (roughly the steps sketched below).
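>
> Roughly the offline test I have in mind (the rule id, rule name and
> pool name below are placeholders):
>
>   ceph osd getcrushmap -o crushmap.bin
>   crushtool -d crushmap.bin -o crushmap.txt
>   # edit crushmap.txt: add a new rule / adjust the relevant weights,
>   # leaving the existing rules untouched
>   crushtool -c crushmap.txt -o crushmap.new
>   # check where a size-3 rule would place PGs with the candidate map
>   crushtool -i crushmap.new --test --rule 1 --num-rep 3 --show-mappings
>   # if that looks sane, inject it and point a throwaway pool at the
>   # new rule before touching poolB
>   ceph osd setcrushmap -i crushmap.new
>   ceph osd pool create testmigration 64 64 replicated <new-rule-name>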

I don't think this will work — it will probably change the seed that
is used and mean that the rule tries to move *everything*, not just
the third PG replicas. But perhaps I'm mistaken about the details of
this mechanism...

The CRUSH weighted-take idea is interesting, but I'm not sure I would want
to do something probabilistic like that in this situation. What we've
discussed before — but *not* implemented or even scheduled, sadly for
you here — is having multiple CRUSH "epochs" active at the same time,
and letting the OSDMap specify a pg as the crossover point from one
CRUSH epoch to the next. (Among other things, this would let us
finally limit the number of backfills in progress at the cluster
level!)

I'm less familiar with the weight-set mechanism, so you might have a
chance there? Mostly though this is just not something RADOS is set up
to do, because we expect the cluster to be able to handle the backfill
you throw at it, once the per-OSD config is correct. (It has become
clear that the per-OSD configs need to do a better prioritization job
if that's ever going to work, or maybe we're just completely wrong
anyway. But obviously it takes more time to change the architecture
and the code to handle it than to just identify there's a problem.)
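
On the weight-set front, the commands I'd start from are roughly the
following (from memory, so check the docs; whether the reweight accepts
a bucket name like a room, rather than an individual osd, is exactly
the part you'd need to verify):

  ceph osd crush weight-set create <pool> positional
  ceph osd crush weight-set reweight <pool> <item> <weight> [<weight>...]
  ceph osd crush weight-set dump
  ceph osd crush weight-set rm <pool>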
*sigh*
-Greg

>
> Thanks!
>
> -- dan
>
> > The suggested step weighted-take would be more flexible, as it could
> > be adjusted per replica, but I do not know if you can do this with the
> > existing code.
> >
> > Maged
> >
> >