[ceph-users] read performance, separate client CRUSH maps or limit osd read access from each client

Vlad Kopylov vladkopy at gmail.com
Fri Nov 16 10:07:32 PST 2018


This is what Jean suggested. I understand it, and it works with primary
affinity. *But what I need is for all clients to access the same files, not
separate sets (like red/blue/green).*

Thanks Konstantin.

On Fri, Nov 16, 2018 at 3:43 AM Konstantin Shalygin <k0ste at k0ste.ru> wrote:

> On 11/16/18 11:57 AM, Vlad Kopylov wrote:
> > Exactly. But write operations should go to all nodes.
>
> This can be set via primary affinity [1], when a ceph client reads or
> writes data, it always contacts the primary OSD in the acting set.
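> As a minimal sketch of steering primaries (the osd IDs follow the layout
> below; the injectargs line is only needed on pre-Luminous releases, and
> the weights here are illustrative):
>
```shell
# On pre-Luminous clusters, primary affinity must be enabled first
ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity=true'

# Keep osd.0 as a preferred primary; make osd.3 and osd.6 unlikely primaries,
# so reads for their PGs are served by other replicas
ceph osd primary-affinity osd.0 1.0
ceph osd primary-affinity osd.3 0.0
ceph osd primary-affinity osd.6 0.0
```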
>
>
> If you want to totally segregate I/O, you can use device classes:
>
> Just create osds with different classes:
>
> dc1
>    host1
>      red   osd.0 (primary)
>      blue  osd.1
>      green osd.2
> dc2
>    host2
>      red   osd.3
>      blue  osd.4 (primary)
>      green osd.5
> dc3
>    host3
>      red   osd.6
>      blue  osd.7
>      green osd.8 (primary)
>
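> The layout above could be realized by tagging each OSD with a custom
> device class, e.g. (a sketch; an existing class must be cleared before a
> new one can be set):
>
```shell
# Clear any auto-assigned class (hdd/ssd), then tag each OSD
ceph osd crush rm-device-class osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 osd.8
ceph osd crush set-device-class red   osd.0 osd.3 osd.6
ceph osd crush set-device-class blue  osd.1 osd.4 osd.7
ceph osd crush set-device-class green osd.2 osd.5 osd.8
```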
>
> create 3 crush rules:
>
> ceph osd crush rule create-replicated red   default host red
> ceph osd crush rule create-replicated blue  default host blue
> ceph osd crush rule create-replicated green default host green
>
>
> and 3 pools:
>
> ceph osd pool create red   64 64 replicated red
> ceph osd pool create blue  64 64 replicated blue
> ceph osd pool create green 64 64 replicated green
>
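> To confirm that each pool ended up on its intended rule, the mapping can
> be checked afterwards (a sketch; the reported rule names/IDs depend on
> the cluster):
>
```shell
# List the CRUSH rules that were created
ceph osd crush rule ls

# Show which rule each pool uses
ceph osd pool get red crush_rule
ceph osd pool get blue crush_rule
ceph osd pool get green crush_rule
```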
>
> [1]
>
> http://docs.ceph.com/docs/master/rados/operations/crush-map/#primary-affinity
>
>
>
> k
>
>