[ceph-users] Erasure coding with more chunks than servers

Paul Emmerich paul.emmerich at croit.io
Fri Oct 5 07:01:40 PDT 2018


Oh, and you'll need to use m>=3 to ensure availability during a node failure.
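
Rough arithmetic for the 5-host case, assuming min_size is kept at k+1
(the usual recommendation for EC pools):

    k=5, m=2 -> 7 chunks on 5 hosts, so at least one host holds 2 chunks.
                Losing that host leaves 5 = k chunks, which is below
                min_size = 6, and the affected PGs go inactive until the
                host comes back.

    k=5, m=3 -> 8 chunks, and with a CRUSH rule that places at most 2
                chunks per host, losing any one host still leaves at
                least 6 chunks >= min_size = 6, so client I/O continues.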


Paul
On Fri, Oct 5, 2018 at 11:22, Caspar Smit <casparsmit at supernas.eu> wrote:
>
> Hi Vlad,
>
> You can check this blog: http://cephnotes.ksperis.com/blog/2017/01/27/erasure-code-on-small-clusters
>
> Note! Be aware that these settings do not automatically cover a node failure.
>
> Check out this thread why:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024423.html
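> 
> If I remember correctly, the rule in that blog post boils down to
> choosing hosts first and then two OSDs within each host. A rough sketch
> adapted to a k=5 m=3 profile (8 chunks spread over 4 of the 5 hosts;
> the rule name and id below are just examples):
> 
>     rule ec_by_host {
>             id 2
>             type erasure
>             step set_chooseleaf_tries 5
>             step set_choose_tries 100
>             step take default
>             step choose indep 4 type host
>             step chooseleaf indep 2 type osd
>             step emit
>     }
> 
> The edited map can be recompiled with crushtool, loaded with
> "ceph osd setcrushmap -i", and the pool created against that rule.
> With at most 2 chunks of a PG per host, a single host going down costs
> at most 2 chunks, which is exactly the node-failure caveat above.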
>
> Kind regards,
> Caspar
>
>
> On Thu, Oct 4, 2018 at 20:27, Vladimir Brik <vladimir.brik at icecube.wisc.edu> wrote:
>>
>> Hello
>>
>> I have a 5-server cluster and I am wondering if it's possible to create a
>> pool that uses a k=5 m=2 erasure code. In my experiments, I ended up with
>> pools whose pgs are stuck in creating+incomplete state even when I
>> created the erasure code profile with --crush-failure-domain=osd.
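>>
>> (Concretely, what I ran was along these lines; the profile and pool
>> names are just placeholders:
>>
>>     ceph osd erasure-code-profile set ec52 k=5 m=2 crush-failure-domain=osd
>>     ceph osd pool create ecpool 64 64 erasure ec52
>> )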
>>
>> Assuming that what I want to do is possible, will CRUSH distribute
>> chunks evenly among servers, so that if I need to bring one server down
>> (e.g. reboot), clients' ability to write or read any object would not be
>> disrupted? (I guess something would need to ensure that no server holds
>> more than two chunks of an object)
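>>
>> (I assume that once the pool exists I could verify the placement by
>> mapping each PG's acting set back to hosts, with something like:
>>
>>     ceph pg ls-by-pool ecpool    # 'ecpool' being whatever the pool is named
>>     ceph pg map <pgid>           # acting set of OSDs for a given PG
>>     ceph osd tree                # which host each of those OSDs is under
>> )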
>>
>> Thanks,
>>
>> Vlad



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

