[ceph-users] Erasure coding with more chunks than servers

Caspar Smit casparsmit at supernas.eu
Fri Oct 5 02:21:32 PDT 2018


Hi Vlad,

You can check this blog:
http://cephnotes.ksperis.com/blog/2017/01/27/erasure-code-on-small-clusters

Note: be aware that these settings do not automatically cover a node
failure.

Check out this thread to see why:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024423.html
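For reference, a sketch of the kind of setup being discussed, adapting the pattern from the blog above to k=5 m=2 on 5 hosts. All names here (profile "ec52", pool "ecpool", rule id, root "default") are hypothetical placeholders, and the CRUSH rule is a config fragment you would splice into a decompiled crushmap, not a command:

```shell
# Hypothetical profile name; adjust k/m and names for your cluster.
ceph osd erasure-code-profile set ec52 k=5 m=2 crush-failure-domain=osd

# Custom CRUSH rule (decompiled-crushmap syntax, per the blog above):
# pick hosts first, then at most 2 OSDs per host, so no single host
# ever holds more than 2 of an object's 7 chunks.
#
#   rule ec52 {
#       id 52
#       type erasure
#       step set_chooseleaf_tries 5
#       step set_choose_tries 100
#       step take default
#       step choose indep 0 type host
#       step chooseleaf indep 2 type osd
#       step emit
#   }

# Create the pool with the profile and (hypothetical) rule name:
ceph osd pool create ecpool 128 128 erasure ec52 ec52
```

Even with chunks spread like this, the caveat above still applies: the pool's min_size (which defaults to k+1 = 6 here) decides when I/O stalls, so one host down can leave only 5 available chunks and pause the pool. That is exactly what the second thread linked above discusses.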

Kind regards,
Caspar


On Thu, 4 Oct 2018 at 20:27, Vladimir Brik <
vladimir.brik at icecube.wisc.edu> wrote:

> Hello
>
> I have a 5-server cluster and I am wondering if it's possible to create a
> pool that uses a k=5 m=2 erasure code. In my experiments, I ended up with
> pools whose PGs were stuck in the creating+incomplete state even when I
> created the erasure code profile with --crush-failure-domain=osd.
>
> Assuming that what I want to do is possible, will CRUSH distribute the
> chunks evenly among servers, so that if I need to bring one server down
> (e.g. for a reboot), clients' ability to read or write any object would
> not be disrupted? (I guess something would need to ensure that no server
> holds more than two chunks of an object.)
>
> Thanks,
>
> Vlad
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

