[ceph-users] Erasure pool

Caspar Smit casparsmit at supernas.eu
Thu Nov 9 00:55:20 PST 2017

2017-11-08 22:05 GMT+01:00 Marc Roos <M.Roos at f1-outsourcing.eu>:

> Can anyone advise on an erasure pool config to store
> - files between 500MB and 8GB, total 8TB
> - just for archiving, not much reading (few files a week)
> - hdd pool
> - now 3 node cluster (4th coming)
> - would like to save on storage space
> I was thinking of a profile with jerasure k=3 m=2, but maybe this lrc
> is better? Or wait for the 4th node and choose k=4 m=2?
Just to keep in mind:

In a three-node setup with k=3 and m=2 you will have to set the failure
domain to 'osd' (the default failure domain of 'host' would require 5 nodes).
Furthermore, when using 'osd' as the failure domain you would probably have
(some) inaccessible data when a node reboots and/or fails, since there is a
chance that 3 (or more) of the 5 chunks are on the same node.
The same goes for 4 nodes with k=4 m=2 (failure domain 'host' would require 6
nodes).
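For reference, a k=3 m=2 profile with the 'osd' failure domain discussed above could be created roughly like this on Luminous or newer (the profile name, pool name, and PG count are placeholders for illustration, not values from the original mail):

```shell
# Hypothetical profile name; crush-failure-domain=osd allows multiple
# chunks per host, which is what makes a 3-node k=3 m=2 setup possible
# (and also what risks data unavailability when a whole node goes down).
ceph osd erasure-code-profile set ec-k3-m2 \
    k=3 m=2 \
    plugin=jerasure \
    crush-failure-domain=osd

# Create an erasure-coded pool using that profile.
# Adjust pg_num/pgp_num (here 128) to your cluster size.
ceph osd pool create archive 128 128 erasure ec-k3-m2
```

Note that the failure domain is baked into the CRUSH rule created for the pool, so changing it later means creating a new rule (or profile and pool).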


> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
