[ceph-users] EC K + M Size

Janne Johansson icepic.dz at gmail.com
Sat Nov 3 01:47:22 PDT 2018


On Sat, 3 Nov 2018 at 09:10, Ashley Merrick <singapore at amerrick.co.uk> wrote:
>
> Hello,
>
> Tried to do some reading online but was unable to find much.
>
> I can imagine that a higher K + M size with EC requires more CPU to reassemble the shards into the required object.
>
> But is there any benefit or drawback to going with a larger K + M? Obviously there is the size benefit, but technically could it also improve reads, since more OSDs would each provide a smaller section of the data required to assemble the object?
>
> Are there any gotchas that should be known about, for example, when going with a 4+2 vs a 10+2?
>

If one host goes down in a 10+2 scenario, then 11 or 12 other
machines need to get involved in order to repair the lost data (at
least k=10 surviving shards must be read to reconstruct each missing
shard, plus the hosts receiving the rebuilt copies). This means that
if your cluster has close to 12 hosts, most or all of the servers
pick up extra work. I saw some old Yahoo post from long ago stating
that the primary (whose job it is to piece the shards together) would
only send out 8 requests at any given time, and IF that is still
true, it would make 6+2 somewhat more efficient. Still, EC is seldom
about performance; rather, it is about saving space while still
allowing 1, 2 or 3 drives to die without losing data, by using 1, 2
or 3 checksum (m) pieces.
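
For what it's worth, here is a minimal sketch of trying both layouts
on a test cluster (the profile and pool names are made up, and pg
counts are just placeholders). The raw-space multiplier is (k+m)/k,
i.e. 1.5x for 4+2 versus 1.2x for 10+2:

  # 4+2: survives 2 failures, raw usage = (4+2)/4 = 1.5x
  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool-4-2 128 128 erasure ec-4-2

  # 10+2: survives 2 failures, raw usage = (10+2)/10 = 1.2x, but with
  # crush-failure-domain=host it needs at least 12 hosts, and repairing
  # a lost host pulls data from most of the remaining ones
  ceph osd erasure-code-profile set ec-10-2 k=10 m=2 crush-failure-domain=host
  ceph osd pool create ecpool-10-2 128 128 erasure ec-10-2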


-- 
May the most significant bit of your life be positive.
