[ceph-users] Adding multiple OSD
richard.hesketh at rd.bbc.co.uk
Tue Dec 5 03:57:55 PST 2017
On 05/12/17 09:20, Ronny Aasen wrote:
> On 05. des. 2017 00:14, Karun Josy wrote:
>> Thank you for detailed explanation!
>> Got one another doubt,
>> This is the total space available in the cluster:
>> TOTAL: 23490G   Used: 10170G   Avail: 13320G
>> But ecpool shows max avail as just 3 TB. What am I missing ?
>> Karun Josy
> Without knowing the details of your cluster this is just a guess, but
> perhaps one of your hosts has less free space than the others. A replicated
> pool can pick 3 hosts that have plenty of space, but erasure coding may
> require more hosts, so the host with the least space becomes the limiting
> factor. Run
> ceph osd df tree
> to see how it looks.
> kind regards
> Ronny Aasen
From previous emails, the erasure code profile is k=5,m=3 with a host failure domain, so the EC pool does use all eight hosts for every object. I agree it's very likely that your hosts currently have heterogeneous capacity, and that the maximum data in the EC pool is limited by the size of the smallest host.
Also remember that with this profile you have a 3/5 overhead on your data, so 1GB of real data stored in the pool translates to 1.6GB of raw data on disk. The pool usage and MAX AVAIL stats are given in terms of real data, while the cluster TOTAL usage/availability is expressed in terms of raw space (since the real usable capacity varies with pool settings). If you check, you will probably find that your lowest-capacity host has nearly 6TB of space free, which would let you store a little over 3.5TB of real data in your EC pool.
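To make the arithmetic above concrete, here is a small sketch of the overhead calculation. The function names and the 6TB figure are illustrative only; this just mirrors the reasoning in this reply, under the assumption that the smallest host's free space is the limiting factor.

```python
def raw_ratio(k, m):
    """Raw bytes consumed per byte of real data in a k+m EC pool."""
    return (k + m) / k

def ec_max_avail(limiting_free_tb, k, m):
    """Rough real-data capacity implied by the limiting free space,
    dividing out the EC overhead (hypothetical helper, not a Ceph API)."""
    return limiting_free_tb / raw_ratio(k, m)

# k=5,m=3 as in this thread: 1 GB of real data -> 1.6 GB of raw data.
print(raw_ratio(5, 3))          # 1.6
# ~6 TB free on the smallest host -> a little over 3.5 TB of real data.
print(ec_max_avail(6.0, 5, 3))  # 3.75
```

The same ratio explains the headline numbers: MAX AVAIL is quoted in real data, while TOTAL/Avail are raw, so the two will always differ by at least this factor.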