[ceph-users] ceph pg/pgp number calculation

Zhenshi Zhou deaderzzs at gmail.com
Thu Oct 18 19:53:57 PDT 2018


Hi David,

Thanks for the explanation!
I'll look into how much data each pool will use.

Thanks!

David Turner <drakonstein at gmail.com> wrote on Thu, Oct 18, 2018 at 9:26 PM:

> Not all pools need the same number of PGs. When you get to this many
> pools you want to start calculating how much data each pool will hold.
> If one of your pools will hold 80% of your data, it should have 80% of
> your PGs. The metadata pools for rgw likely won't need more than 8 or
> so PGs each. And if your rgw data pool is only going to hold a little
> scratch data, it won't need very many PGs either.
>
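A minimal sketch of the proportional split described above, in Python.
The OSD count, pool names, and data shares are hypothetical illustration
values, and each pool's count is rounded to the nearest power of two, as
is the usual convention:

    # Distribute a cluster-wide PG budget across pools by expected data share.
    # All values below are made-up illustration numbers, not recommendations.
    osds = 9
    replication = 3
    total_pgs = osds * 100 // replication   # cluster-wide budget, ~100 PGs per OSD

    # Assumed fraction of total data each pool will hold; must sum to 1.0.
    shares = {
        "rbd": 0.5,
        "cephfs_data": 0.3,
        "default.rgw.buckets.data": 0.2,
    }

    def nearest_power_of_two(n):
        # Round n to the closest power of two (ties round up).
        p = 1
        while p * 2 <= n:
            p *= 2
        return p * 2 if (n - p) >= (p * 2 - n) else p

    for pool, share in shares.items():
        print(pool, nearest_power_of_two(total_pgs * share))

With these assumed shares, rbd gets 128 PGs and the other two pools get
64 each, instead of every pool getting an identical count.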
> On Tue, Oct 16, 2018, 3:35 AM Zhenshi Zhou <deaderzzs at gmail.com> wrote:
>
>> Hi,
>>
>> I have a cluster that has been serving rbd and cephfs storage for a
>> period of time. I added rgw to the cluster yesterday and want it to
>> serve object storage. Everything seems good.
>>
>> What I'm confused about is how to calculate the pg/pgp numbers. As we
>> all know, the formula for calculating PGs is:
>>
>> Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
>>
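As a concrete (hypothetical) worked instance of that formula: with 9
OSDs, 3x replication, and 3 pools, ((9 * 100) / 3) / 3 = 100 PGs per
pool, which would then be rounded to the nearest power of two, i.e. 128.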
>> Before I created rgw, the cluster had 3 pools (rbd, cephfs_data,
>> cephfs_meta). Now it has 8 pools, including the ones the object
>> service may use: '.rgw.root', 'default.rgw.control',
>> 'default.rgw.meta', 'default.rgw.log' and 'default.rgw.buckets.index'.
>>
>> Should I recalculate the pg number using the new pool count of 8, or
>> should I keep using the old pg number?
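One practical aside, assuming a pre-Nautilus release (current when this
thread was written): pg_num on an existing pool can be increased but not
decreased, via 'ceph osd pool set <pool> pg_num <n>' followed by the
matching 'ceph osd pool set <pool> pgp_num <n>'. So it is safer to start
new pools small and grow them later than to overshoot.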