[ceph-users] CephFS pg ratio of metadata/data pool
wooertim at gmail.com
Thu Mar 16 04:12:59 PDT 2017
We have a CephFS whose metadata pool and data pool share the same set of OSDs. According to the PG calculation:
(100*num_osds) / num_replica
If we have 56 OSDs, this suggests we should set 5120 PGs on each pool to distribute data evenly across all the OSDs. However, if we set both the metadata pool and the data pool to 5120 PGs, we get a “too many PGs” warning. We currently have 2048 PGs on both the metadata pool and the data pool, but it seems data may not be distributed evenly across the OSDs because there are not enough PGs. Can we set a smaller pg_num on the metadata pool and a larger one on the data pool, e.g. 1024 PGs for metadata and 4096 for data? Is there a recommended ratio? Would this cause any performance issues?
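
For reference, here is a minimal sketch of that arithmetic in Python. The target of 100 PGs per OSD, the replica size of 3, and the round-up-to-the-next-power-of-two step are assumptions taken from the usual PG calculator guidance, not values confirmed for this cluster:

    def suggested_pg_num(num_osds, num_replica, target_pgs_per_osd=100):
        # (target_pgs_per_osd * num_osds) / num_replica, rounded up
        # to the next power of two, per the common PG-calc convention
        raw = (target_pgs_per_osd * num_osds) / num_replica
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    # 56 OSDs, replica size 3 (assumed): 5600 / 3 ~= 1867 -> 2048
    print(suggested_pg_num(56, 3))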