[ceph-users] why set pg_num do not update pgp_num

Wido den Hollander wido at 42on.com
Fri Oct 19 01:06:06 PDT 2018



On 10/19/18 7:51 AM, xiang.dai at iluvatar.ai wrote:
> Hi!
> 
> I use ceph 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic
> (stable), and find that:
> 
> When I expand the whole cluster, I update pg_num and all commands succeed,
> but the status is as below:
>   cluster:
>     id:     41ef913c-2351-4794-b9ac-dd340e3fbc75
>     health: HEALTH_WARN
>             3 pools have pg_num > pgp_num
> 
> Then I update pgp_num too, and the warning goes away.
> 
> What confuses me is that when I create the whole cluster for the first time,
> I use "ceph osd pool create pool_name pg_num", and pgp_num is automatically
> set equal to pg_num.
> 
> But "ceph osd pool set pool_name pg_num" does not do this.
> 
> Why is it designed this way?
> 
> Why is pgp_num not updated automatically when pg_num is updated?
> 

Because when you change pg_num, only the Placement Groups themselves are
created; data doesn't move yet. pgp_num (Placement Groups for Placement)
influences how CRUSH places data.
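
The difference shows up in the pool parameters. A minimal sketch, assuming a
hypothetical pool named "rbd":

```
# Increase only pg_num: new (empty) PGs are created, but CRUSH
# placement is unchanged, so no data moves yet.
ceph osd pool set rbd pg_num 256

# The pool now reports pg_num > pgp_num, which is what triggers
# the HEALTH_WARN shown above.
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num
```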

When you change that value, data actually starts to move.

pgp_num can never be larger than pg_num, though.

Some people choose to increase pgp_num in small steps so that the data
migration isn't massive.
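
A minimal sketch of such a stepwise increase, again assuming a hypothetical
pool named "rbd" and a target of 256 PGs:

```
# Raise pgp_num towards pg_num in small steps, letting the
# cluster settle back to HEALTH_OK between steps.
for step in 64 128 192 256; do
    ceph osd pool set rbd pgp_num $step
    # Wait until the rebalancing from this step has finished.
    while ! ceph health | grep -q HEALTH_OK; do
        sleep 60
    done
done
```

The step sizes and the polling interval are arbitrary; choose them based on
how much concurrent data movement the cluster can tolerate.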

Wido

> Thanks
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

