[ceph-users] decreasing number of PGs

David Turner drakonstein at gmail.com
Tue Oct 3 06:07:41 PDT 2017


Just remember that the warning appears at > 300 PGs/OSD, but the
recommendation is 100.  I would try to reduce your PG count to roughly one
third of its current value, or as close to that as you can get. On my
learning cluster I had to migrate data between pools multiple times,
reducing the number of PGs as I went, until I got to a more normal amount.
It affected the clients a fair bit, but that cluster is still a 3-node
cluster in active use.
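For reference, the figure that warning checks is roughly the total number of PG replicas divided by the OSD count. A quick sketch of the arithmetic (the pool numbers below are made-up examples, not from any real cluster):

```shell
# Rough PGs-per-OSD arithmetic behind the 300 warning / 100 recommendation.
# All values here are hypothetical examples.
PG_NUM=4096        # pg_num of an example pool
SIZE=3             # replica count of that pool
OSDS=12            # number of OSDs in the cluster
echo $(( PG_NUM * SIZE / OSDS ))   # PG replicas landing on each OSD
```

With several pools you sum pg_num * size across all of them before dividing, which is why a few over-sized pools can push every OSD past the threshold.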

Note that the data movements were done with rsync, dd, etc. for RBDs and CephFS.
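For RBD images specifically, the pool-to-pool migration can also be done with the rbd CLI instead of dd. A sketch of the sequence (pool and image names are hypothetical; the commands are echoed here so the outline is clear without a live cluster -- drop the echos to run them for real, and only after verifying the copy):

```shell
# Hypothetical names: rbd-old is the pool with too many PGs,
# rbd-new is its replacement created with a lower pg_num.
set -e
echo "ceph osd pool create rbd-new 128 128"
echo "rbd export rbd-old/vm-disk1 - | rbd import - rbd-new/vm-disk1"
# Only after all images are copied and verified:
echo "ceph osd pool delete rbd-old rbd-old --yes-i-really-really-mean-it"
```

Clients pointing at the old pool name have to be reconfigured, which is part of why this affects them during the migration.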

On Tue, Oct 3, 2017, 8:54 AM Andrei Mikhailovsky <andrei at arhont.com> wrote:

> Thanks for your suggestions and help
>
> Andrei
> ------------------------------
>
> *From: *"David Turner" <drakonstein at gmail.com>
> *To: *"Jack" <ceph at jack.fr.eu.org>, "ceph-users" <
> ceph-users at lists.ceph.com>
> *Sent: *Monday, 2 October, 2017 22:28:33
> *Subject: *Re: [ceph-users] decreasing number of PGs
>
> Adding more OSDs or deleting and recreating the pools that have too many
> PGs are your only two options for reducing the number of PGs per OSD.
> Reducing pg_num is on the Ceph roadmap, but it is not a currently
> supported feature.  You can alternatively raise the warning threshold,
> but the underlying problem is still one you should address in your
> cluster.
>
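The threshold adjustment mentioned above can be sketched as follows. The option name varies by release (mon_pg_warn_max_per_osd is the pre-Luminous name, default 300), so check the documentation for your version first; the command is echoed here so the sketch runs without a live cluster:

```shell
# Raise the PGs-per-OSD warning threshold (a workaround, not a fix).
# mon_pg_warn_max_per_osd is the pre-Luminous option name; verify the
# name for your release before applying.
echo "ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'"
# Persist it across monitor restarts in ceph.conf under [mon]:
echo "mon_pg_warn_max_per_osd = 400"
```

This only silences the warning; the PG count per OSD is unchanged.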
> On Mon, Oct 2, 2017 at 4:02 PM Jack <ceph at jack.fr.eu.org> wrote:
>
>> You cannot;
>>
>>
>> On 02/10/2017 21:43, Andrei Mikhailovsky wrote:
>> > Hello everyone,
>> >
>> > what is the safest way to decrease the number of PGs in the cluster.
>> Currently, I have too many per osd.
>> >
>> > Thanks
>> >
>> >
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users at lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>

