[ceph-users] CRUSH rule seems to work fine not for all PGs in erasure coded pools

Jakub Jaszewski jaszewski.jakub at gmail.com
Tue Nov 28 06:24:21 PST 2017


Hi David, thanks for the quick feedback.

Then why were some PGs remapped while others were not?

# IT LOOKS LIKE 338 PGs IN ERASURE CODED POOLS HAVE BEEN REMAPPED
# I DON'T GET WHY 540 PGs ARE STILL IN THE active+undersized+degraded STATE

root@host01:~# ceph pg dump pgs_brief | grep 'active+remapped'
dumped pgs_brief in format plain
16.6f active+remapped [43,2147483647,2,31,12] 43 [43,33,2,31,12] 43
16.6e active+remapped [10,5,35,44,2147483647] 10 [10,5,35,44,41] 10
....
root@host01:~# egrep '16.6f|16.6e' PGs_on_HOST_host05
16.6f active+clean [43,33,2,59,12] 43 [43,33,2,59,12] 43
16.6e active+clean [10,5,49,35,41] 10 [10,5,49,35,41] 10
root@host01:~#
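(For what it's worth, the 338 / 540 figures above are just counts of PG states taken from the same dump; something along these lines should reproduce them, a rough sketch rather than the exact pipeline:

ceph pg dump pgs_brief | grep -c 'active+remapped'              # number of remapped PGs
ceph pg dump pgs_brief | grep -c 'active+undersized+degraded'   # number of undersized+degraded PGs
)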

Take PG 16.6f, for example: prior to stopping the ceph services it was on [43,33,2,59,12],
and it was then remapped to [43,33,2,31,12], so OSD 31 and OSD 33 ended up on the
same host.
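To double-check the host placement I would look at something like this (just a sketch; "ceph osd find" prints the CRUSH location, i.e. the host, of a given OSD):

ceph osd find 31      # host holding osd.31
ceph osd find 33      # host holding osd.33
ceph osd tree         # full host/OSD hierarchy
ceph pg map 16.6f     # current up and acting sets for this PG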

But PG 16.ee, for example, ended up in the active+undersized+degraded state.
Prior to the services stop it was on:

pg_stat state up up_primary acting acting_primary
16.ee active+clean [5,22,33,55,45] 5 [5,22,33,55,45] 5

and after the services on the host were stopped it was not remapped:

16.ee	active+undersized+degraded	[5,22,33,2147483647,45]	5	[5,22,33,2147483647,45]	5
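To dig into why CRUSH gives up on this particular PG, I plan to look at something like the following (again only a sketch; <ec-pool-name> is a placeholder for the actual erasure coded pool name):

ceph pg 16.ee query                          # up/acting sets and recovery_state details
ceph osd pool get <ec-pool-name> crush_rule  # which CRUSH rule the EC pool uses
ceph osd crush rule dump                     # check the failure domain / chooseleaf steps of that rule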