[ceph-users] CRUSH rule seems to work fine not for all PGs in erasure coded pools
drakonstein at gmail.com
Thu Nov 30 05:45:45 PST 2017
active+clean+remapped is not a healthy state for a PG. If it actually were
going to a new osd it would say backfill+wait or backfilling and eventually
would get back to active+clean.
I'm not certain what the active+clean+remapped state means. Perhaps a PG
query, PG dump, etc. can give more insight. In any case, this is not a
healthy state, and your test is still removing a node so that you have
fewer hosts than you need to be healthy.
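Something along these lines should show more detail (a sketch; 1.2a is a
placeholder PG id, substitute one from your own cluster):

    ceph pg dump_stuck unclean         # list PGs stuck in a non-clean state
    ceph pg 1.2a query                 # full state of one PG, incl. up/acting sets
    ceph pg dump pgs | grep remapped   # which PGs are currently remapped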
On Thu, Nov 30, 2017, 5:38 AM Jakub Jaszewski <jaszewski.jakub at gmail.com> wrote:
> I've just done a ceph upgrade jewel -> luminous and am facing the same case...
> # EC profile
> There are 5 hosts in the cluster and I ran systemctl stop ceph.target on one
> of them. Some PGs from the EC pool were remapped (active+clean+remapped
> state) even though there were not enough hosts left in the cluster, but some
> are still in active+undersized+degraded state.
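> One way to inspect the remapping (a sketch; the rule name below is a
> placeholder, use the crush rule of your EC pool):
>
>     ceph osd crush rule dump my_ec_rule   # check the rule's failure domain
>     ceph pg ls remapped                   # UP vs ACTING sets of remapped PGs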
> root at host01:~# ceph status
>   cluster:
>     id:     a6f73750-1972-47f6-bcf5-a99753be65ad
>     health: HEALTH_WARN
>             Degraded data redundancy: 876/9115 objects degraded (9.611%),
>             540 pgs unclean, 540 pgs degraded, 540 pgs undersized
>
>   services:
>     mon: 3 daemons, quorum host01,host02,host03
>     mgr: host01(active), standbys: host02, host03
>     osd: 60 osds: 48 up, 48 in; 484 remapped pgs
>     rgw: 3 daemons active
>
>   data:
>     pools:   19 pools, 3736 pgs
>     objects: 1965 objects, 306 MB
>     usage:   5153 MB used, 174 TB / 174 TB avail
>     pgs:     876/9115 objects degraded (9.611%)
>              2712 active+clean
>              540  active+undersized+degraded
>              484  active+clean+remapped
>
>   io:
>     client: 17331 B/s rd, 20 op/s rd, 0 op/s wr
>
> root at host01:~#
> Anyone here able to explain this behavior to me?