[ceph-users] HEALTH_ERR : PG_DEGRADED_FULL

Karun Josy karunjosy1 at gmail.com
Wed Dec 6 23:06:07 PST 2017


Hello,

I am seeing health error in our production cluster.

 health: HEALTH_ERR
            1105420/11038158 objects misplaced (10.015%)
            Degraded data redundancy: 2046/11038158 objects degraded (0.019%), 102 pgs unclean, 2 pgs degraded
            Degraded data redundancy (low space): 4 pgs backfill_toofull
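
For reference, the PGs behind each of these conditions and the configured
full/backfill thresholds can be listed like this (a rough sketch; assumes a
Luminous-era cluster with the standard ratio settings):

$ ceph health detail              # lists the PGs that are backfill_toofull / degraded
$ ceph pg dump_stuck unclean      # shows the unclean PGs and the OSDs they map to
$ ceph osd dump | grep -i ratio   # shows full_ratio, backfillfull_ratio, nearfull_ratio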

The cluster was running out of space, so I was in the process of adding a disk.
After this error appeared, we deleted some data to free up space.
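
If I understand the docs correctly, the backfillfull threshold can also be
raised temporarily so the toofull PGs are able to backfill while the new disk
is brought in (a sketch only; 0.91 is just an example value, not a
recommendation):

$ ceph osd set-backfillfull-ratio 0.91   # raise slightly above the default (typically 0.90)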


This is the current usage after clearing some space; earlier, 3 of the disks
were at 85%.
========

$ ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
 0   ssd 1.86469  1.00000  1909G  851G 1058G 44.59 0.78 265
16   ssd 0.87320  1.00000   894G  361G  532G 40.43 0.71 112
 1   ssd 0.87320  1.00000   894G  586G  307G 65.57 1.15 163
 2   ssd 0.87320  1.00000   894G  490G  403G 54.84 0.96 145
17   ssd 0.87320  1.00000   894G  163G  731G 18.24 0.32  58
 3   ssd 0.87320  1.00000   894G  616G  277G 68.98 1.21 176
 4   ssd 0.87320  1.00000   894G  593G  300G 66.42 1.17 179
 5   ssd 0.87320  1.00000   894G  419G  474G 46.89 0.82 130
 6   ssd 0.87320  1.00000   894G  422G  472G 47.21 0.83 129
 7   ssd 0.87320  1.00000   894G  397G  496G 44.50 0.78 115
 8   ssd 0.87320  1.00000   894G  656G  237G 73.44 1.29 184
 9   ssd 0.87320  1.00000   894G  560G  333G 62.72 1.10 170
10   ssd 0.87320  1.00000   894G  623G  270G 69.78 1.22 183
11   ssd 0.87320  1.00000   894G  586G  307G 65.57 1.15 172
12   ssd 0.87320  1.00000   894G  610G  283G 68.29 1.20 172
13   ssd 0.87320  1.00000   894G  597G  296G 66.87 1.17 180
14   ssd 0.87320  1.00000   894G  597G  296G 66.79 1.17 168
15   ssd 0.87320  1.00000   894G  610G  283G 68.32 1.20 179
                    TOTAL 17110G 9746G 7363G 56.97
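
The usage is also quite uneven (OSD 17 is at ~18% while OSD 8 is at ~73%), so
rebalancing may be part of the fix; a sketch of the dry-run and actual
commands, assuming the standard reweight-by-utilization tooling:

$ ceph osd test-reweight-by-utilization   # dry run: shows which OSDs would be reweighted
$ ceph osd reweight-by-utilization        # applies the reweight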

How can I fix this? Please help!

Karun