[ceph-users] inconsistent pg on erasure coded pool

Kenneth Waegeman kenneth.waegeman at ugent.be
Wed Oct 4 05:02:03 PDT 2017


Hi,

We have an inconsistency / scrub error on an erasure-coded pool that I 
can't seem to resolve.

[root@osd008 ~]# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 5.144 is active+clean+inconsistent, acting 
[81,119,148,115,142,100,25,63,48,11,43]
1 scrub errors
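
For reference, I assume the set of inconsistent PGs can also be confirmed 
with rados list-inconsistent-pg (a sketch; <poolname> stands for the name 
of pool 5 here):

[root@osd008 ~]# rados list-inconsistent-pg <poolname>

which should list 5.144 if this is the only affected PG.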

In the log files, it looks like one shard is missing:

/var/log/ceph/ceph-osd.81.log.2.gz:2017-10-02 23:49:11.940624 
7f0a9d7e2700 -1 log_channel(cluster) log [ERR] : 5.144s0 shard 63(7) 
missing 5:2297a2e1:::10014e2d8d5.00000000:head
/var/log/ceph/ceph-osd.81.log.2.gz:2017-10-03 00:48:06.681941 
7f0a9d7e2700 -1 log_channel(cluster) log [ERR] : 5.144s0 deep-scrub 1 
missing, 0 inconsistent objects
/var/log/ceph/ceph-osd.81.log.2.gz:2017-10-03 00:48:06.681947 
7f0a9d7e2700 -1 log_channel(cluster) log [ERR] : 5.144 deep-scrub 1 errors
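
For what it's worth, a sketch of how I'd expect to dump the details of 
that scrub error, using the PG id from the health output above:

[root@osd008 ~]# rados list-inconsistent-obj 5.144 --format=json-pretty

which, as far as I understand, should report the missing-shard error for 
object 10014e2d8d5.00000000 and which OSD/shard is affected.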

I tried running ceph pg repair on the PG, but nothing changed. I also 
tried starting a new deep-scrub on osd.81 (ceph osd deep-scrub 81), but I 
don't see any deep-scrub starting on that OSD.
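
For completeness, a minimal sketch of the PG-level variants of those 
commands that I could try instead of the OSD-level one (assuming they 
behave the same on EC pools):

[root@osd008 ~]# ceph pg deep-scrub 5.144
[root@osd008 ~]# ceph pg repair 5.144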

How can we solve this?

Thank you!


Kenneth
