[ceph-users] How to just delete PGs stuck incomplete on EC pool

jesper at krogh.cc jesper at krogh.cc
Sat Mar 2 06:00:47 PST 2019

Did they break, or did something go wrong trying to replace them?


Sent from myMail for iOS

Saturday, 2 March 2019, 14.34 +0100 from Daniel K  <sathackr at gmail.com>:
>I bought the wrong drives trying to be cheap. They were 2TB WD Blue 5400rpm 2.5 inch laptop drives.
>They've been replaced now with HGST 10K 1.8TB SAS drives.
>On Sat, Mar 2, 2019, 12:04 AM  < jesper at krogh.cc > wrote:
>>Saturday, 2 March 2019, 04.20 +0100 from  sathackr at gmail.com < sathackr at gmail.com >:
>>>56 OSD, 6-node 12.2.5 cluster on Proxmox
>>>We had multiple drives fail(about 30%) within a few days of each other, likely faster than the cluster could recover.
>>How did so many drives break?
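For context on why losing roughly 30% of the drives faster than recovery can keep up leaves EC-pool PGs incomplete: an erasure-coded object is split into k data shards plus m coding shards, and any k of the k+m shards suffice to reconstruct it; once fewer than k shards of a PG survive, that PG is incomplete. A minimal sketch of this arithmetic, assuming a hypothetical k=4, m=2 profile (the thread does not state the actual profile):

```python
# Illustration only, not from the thread: classify an EC PG by how many
# of its k+m shards survive. k=4, m=2 is an assumed example profile.

def pg_state(k: int, m: int, surviving_shards: int) -> str:
    """Rough classification of an erasure-coded PG.

    Any k of the k+m shards can reconstruct the data, so the PG is
    recoverable while at least k shards survive and lost otherwise.
    """
    total = k + m
    if not 0 <= surviving_shards <= total:
        raise ValueError("surviving shards must be between 0 and k+m")
    if surviving_shards == total:
        return "clean"
    if surviving_shards >= k:
        return "degraded (recoverable)"
    return "incomplete (cannot reconstruct without the lost shards)"

# With k=4, m=2 a PG tolerates losing at most m=2 of its 6 shards:
print(pg_state(4, 2, 6))  # clean
print(pg_state(4, 2, 4))  # degraded (recoverable)
print(pg_state(4, 2, 3))  # incomplete (cannot reconstruct without the lost shards)
```

With six nodes and ~30% of OSDs failing within days, many PGs can lose more than m shards before backfill finishes, which matches the "faster than the cluster could recover" description above.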
