[ceph-users] How you handle failing/slow disks?
paul.emmerich at croit.io
Wed Nov 21 08:26:20 PST 2018
Yeah, we have also observed problems with HP RAID controllers misbehaving
when a single disk starts to fail. We would not recommend building a
Ceph cluster on HP RAID controllers until that issue is fixed.
There are several mechanisms in Ceph that detect dead disks: there are
timeouts for OSDs heartbeating each other, and there is a timeout for
OSDs checking in with the mons. But that's usually not enough in this
scenario. The good news is that recent Ceph versions will show which
OSDs are implicated in slow requests (check "ceph health detail"),
which at least gives you some way to figure out which OSDs are
becoming slow.
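As a rough illustration (a sketch, not tied to any particular setup:
osd.0 stands in for any OSD, and the option names below are the ones
I'd expect on Luminous/Mimic):

  ceph health detail | grep -i slow
  ceph daemon osd.0 config get osd_heartbeat_grace
  ceph daemon osd.0 config get mon_osd_report_timeout

The first command lists the OSDs implicated in slow requests; the two
config options are the peer-heartbeat and mon check-in timeouts
mentioned above.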
We have found it useful to monitor the op_*_latency values of all
OSDs (especially the subop latencies) via the admin socket to detect
such failures earlier.
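For example, a minimal sketch (assuming jq is available and the
command runs on the OSD's host; osd.0 is a placeholder, and the exact
counter names vary a bit between releases):

  ceph daemon osd.0 perf dump | \
    jq '.osd | {op_r_latency, op_w_latency, subop_latency, subop_w_latency}'

Each counter is a sum/avgcount pair (newer releases also report an
avgtime), so graph the rate of change: a latency that suddenly climbs
on one OSD while its neighbours stay flat usually points at the disk
or controller behind that OSD.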
Looking for help with your Ceph cluster? Contact us at https://croit.io
Tel: +49 89 1896585 90
On Wed, Nov 21, 2018 at 16:22, Arvydas Opulskis
<zebediejus at gmail.com> wrote:
> Hi all,
> It's not the first time we have had this kind of problem, usually with HP RAID controllers:
> 1. One disk starts failing, putting the whole controller into a slow state where its performance degrades dramatically.
> 2. Some OSDs are reported as down by other OSDs and get marked down.
> 3. At the same time, other OSDs on the same node are not detected as failed and keep participating in the cluster. I think this is because the OSD is not aware of backend disk problems and still answers health checks.
> 4. Because of this, requests to PGs on the problematic node become "slow" and later "stuck".
> 5. The cluster struggles and client operations are not served, so the cluster ends up in a kind of "locked" state.
> 6. We have to mark the OSDs down manually (or stop the problematic daemons) before the cluster starts to recover and process requests again.
> Is there any mechanism in Ceph that monitors OSDs implicated in slow requests and marks them down after some kind of threshold?