[ceph-users] Libvirt hosts freeze after ceph osd+mon problem

Jan Pekař - Imatic jan.pekar at imatic.cz
Wed Nov 8 11:59:51 PST 2017


You were right, it was frozen at the virtual machine level.
The panic kernel parameter worked, so the server resumed with a reboot.
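
In concrete terms, that means something like this in the guest, via sysctl or the kernel command line (the values here are illustrative, not a tested recommendation):

kernel.panic = 10                    # reboot 10 seconds after a kernel panic
kernel.hung_task_panic = 1           # turn hung-task detection into a panic
kernel.hung_task_timeout_secs = 120  # how long a task may block before it counts as hung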

But no panic was displayed on the VNC console, even though I was logged in.

The main problem is that a simultaneous silent failure of a MON and an
OSD causes a much longer recovery from that state.

In my case:

at approx. 18:38:11 I paused the MON+OSD
at 18:38:17 I got the first heartbeat_check: no reply from
at 18:38:30 I got libceph: mon1 [X]:6789 session lost, hunting for new mon
at 18:38:30 I got libceph: mon2 [X]:6789 session established
at 18:39:05 imatic-hydra01 kernel: [2384345.121219] libceph: osd6 down

So in my case it took 54 seconds to resume IO and recover. Is that
normal and expected?
I think the recovery takes that long because the MON hunt was running
during the OSD failure and another monitor won the election, so the
timeouts for kicking the OSD out started again from the very beginning.
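
The timestamps above seem consistent with that (simple arithmetic from
my log, assuming the stock 20 second osd heartbeat grace):

MON failover:    18:38:30 - 18:38:11 = 19 s
OSD marked down: 18:39:05 - 18:38:30 = 35 s (well past a 20 s grace, as if the clock restarted)
total IO outage: 19 s + 35 s = 54 s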

When considering timeouts, everybody must count on MON recovery timeout
+ OSD recovery timeout as the worst-case scenario for an IO outage. Even
when they are hosted on different machines, they can fail at the same time.

Do you have any recommendations for reliable heartbeat and other
settings, so that virtual machines with ext4, xfs and NTFS stay safe?
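
For context, these are the knobs I believe are involved, with what I
understand to be the stock defaults in luminous (please correct me if I
have them wrong):

[osd]
osd heartbeat interval = 6       # how often an OSD pings its peers
osd heartbeat grace = 20         # seconds without a reply before a peer is reported down

[mon]
mon osd min down reporters = 2   # distinct reporters needed before the MON marks an OSD down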

Thank you
With regards
Jan Pekar




On 7.11.2017 00:30, Jason Dillaman wrote:
> If you could install the debug packages and get a gdb backtrace from all 
> threads it would be helpful. librbd doesn't utilize any QEMU threads so 
> even if librbd was deadlocked, the worst case that I would expect would 
> be your guest OS complaining about hung kernel tasks related to disk IO 
> (since the disk wouldn't be responding).
> 
> On Mon, Nov 6, 2017 at 6:02 PM, Jan Pekař - Imatic <jan.pekar at imatic.cz> wrote:
> 
>     Hi,
> 
>     I'm using Debian stretch with ceph 12.2.1-1~bpo80+1 and qemu
>     1:2.8+dfsg-6+deb9u3.
>     I'm running 3 nodes with 3 monitors and 8 OSDs, all on IPv6.
> 
>     When I tested the cluster, I detected a strange and severe problem.
>     On the first node I'm running qemu guests with a librados disk
>     connection to the cluster, with all 3 monitors listed in the
>     connection. On the second node I stopped the mon and osd with the
>     command
> 
>     kill -STOP MONPID OSDPID
> 
>     Within one minute all my qemu guests on the first node froze, to the
>     point that they don't even respond to ping. On the VNC screen there
>     is no error (no disk error or kernel panic); they just hang forever
>     with no console response. Even starting the MON and OSD again on the
>     stopped host doesn't bring them back. Destroying the qemu domain and
>     starting it again is the only solution.
> 
>     This happens even when the virtual machine has all its primary OSDs
>     on OSDs other than the one I stopped - so it is not writing primarily
>     to the stopped OSD.
> 
>     If I stop only the OSD and the MON keeps running, or stop only the
>     MON and the OSD keeps running, everything looks OK.
> 
>     When I stop the MON and the OSD, I can see osd.0 1300
>     heartbeat_check: no reply from ... in the log, as usual when an OSD
>     fails. During this the virtual machines are still running, but after
>     that they all stop.
> 
>     What should I send you to help debug this problem? Without fixing
>     this, ceph is not reliable for me.
> 
>     Thank you
>     With regards
>     Jan Pekar
>     Imatic
>     _______________________________________________
>     ceph-users mailing list
>     ceph-users at lists.ceph.com
>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> 
> -- 
> Jason

-- 
============
Ing. Jan Pekař
jan.pekar at imatic.cz | +420603811737
----
Imatic | Jagellonská 14 | Praha 3 | 130 00
http://www.imatic.cz
============

