[ceph-users] Luminous cluster stuck when adding monitor

Gregory Farnum gfarnum at redhat.com
Wed Oct 4 11:30:49 PDT 2017


You'll need to change the config so that the monitor is running "debug mon =
20" for the log to be very useful here. It does say that it's dropping client
connections because it's been out of quorum for too long, which is the
correct behavior in general. My guess is that your clients are trying to
connect to the new monitor instead of the ones already in the quorum and
aren't failing over to the others correctly; this is all configurable.
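For reference, the debug level can be set persistently in ceph.conf or injected at runtime through the local admin socket; the latter does not go through the cluster, so it works even while the mon is out of quorum. A minimal sketch (the daemon name mon.server1 is taken from the thread below; adjust to your setup):

```
# Option 1: persist in /etc/ceph/ceph.conf (requires a mon restart):
#   [mon]
#   debug mon = 20

# Option 2: inject at runtime via the admin socket on the mon host itself:
ceph daemon mon.server1 config set debug_mon 20
```

Run the foreground mon again afterwards and the log should show the full election/probe traffic.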

On Wed, Oct 4, 2017 at 4:09 AM Nico Schottelius <
nico.schottelius at ungleich.ch> wrote:

>
> Good morning,
>
> we recently upgraded our Kraken cluster to Luminous and have since
> noticed an odd behaviour: we cannot add a monitor any more.
>
> As soon as we start a new monitor (server2), ceph -s and ceph -w start to
> hang.
>
> The situation has become worse since one of our staff stopped an existing
> monitor (server1): restarting that monitor results in the same
> situation, and ceph -s hangs until we stop the monitor again.
>
> We kept the monitor running for some minutes, but the situation never
> cleared up.
>
> The network does not have any firewall in between the nodes and there
> are no host firewalls.
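[Editorial note: a quick way to double-check the no-firewall claim is to probe the monitor port directly from each node. A minimal sketch, assuming the host names from this thread and the default pre-msgr2 mon port 6789; substitute your own:]

```python
import socket

def check_mon_port(host: str, port: int = 6789, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures alike.
        return False

# Probe every monitor from every other node, e.g.:
# for mon in ("server1", "server2"):
#     print(mon, check_mon_port(mon))
```

If this succeeds in both directions but the mon still cannot join, the problem is above TCP (e.g. election/probe traffic), not the network.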
>
> I have attached the output of the monitor on server1, running in
> foreground using
>
> root at server1:~# ceph-mon -i server1 --pid-file
> /var/lib/ceph/run/mon.server1.pid -c /etc/ceph/ceph.conf --cluster ceph
> --setuser ceph --setgroup ceph -d 2>&1 | tee cephmonlog
>
> Does anyone see any obvious problem in the attached log?
>
> Any input or hint would be appreciated!
>
> Best,
>
> Nico
>
>
>
> --
> Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

