[ceph-users] Adding a monitor freezes the cluster

Bishoy Mikhael b.s.mikhael at gmail.com
Mon Nov 13 10:35:11 PST 2017

Hi All,

I've tried adding 2 monitors to a 3-node cluster with 1 monitor, 1 MGR and
1 MDS.
The cluster was in a clean state when it had just 1 monitor.

# ceph status
  cluster:
    id:     46a122a0-8670-4935-b644-399e744c1c03
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum lingcod
    mgr: lingcod(active)
    mds: NIO-1/1/1 up  {0=lingcod=up:active}
    osd: 18 osds: 18 up, 18 in

  data:
    pools:   4 pools, 1700 pgs
    objects: 77489 objects, 301 GB
    usage:   906 GB used, 112 TB / 113 TB avail
    pgs:     1700 active+clean

I did the following on the second node in the cluster to add a monitor, but
things went wrong, and now the cluster is frozen; I can't even query the
cluster status.

From the node I wanted to add as a monitor, I issued the following:

# scp -p ${initial_monitor_ip}:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

# ceph-authtool --create-keyring /etc/ceph/${cluster_name}.mon.keyring
--gen-key -n mon. --cap mon 'allow *'

# ceph-authtool /etc/ceph/${cluster_name}.mon.keyring --import-keyring
/etc/ceph/ceph.client.admin.keyring

# ceph auth caps client.admin osd 'allow *' mds 'allow *' mon 'allow *' mgr
'allow *'

# monmaptool --create --add ${hostname} ${ip_address} --fsid ${uuid}
/etc/ceph/monmap

# mkdir /var/lib/ceph/mon/${cluster_name}-${hostname}

# chown -R ceph:ceph /var/lib/ceph/mon/${cluster_name}-${hostname}

# chmod +r /etc/ceph/${cluster_name}.mon.keyring

# chmod +r /etc/ceph/monmap

# sudo -u ceph ceph-mon --cluster ${cluster_name} --mkfs -i ${hostname}
--monmap /etc/ceph/monmap --keyring /etc/ceph/${cluster_name}.mon.keyring
--fsid ${uuid}

# touch /var/lib/ceph/mon/${cluster_name}-${hostname}/done

# systemctl start ceph-mon@${hostname}

# ceph daemon mon.taulog add_bootstrap_peer_hint lingcod

Then, when I found that the cluster was still reporting 1 monitor, I issued
the following command on the first monitor node:

# ceph mon add taulog ${taulog_IP}
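My suspicion is that this last step is what froze things: once taulog was
added to the monmap, quorum required 2 of 2 monitors, and since taulog never
joined, lingcod lost quorum and the cluster stopped answering. If that's the
case, I assume the recovery would be to remove taulog from the monmap
offline, roughly like the following (untested sketch; the mon ID "lingcod"
and the scratch path /tmp/monmap are my assumptions):

```shell
# Stop the mon daemons first (run each on its own node)
systemctl stop ceph-mon@taulog    # on taulog
systemctl stop ceph-mon@lingcod   # on lingcod

# On lingcod: extract the current monmap from the mon store
ceph-mon -i lingcod --extract-monmap /tmp/monmap

# Remove the monitor that never joined quorum
monmaptool /tmp/monmap --rm taulog

# Inject the corrected map back and restart the surviving mon
ceph-mon -i lingcod --inject-monmap /tmp/monmap
systemctl start ceph-mon@lingcod
```

Does that sound right, or is there a safer way to get the cluster answering
again before I retry adding the extra monitors?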

