[ceph-users] rbd mount unmap network outage

David Turner drakonstein at gmail.com
Thu Nov 30 05:54:34 PST 2017


This doesn't answer your question, but maybe it nudges you in a different
direction. CephFS seems like the much better solution for what you're
doing. You linked a five-year-old blog post; CephFS was not a stable
technology at the time, but it is now an excellent way to share a network
filesystem with multiple clients. There are even methods to export it over
NFS, although I'd personally set the clients up to mount it directly using
ceph-fuse.
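
For example, a direct client mount with ceph-fuse is a one-liner, and
re-exporting such a mount over the kernel NFS server is one of those
methods (mon address, mount point, and export options below are
placeholders for your environment):

    # mount CephFS via ceph-fuse
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs

    # /etc/exports entry to re-export the mount over NFS; an explicit
    # fsid is needed because a FUSE mount has no stable device number
    # for exportfs to derive one from
    /mnt/cephfs  *(rw,sync,no_subtree_check,fsid=20)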

On Thu, Nov 30, 2017, 2:34 AM Hauke Homburg <hhomburg at w3-creative.de> wrote:

> Hello,
>
> Currently I am working on an NFS HA cluster to export RBD images over NFS.
> To test the failover I tried the following:
>
> https://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
>
> I enabled exclusive-lock on the RBD image and set the OSD and MON
> timeouts to 20 seconds.
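>
> Roughly, that setup looked like the following (pool and image name are
> only examples, and the map options shown are just one way to set such
> timeouts, not necessarily the exact knobs):
>
>   # enable exclusive-lock on the existing image
>   rbd feature enable rbd/nfsshare exclusive-lock
>
>   # map it with shorter libceph timeouts than the defaults
>   rbd map rbd/nfsshare -o mount_timeout=20,osdkeepalive=20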
>
> On one NFS server I mapped the RBD image with rbd map as above. After
> mapping, I blocked the TCP ports with iptables to simulate a network
> outage (ports tcp 6789 and 6800:7300).
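>
> The blocking rules looked roughly like this (outgoing traffic to the MON
> and OSD ports; adjust chain and direction as needed):
>
>   iptables -A OUTPUT -p tcp --dport 6789 -j DROP
>   iptables -A OUTPUT -p tcp --dport 6800:7300 -j DROP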
>
> With rbd status I can see that the watchers on the cluster side are gone
> after the timeout.
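>
> (Checked with something along the lines of
>
>   rbd status rbd/nfsshare
>
> which lists the current watchers of the image; the image name is again
> only an example.)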
>
> The NFS server reports "encountered watch error -110" (ETIMEDOUT).
>
> libceph on the NFS server then tries to connect to another MON.
>
> When all this happens, I cannot unmap the image.
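>
> (A plain "rbd unmap /dev/rbdX" gets nowhere. Newer kernels and rbd
> versions apparently have a forced variant along the lines of
>
>   rbd unmap -o force /dev/rbdX
>
> but I have not verified whether that works on the Debian 9 kernel.)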
>
> The Ceph cluster is 10.2.10 on CentOS 7; the NFS server runs Debian 9.
> The Pacemaker RA is ceph-resource-agents 10.2.10.
>
> My idea is to unmap the image as soon as the network outage happens,
> because of the failover, and because I don't want the RBD image mapped on
> two servers at once after the outage is resolved, to prevent data damage.
>
> Thanks for your help
>
> Hauke
>
>
>
> --
> www.w3-creative.de
>
> www.westchat.de
>
> https://friendica.westchat.de/profile/hauke
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

