[ceph-users] cephfs nfs-ganesha rados_cluster

Steven Vacaroaia stef97 at gmail.com
Thu Nov 15 05:36:56 PST 2018


Thanks, Jeff, for taking the trouble to respond and for your willingness to help.

Here are some questions:

- Apparently rados_cluster is gone in 2.8. There are "fs" and "fs_ng" now.
  However, I was not able to find a config depicting their usage.
  Would you be able to share your working one?

- how would one interpret the output of ganesha-rados-grace?
  (what do the NE and E flags mean, and what actions should one take when
  they appear?)

- how would one check whether active/active is working properly (i.e. both
  NFS servers are being used)?
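For reference, the rados_cluster bits in the sample config shipped with
nfs-ganesha look roughly like the sketch below. The pool, namespace, userid
and nodeid values are placeholders I made up, not a verified working config:

```
# Sketch of a rados_cluster recovery setup, based on the nfs-ganesha
# sample config. Pool "nfs-ganesha", namespace "grace" and nodeid "a"
# are placeholder values; nodeid must be unique per ganesha head.
NFSv4 {
        RecoveryBackend = rados_cluster;
        Minor_Versions = 1, 2;
}

RADOS_KV {
        ceph_conf = "/etc/ceph/ceph.conf";
        userid = "admin";
        pool = "nfs-ganesha";
        namespace = "grace";
        nodeid = "a";
}
```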
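For anyone following along, the shared grace database can be inspected with
the dump subcommand described in the ganesha-rados-grace manpage. The pool
and namespace below are placeholders and must match the RADOS_KV settings
in ganesha.conf:

```shell
# Inspect the shared grace db; --pool/--ns must match the RADOS_KV
# block in ganesha.conf (placeholder values shown).
ganesha-rados-grace --pool nfs-ganesha --ns grace dump
```

If I read the manpage correctly, the per-node flags in that output are
N (the node still NEEDs a grace period) and E (the node is ENFORCING one),
so "NE" is a node doing both.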

I was able to get active/passive working using rados_ng and Pacemaker.
Is there anything Pacemaker-specific that has to be done to get
active/active working (assuming, of course, that ganesha is configured
properly)?
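As a crude, generic check that both heads are actually serving traffic
(nothing ganesha-specific, just standard Linux tooling): mount through each
server's address from different clients, then look for established
connections to the NFS port on each node:

```shell
# Run on each ganesha node: list established TCP connections to the
# NFS port (2049). Each active head should show its own clients.
ss -tn state established '( sport = :2049 )'
```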

many thanks
Steven

On Thu, 15 Nov 2018 at 06:53, Jeff Layton <jlayton at redhat.com> wrote:

> > Hi,
> >
> > I've been trying to set up an active/active (or even active/passive) NFS
> > share for a while, without any success
> >
> > Using Mimic 13.2.2 and nfs-ganesha 2.8 with rados_cluster as recovery
> mechanism
> >
> > I focused on corosync/pacemaker as a HA controlling software but I would
> not mind using anything else
> >
> > Has anyone managed to get this working?
> > If yes, could you please provide some details / instructions / resources
> > / configuration?
>
> I've gotten it working, but I wrote most of the code so that shouldn't
> be too surprising. The docs are still pretty sketchy at this point, but
> most of the info is distilled into the sample config file and the
> ganesha-rados-grace manpage.
>
> Writing a real howto is on my to-do list but I'm not sure when I'll get
> to it. If you have specific questions, I'm happy to try and answer them
> though.
>
> --
> Jeff Layton <jlayton at redhat.com>
>
>

