[ceph-users] who is using nfs-ganesha and cephfs?
lincolnb at uchicago.edu
Wed Nov 8 14:42:32 PST 2017
We have been running the Ganesha FSAL for a while (as far back as Hammer / Ganesha 2.2.0), primarily for uid/gid squashing.
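For context, our squashing setup is just a plain EXPORT block in ganesha.conf with the CEPH FSAL plus a Squash directive — roughly like the sketch below (the paths and the anonymous uid/gid are made-up values for illustration, not our actual config):

```
EXPORT
{
    Export_ID = 1;
    Path = "/";                 # path within CephFS (illustrative)
    Pseudo = "/cephfs";         # NFSv4 pseudo-fs mount point (illustrative)
    Access_Type = RW;
    Squash = All_Squash;        # map all client uids/gids to the anonymous ones
    Anonymous_Uid = 1000;       # illustrative values
    Anonymous_Gid = 1000;
    FSAL {
        Name = CEPH;            # use the CephFS FSAL
    }
}
```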
Things are basically OK for our application, but we've seen the following weirdness*:
- Sometimes there are duplicated entries when directories are listed. Same filename, same inode, just shows up twice in 'ls'.
- There can be considerable latency between a new file being added to CephFS and that file becoming visible on our NFS clients. I understand this might be related to dentry caching.
- Occasionally, the Ganesha FSAL seems to max out at 100,000 caps claimed, which are not released until the MDS is restarted.
*note: these issues are with Ganesha 2.2.0 and Hammer/Jewel, and may have since been fixed upstream.
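On the caps point: if I remember right, Ganesha 2.x has an inode cache high-water mark (Entries_HWMark in the CACHEINODE block) that defaults to 100000, which would line up suspiciously well with the ~100k caps ceiling we see. A hedged sketch of tuning it (the value 50000 is arbitrary, just for illustration — I haven't verified this fixes the cap pinning):

```
CACHEINODE
{
    # Ganesha 2.x inode cache high-water mark. The default of 100000
    # matches the ~100k caps we see pinned; each cached inode can hold
    # a CephFS cap, so shrinking the cache should shrink the cap count.
    Entries_HWMark = 50000;     # illustrative value
}
```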
(We've recently updated to Luminous / Ganesha 2.5.2, and will be happy to complain if any issues show up :))
> On Nov 8, 2017, at 3:41 PM, Sage Weil <sweil at redhat.com> wrote:
> Who is running nfs-ganesha's FSAL to export CephFS? What has your
> experience been?
> (We are working on building proper testing and support for this into
> Mimic, but the ganesha FSAL has been around for years.)