[ceph-users] "failed to open ino"
dcsysengineer at gmail.com
Mon Nov 27 04:59:15 PST 2017
We also see these messages quite frequently, mainly the "replicating
dir...". Only seen "failed to open ino" a few times so didn't do any real
investigation. Our setup is very similar to yours: 12.2.1, active/standby
MDS and exporting cephfs through KNFS (hoping to replace with Ganesha
soon). Interestingly, the paths reported in "replicating dir" are usually
dirs exported through Samba (generally Windows profile dirs). Samba runs
really well for us and there doesn't seem to be any impact on users. I
expect we wouldn't see these messages if running active/active MDS, but I'm
still a bit cautious about implementing that (am I being overly cautious?).
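For what it's worth, the client-side symptom of the deleted-file case Zheng describes below is ESTALE on open. A minimal sketch of how an application behind the NFS export could retry once on a stale handle (hypothetical helper, not a Ceph or KNFS API; reopening by path gets a fresh handle if the path still resolves):

```python
import errno

def read_with_estale_retry(path):
    # ESTALE means the NFS file handle refers to a file that no longer
    # exists on the server (e.g. deleted behind the client's back).
    # Reopening by path obtains a fresh handle if the path still resolves;
    # if the file is truly gone, this raises FileNotFoundError instead.
    try:
        with open(path, "rb") as f:
            return f.read()
    except OSError as e:
        if e.errno == errno.ESTALE:
            # one retry with a fresh open
            with open(path, "rb") as f:
                return f.read()
        raise
```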
On Mon, Nov 27, 2017 at 10:57 AM, Jens-U. Mozdzen <jmozdzen at nde.ag> wrote:
> Zitat von "Yan, Zheng" <ukernel at gmail.com>:
>> On Sat, Nov 25, 2017 at 2:27 AM, Jens-U. Mozdzen <jmozdzen at nde.ag> wrote:
>>> In the log of the active MDS, we currently see the following two inodes
>>> reported over and over again, about every 30 seconds:
>>> --- cut here ---
>>> 2017-11-24 18:24:16.496397 7fa308cf0700 0 mds.0.cache failed to open
>> It's likely caused by the NFS export. The MDS emits this error message
>> when an NFS client tries to access a deleted file; the error causes the
>> NFS client to return -ESTALE.
> thank you for pointing me at this potential cause - as we're still using
> NFS access during that job (old clients without native CephFS support), we
> may have some as-yet-unnoticed stale NFS file handles. I'll take a closer
> look, indeed!
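If it helps anyone triage which inodes recur every 30 seconds, here's a rough Python sketch for tallying "failed to open ino" lines in an MDS log. The inode in the sample data is hypothetical and the pattern is guessed from the truncated log snippet above, so adjust it to your actual log format:

```python
import re
from collections import Counter

def count_failed_inos(log_lines):
    # Assumed line shape (ino value is hypothetical), e.g.:
    #   ... mds.0.cache failed to open ino 0x10000af err -22/0
    pat = re.compile(r"failed to open ino (0x[0-9a-f]+)")
    # Tally how often each inode appears across the log
    return Counter(m.group(1)
                   for line in log_lines
                   if (m := pat.search(line)))
```

Feeding it the MDS log (e.g. `count_failed_inos(open("/var/log/ceph/ceph-mds.log"))`) should show whether it's the same one or two inodes repeating or a broader pattern.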