[ceph-users] "failed to open ino"

Jens-U. Mozdzen jmozdzen at nde.ag
Tue Nov 28 03:37:26 PST 2017


Hi,

Quoting "Yan, Zheng" <ukernel at gmail.com>:
> On Sat, Nov 25, 2017 at 2:27 AM, Jens-U. Mozdzen <jmozdzen at nde.ag> wrote:
>> Hi all,
>> [...]
>> In the log of the active MDS, we currently see the following two inodes
>> reported over and over again, about every 30 seconds:
>>
>> --- cut here ---
>> 2017-11-24 18:24:16.496397 7fa308cf0700  0 mds.0.cache  failed to open ino
>> 0x10001e45e1d err -22/0
>> 2017-11-24 18:24:16.497037 7fa308cf0700  0 mds.0.cache  failed to open ino
>> 0x10001e4d6a1 err -22/-22
>> [...]
>> --- cut here ---
>>
>> There were other reported inodes with other errors, too ("err -5/0", for
>> instance), but the root cause seems to be the same (see below).
> [...]
> It's likely caused by the NFS export.  The MDS logs this error message
> when an NFS client tries to access a deleted file; the error causes the
> NFS client to return -ESTALE.

You were spot on: a process remained active after the test runs and had
that directory as its current working directory. Stopping the process
stopped the messages.
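For anyone hitting the same symptom: on Linux, a process whose working
directory has been deleted shows up with a "(deleted)" suffix on its
/proc/<pid>/cwd symlink. A minimal sketch to list such leftover processes
(assumes Linux procfs; not taken from this thread, just a generic diagnostic):

```shell
# List processes whose current working directory has been deleted.
# readlink on /proc/<pid>/cwd yields e.g. "/path/to/dir (deleted)".
for pid in /proc/[0-9]*; do
    cwd=$(readlink "$pid/cwd" 2>/dev/null) || continue
    case "$cwd" in
        *" (deleted)") echo "${pid#/proc/}: $cwd" ;;
    esac
done
```

Killing (or restarting) any process listed this way should make the
corresponding "failed to open ino" messages stop, as observed above.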

Thank you for pointing me there!

Best regards,
Jens
