[ceph-users] CephFS - large omap object

Gregory Farnum gfarnum at redhat.com
Mon Mar 18 20:44:11 PDT 2019


On Mon, Mar 18, 2019 at 7:28 PM Yan, Zheng <ukernel at gmail.com> wrote:
>
> On Mon, Mar 18, 2019 at 9:50 PM Dylan McCulloch <dmc at unimelb.edu.au> wrote:
> >
> >
> > >please run the following command. It will show where 4.00000000 is:
> > >
> > >rados -p hpcfs_metadata getxattr 4.00000000 parent >/tmp/parent
> > >ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
> > >
> >
> > $ ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
> > {
> >     "ino": 4,
> >     "ancestors": [
> >         {
> >             "dirino": 1,
> >             "dname": "lost+found",
> >             "version": 1
> >         }
> >     ],
> >     "pool": 20,
> >     "old_pools": []
> > }
> >
> > I guess it may contain a very large number of files from previous recovery operations?
> >
>
> Yes, these files are created by cephfs-data-scan. If you don't want
> them, you can delete "lost+found".
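[Editor's note: a hedged sketch of how one might confirm the size of the flagged object, reusing the pool and object names quoted above. This is an assumption about the reader's setup, not part of the original thread; run it against your own cluster and metadata pool.]

```shell
# Sketch (pool/object names taken from the commands above): list the omap
# keys on the metadata object that triggered the large-omap warning and
# count them. Each omap key on a CephFS directory object corresponds to
# one dentry, so the count approximates the number of entries in the
# directory fragment.
rados -p hpcfs_metadata listomapkeys 4.00000000 | wc -l
```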

This certainly makes sense, but even with that pointer I can't find
how it's picking inode 4. That should probably be documented? :)
-Greg
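
[Editor's note: a minimal sketch of acting on Zheng's suggestion above, assuming the filesystem is mounted on a client at /mnt/cephfs — the mount path is hypothetical and not from the thread.]

```shell
# Hypothetical mount path. "lost+found" at the CephFS root holds the
# files recreated by cephfs-data-scan; deleting it discards those
# recovered orphans, so verify nothing in it is needed first.
rm -rf /mnt/cephfs/lost+found
```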
