[ceph-users] CephFS: effects of using hard links

Gregory Farnum gfarnum at redhat.com
Thu Mar 21 00:51:22 PDT 2019


On Wed, Mar 20, 2019 at 6:06 PM Dan van der Ster <dan at vanderster.com> wrote:

> On Tue, Mar 19, 2019 at 9:43 AM Erwin Bogaard <erwin.bogaard at gmail.com>
> wrote:
> >
> > Hi,
> >
> >
> >
> > For a number of applications we use, there is a lot of file duplication.
> > This wastes precious storage space, which I would like to avoid.
> >
> > When using a local disk, I can use a hard link to let all duplicate
> > files point to the same inode (use “rdfind”, for example).
> >
> >
> >
> > As there isn’t any deduplication in Ceph(FS), I’m wondering if I can use
> > hard links on CephFS the same way I do on ‘regular’ file systems like
> > ext4 and xfs.
> >
> > 1. Is it advisable to use hard links on CephFS? (It isn’t in the ‘best
> > practices’: http://docs.ceph.com/docs/master/cephfs/app-best-practices/)
> >
> > 2. Is there any performance (dis)advantage?
> >
> > 3. When using hard links, are there actual space savings, or is there
> > some trickery happening?
> >
> > 4. Are there any issues (other than the regular hard-link gotchas) I
> > need to keep in mind when combining hard links with CephFS?
>
> The only issue we've seen is that if you hard link `b` to `a`, then rm
> `a`, and then never stat `b`, the inode is moved to the "stray"
> directory. By default there is a limit of 1 million stray entries, so if
> you accumulate files in this state, users will eventually be unable to
> rm any files until you stat the `b` files.
>
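Dan's workaround amounts to re-stat'ing the surviving links so the MDS can
reintegrate the stray inodes. A minimal sketch of that in Python, assuming a
CephFS mount at the hypothetical path /mnt/cephfs/apps; it simply lstat()s
every file under the tree, which is the blunt version of "stat the `b` files":

    #!/usr/bin/env python3
    """Blunt remedy for the stray-inode scenario described above: lstat()
    every file under a CephFS tree, so an inode whose other link was removed
    gets looked up again and can leave the stray directory.
    Minimal sketch, not an official Ceph tool; the path is a placeholder."""
    import os

    ROOT = "/mnt/cephfs/apps"  # hypothetical path; point it at your CephFS tree

    count = 0
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            try:
                # The lookup itself is the point; we discard the result.
                os.lstat(os.path.join(dirpath, name))
                count += 1
            except OSError:
                continue  # vanished or unreadable; skip it

    print(f"stat'd {count} files under {ROOT}")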

Eek. Do you know if we have any tickets about that issue? It's easy to see
how that happens, but it definitely isn't a good user experience!
-Greg
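
To Erwin's question 3 above: on CephFS, as on ext4/xfs, a hard link is just
an extra directory entry pointing at the same inode, so the file data is
stored once and the space savings are real. A quick self-contained check,
assuming a writable test directory on a CephFS mount (the path below is a
hypothetical placeholder):

    #!/usr/bin/env python3
    """Verify that two hard-linked names share one inode (and hence one copy
    of the data). Minimal sketch; the directory is a placeholder and should
    point somewhere on your CephFS mount."""
    import os

    testdir = "/mnt/cephfs/hardlink-test"  # hypothetical location
    os.makedirs(testdir, exist_ok=True)

    a = os.path.join(testdir, "a")
    b = os.path.join(testdir, "b")
    for p in (a, b):
        try:
            os.unlink(p)  # clean up from any previous run
        except FileNotFoundError:
            pass

    with open(a, "wb") as f:
        f.write(b"x" * 4096)

    os.link(a, b)  # create the hard link, much as rdfind would

    sa, sb = os.stat(a), os.stat(b)
    assert sa.st_ino == sb.st_ino            # same inode -> data stored once
    assert sa.st_nlink == sb.st_nlink == 2   # both names are links to it
    print(f"inode {sa.st_ino}, link count {sa.st_nlink}")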


>
> -- dan
>
> >
> >
> >
> > Thanks
> >
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>