[ceph-users] CephFS and many small files

Patrick Donnelly pdonnell at redhat.com
Fri Mar 29 10:32:55 PDT 2019


Hi Jörn,

On Fri, Mar 29, 2019 at 5:20 AM Clausen, Jörn <jclausen at geomar.de> wrote:
>
> Hi!
>
> In my ongoing quest to wrap my head around Ceph, I created a CephFS
> (data and metadata pool with replicated size 3, 128 pgs each).

What version?

> When I
> mount it on my test client, I see a usable space of ~500 GB, which I
> guess is okay for the raw capacity of 1.6 TiB I have in my OSDs.
>
> I run bonnie with
>
> -s 0G -n 20480:1k:1:8192
>
> i.e. I should end up with ~20 million files, each file 1k in size
> maximum. After about 8 million files (about 4.7 GBytes of actual use),
> my cluster runs out of space.

Meaning, you got ENOSPC?
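
If so, it would help to check whether any OSDs hit the full ratio at
that point. Something like this (standard commands on any recent
release) would show it:

$ ceph health detail   # look for full/nearfull OSD warnings
$ ceph df              # per-pool usage vs. available space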

> Is there something like a "block size" in CephFS? I've read
>
> http://docs.ceph.com/docs/master/cephfs/file-layouts/
>
> and thought maybe object_size is something I can tune, but I only get
>
> $ setfattr -n ceph.dir.layout.object_size -v 524288 bonnie
> setfattr: bonnie: Invalid argument

You can only set a layout on an empty directory, which is why the
setfattr above fails with "Invalid argument". In any case, the layout
is not likely to be the cause of the ENOSPC here.
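
If you do want to experiment with layouts, set the attributes on a
fresh, still-empty directory. A minimal sketch (the directory name is
just an example; stripe_unit is lowered first because object_size must
be a multiple of it):

$ mkdir bonnie-small
$ setfattr -n ceph.dir.layout.stripe_unit -v 524288 bonnie-small
$ setfattr -n ceph.dir.layout.object_size -v 524288 bonnie-small
$ getfattr -n ceph.dir.layout bonnie-small   # verify the layout took

Files created under that directory afterwards inherit the layout;
existing files keep the layout they were created with.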

> Is this even the right approach? Or are "CephFS" and "many small files"
> such opposing concepts that it is simply not worth the effort?

You should not have had issues growing to that number of files. Please
post more information about your cluster, including any configuration
changes and the output of `ceph osd df`.
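
For example:

$ ceph -s           # overall health and capacity summary
$ ceph osd df       # per-OSD utilization, weight, and PG count
$ ceph df detail    # per-pool usage, including object counts

The per-OSD view in particular will show whether a single OSD filled up
well before the rest of the cluster.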

-- 
Patrick Donnelly

