[ceph-users] CephFS file contains garbage zero padding after an unclean cluster shutdown

Paul Emmerich paul.emmerich at croit.io
Sun Nov 25 07:16:29 PST 2018

Maybe rsync called fallocate() on the file?
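If that's what happened, the zeroes are easy to explain. Here's a minimal
sketch of the failure mode (not rsync's actual code; the path and size below
are made up, and as far as I know rsync only preallocates when run with
--preallocate):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    off_t total_size = (off_t)1 << 30;   /* pretend the source is 1 GiB */
    int fd = open("destfile", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Reserve the whole file up front: the file size jumps to
     * total_size even though no data has been written yet. */
    if (fallocate(fd, 0, 0, total_size) != 0) {
        perror("fallocate");
        return 1;
    }

    /* ... the copy loop then writes data sequentially here. A crash
     * partway through leaves everything past the last completed write
     * as an unwritten, zero-reading region of the file. */

    close(fd);
    return 0;
}

In that case the tail of zeroes isn't data the OSDs lost, just space that
was reserved but never written before the crash.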


Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
Tel: +49 89 1896585 90

On Fri, 23 Nov 2018 at 16:55, Hector Martin
<hector at marcansoft.com> wrote:
> Background: I'm running single-node Ceph with CephFS as an experimental
> replacement for "traditional" filesystems. In this case I have 11 OSDs,
> 1 mon, and 1 MDS.
>
> I just had an unclean shutdown (kernel panic) while a large (>1TB) file
> was being copied to CephFS (via rsync). Upon bringing the system back
> up, I noticed that the (incomplete) file has about 320MB worth of zeroes
> at the end.
>
> This is the kind of behavior I would expect of traditional local
> filesystems, where file metadata is updated to reflect the new size of
> a growing file before disk extents are allocated and filled with data,
> so an unclean shutdown results in files with tails of zeroes. But I'm
> surprised to see it with Ceph. I expected the OSD side of things to be
> atomic, with all the BlueStore goodness, checksums, etc., and I figured
> CephFS would build upon those primitives in a way that makes this kind
> of inconsistency impossible.
>
> Is this expected behavior? It's not a huge dealbreaker, but I'd like to
> understand how this kind of situation can happen in CephFS (and how it
> could affect a proper cluster, if at all - can this happen if e.g. a
> client, or an MDS, or an OSD dies uncleanly? Or only if several things
> go down at once?)
> --
> Hector Martin (hector at marcansoft.com)
> Public Key: https://mrcn.st/pub
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
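For what it's worth, one way to check which of these happened to that file
is to ask the filesystem whether the zero tail is a hole/unwritten region or
real written data. A rough sketch (assuming the client reports holes via
SEEK_HOLE at all, which I haven't verified for CephFS; pass the copied file
as the argument):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t size = lseek(fd, 0, SEEK_END);    /* logical file size */
    off_t hole = lseek(fd, 0, SEEK_HOLE);   /* first hole at or after offset 0 */

    printf("size: %lld bytes, first hole at: %lld\n",
           (long long)size, (long long)hole);

    /* If the first hole starts well before the logical size, the tail was
     * never written (size/preallocation ran ahead of the data). If
     * hole == size, the zeroes were really written out, or the filesystem
     * simply doesn't report holes. */
    close(fd);
    return 0;
}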
