[ceph-users] CephFS file contains garbage zero padding after an unclean cluster shutdown
hector at marcansoft.com
Sun Nov 25 12:30:43 PST 2018
On 26/11/2018 00.19, Paul Emmerich wrote:
> No, wait. Which system did kernel panic? Your CephFS client running rsync?
> In this case this would be expected behavior because rsync doesn't
> sync on every block and you lost your file system cache.
It was all on the same system. So is it expected behavior for the size
metadata to be updated non-atomically with respect to the file contents
when using the CephFS kernel client? That is, after appending data to a
file, the metadata in CephFS is updated to reflect the new size, while
the data itself remains in the client's page cache until those pages are
flushed?
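If that is the failure mode, the usual POSIX answer is for the writer to
fsync explicitly before relying on the data being on stable storage. A
minimal sketch (the function name is illustrative, not from rsync or
Ceph) of an append that forces both the data and the size metadata out
before returning:

```python
import os

def append_durably(path: str, data: bytes) -> None:
    """Append data and force it to stable storage before returning.

    On a CephFS kernel client this flushes the dirty pages out of the
    local page cache to the OSDs, so a client crash afterwards cannot
    leave the file with an updated size but unwritten (zero-filled)
    contents.
    """
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush data and metadata; fdatasync() for data only
    finally:
        os.close(fd)
```

Without the fsync, a tool like rsync that writes through the page cache
leaves a window where the new size is visible but the pages have not yet
been written back.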
Hector Martin (hector at marcansoft.com)
Public Key: https://mrcn.st/pub