[ceph-users] CephFS file contains garbage zero padding after an unclean cluster shutdown

Hector Martin hector at marcansoft.com
Sun Nov 25 19:58:01 PST 2018


On 26/11/2018 11.05, Yan, Zheng wrote:
> On Mon, Nov 26, 2018 at 4:30 AM Hector Martin <hector at marcansoft.com> wrote:
>>
>> On 26/11/2018 00.19, Paul Emmerich wrote:
>>> No, wait. Which system did kernel panic? Your CephFS client running rsync?
>>> In this case this would be expected behavior because rsync doesn't
>>> sync on every block and you lost your file system cache.
>>
>> It was all on the same system. So is it expected behavior for size
>> metadata to be updated non-atomically with respect to file contents
>> being written when using the CephFS kernel client? I.e. after appending
>> data to the file, the metadata in CephFS is updated to reflect the new
>> size but the data remains in the page cache until those pages are flushed?
>>
> 
> Yes, it's expected behavior. We haven't implemented ordered writes
> (the data=ordered mount option of ext4).

Makes sense, thanks for confirming. Good to know it's a client-side issue,
then (I'd be more worried if this had been caused by the Ceph server-side
stack).
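
For the archives: since the kernel client can persist the new size before
the appended data leaves the page cache, the usual application-side
workaround is to fsync() after appending when trailing zeros after a
client crash are unacceptable. A minimal sketch (illustrative only, not
what rsync actually does; error handling trimmed):

    /* Append and flush so the written data reaches the OSDs before a
     * client crash can leave only the updated size behind. */
    #include <fcntl.h>
    #include <unistd.h>

    int append_durably(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_APPEND);
        if (fd < 0)
            return -1;
        ssize_t n = write(fd, buf, len);      /* lands in the page cache */
        if (n != (ssize_t)len || fsync(fd) != 0) {  /* force data + metadata out */
            close(fd);
            return -1;
        }
        return close(fd);
    }

Obviously that trades throughput for durability, which is why tools that
stream lots of small writes don't do it by default.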

-- 
Hector Martin (hector at marcansoft.com)
Public Key: https://mrcn.st/pub

