[ceph-users] cephfs issue with moving files between data pools gives Input/output error

Janne Johansson icepic.dz at gmail.com
Tue Oct 2 06:44:01 PDT 2018


Den mån 1 okt. 2018 kl 22:08 skrev John Spray <jspray at redhat.com>:

>
> > totally new to me, and also not what I would expect of a mv on a fs. I
> > know copying between pools is the expected behaviour, also from the
> > s3cmd client. But I think most people will not expect this behaviour.
> > Can't the move be implemented as a move?
>
> In almost all filesystems, a rename (like "mv") is a pure metadata
> operation -- it doesn't involve reading all the file's data and
> re-writing it.  It would be very surprising for most users if they
> found that their "mv" command blocked for a very long time while
> waiting for a large file's content to be e.g. read out of one pool and
> written into another.


There are other networked filesystems that do behave like that: the OS
presents the whole mount as one single FS, but moving things around with
mv actually requires transferring all the data to other servers/disks,
incurring the slowness of a copy-and-delete operation.
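The distinction above can be illustrated outside Ceph entirely. The
following is a minimal sketch on a plain local POSIX filesystem (not
CephFS-specific, and the file names are made up for the example): a
rename within one filesystem leaves the inode untouched, so no data is
read or rewritten, while a copy allocates a new inode and rewrites every
byte, which is what a true cross-pool "move" would have to do.

```python
import os
import shutil
import tempfile

# Hypothetical local-FS demonstration, not CephFS-specific.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "big_file")
with open(src, "wb") as f:
    f.write(b"x" * (1 << 20))  # 1 MiB of payload

ino_before = os.stat(src).st_ino

# Rename within the same filesystem: a pure metadata operation,
# the file keeps its inode and its data blocks stay where they are.
renamed = os.path.join(tmp, "renamed_file")
os.rename(src, renamed)
ino_after_rename = os.stat(renamed).st_ino

# Copy: a new inode is allocated and all data is rewritten --
# the expensive path a cross-pool move would take.
copied = os.path.join(tmp, "copied_file")
shutil.copy2(renamed, copied)
ino_after_copy = os.stat(copied).st_ino
```

The rename preserves the inode number while the copy does not, which is
why mv on one filesystem returns instantly regardless of file size.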

-- 
May the most significant bit of your life be positive.
