[ceph-users] cephfs issue with moving files between data pools gives Input/output error

Marc Roos M.Roos at f1-outsourcing.eu
Tue Oct 2 12:31:41 PDT 2018


 
I would 'also' prefer a solution where, in the case of an mv across 
pools, the user has to wait a bit longer for the copy to finish. And as 
said before, if you export cephfs via smb or nfs, I wonder how the 
nfs/smb server will execute the move. 

If I use a 1x replicated pool on /tmp and move the file to a 3x 
replicated pool, I assume my data is actually moved there and is safer. 
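One way to check where the data actually landed is CephFS's virtual 
extended attributes, which expose the file layout (including the data 
pool) on a CephFS mount. A minimal sketch, assuming Linux and a CephFS 
mount; the helper name is mine, not a Ceph API:

```python
import os


def file_data_pool(path):
    """Return the name of the data pool holding this file's objects.

    On a CephFS mount, the virtual xattr "ceph.file.layout.pool" names
    the pool; on any other filesystem the attribute does not exist, so
    we return None instead of raising.
    """
    try:
        return os.getxattr(path, "ceph.file.layout.pool").decode()
    except OSError:
        return None
```

Running this on the file before and after the mv would show whether the 
objects really moved to the 3x pool.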




-----Original Message-----
From: Janne Johansson [mailto:icepic.dz at gmail.com] 
Sent: dinsdag 2 oktober 2018 15:44
To: jspray at redhat.com
Cc: Marc Roos; Ceph Users
Subject: Re: [ceph-users] cephfs issue with moving files between data 
pools gives Input/output error

Den mån 1 okt. 2018 kl 22:08 skrev John Spray <jspray at redhat.com>:



	> totally new for me, also not what I would expect of a mv on a fs.
	> I know it is normal to expect copying between pools, also from
	> the s3cmd client. But I think most people will not expect this
	> behaviour. Can't the move be implemented as a move?
	
	In almost all filesystems, a rename (like "mv") is a pure metadata
	operation -- it doesn't involve reading all the file's data and
	re-writing it.  It would be very surprising for most users if they
	found that their "mv" command blocked for a very long time while
	waiting for a large file's content to be e.g. read out of one pool
	and written into another.


There are other networked filesystems which do behave like that, where 
the OS thinks the whole mount is one single FS, but when you move stuff 
around with mv it actually needs to move all the data to other 
servers/disks and incurs the slowness of a copy/delete operation.
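This is the same fallback a local mv performs across mount points: 
rename(2) refuses a cross-filesystem rename with EXDEV, and the tool 
falls back to copy-then-delete. A minimal sketch of that fallback (an 
analogy for the cross-pool case, not CephFS-specific code):

```python
import errno
import os
import shutil


def move(src, dst):
    """Move src to dst the way mv does.

    Within one filesystem, rename is a pure metadata operation and
    returns almost instantly. Across filesystems the kernel refuses
    with EXDEV, and we fall back to a full data copy plus delete --
    which takes as long as the file is big.
    """
    try:
        os.rename(src, dst)  # metadata-only fast path
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dst)  # slow path: read and re-write all data
        os.unlink(src)
```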
 

-- 

May the most significant bit of your life be positive.




