[ceph-users] Recover files from cephfs data pool

Rhian Resnick xantho at sepiidae.com
Mon Nov 5 16:18:50 PST 2018


Our metadata pool grew from 700 MB to 1 TB in a few hours. It used up all the
space on its OSDs, and two ranks now report damage. The journal recovery tools
fail because they run out of memory, which leaves us with the choice of either
truncating the journal and losing data, or recovering with the scan tools.

Any ideas on solutions are welcome. I posted all the logs and the cluster
design previously, but am happy to do so again. We are not desperate, but we
are hurting from this long downtime.
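
For reference, the scan-tool route would look roughly like the sequence below.
This is only a sketch of the documented cephfs-journal-tool / cephfs-data-scan
disaster-recovery flow; <fs_name>, the rank number, and the cephfs_data pool
name are placeholders, and the exact flags can differ between Ceph releases,
so check the disaster-recovery documentation for your version first.

    # Back up the journal of each damaged rank before touching it.
    cephfs-journal-tool --rank=<fs_name>:0 journal export backup.rank0.bin

    # Salvage what the journal still holds, then reset it (this is the
    # "truncate the journal" option, which drops any unflushed metadata).
    cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary
    cephfs-journal-tool --rank=<fs_name>:0 journal reset
    cephfs-table-tool all reset session

    # Rebuild metadata from the data pool with the scan tools; scan_extents
    # and scan_inodes can be parallelised with --worker_n / --worker_m.
    cephfs-data-scan init
    cephfs-data-scan scan_extents cephfs_data
    cephfs-data-scan scan_inodes cephfs_data
    cephfs-data-scan scan_links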

On Mon, Nov 5, 2018 at 7:05 PM Sergey Malinin <hell at newmail.com> wrote:

> What kind of damage have you had? Maybe it is worth trying to get the MDS to
> start and back up the valuable data instead of doing a long-running recovery?
>
>
> On 6.11.2018, at 02:59, Rhian Resnick <xantho at sepiidae.com> wrote:
>
> Sounds like I get to have some fun tonight.
>
> On Mon, Nov 5, 2018, 6:39 PM Sergey Malinin <hell at newmail.com> wrote:
>
>> Inode linkage (i.e. the folder hierarchy) and file names are stored in the
>> omap data of objects in the metadata pool. You can write a script that
>> traverses the whole metadata pool to find out which file names correspond
>> to which objects in the data pool, and then fetch the required files with
>> the 'rados get' command (see the sketch below the quoted thread).
>>
>> > On 6.11.2018, at 02:26, Sergey Malinin <hell at newmail.com> wrote:
>> >
>> > Yes, 'rados -h'.
>> >
>> >
>> >> On 6.11.2018, at 02:25, Rhian Resnick <xantho at sepiidae.com> wrote:
>> >>
>> >> Does a tool exist to recover files from a CephFS data pool? We
>> >> are rebuilding metadata but have a user who needs data ASAP.
>> >
>>
>>
>
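
A minimal sketch of the rados-only approach Sergey describes: the pool names
(cephfs_metadata / cephfs_data) and the example inode number are assumptions,
to be replaced with your own. Directory objects in the metadata pool carry
their dentry (file and subdirectory) names as omap keys, and file contents
live in the data pool as objects named <inode hex>.<chunk index>.

    # Dump the omap keys (dentry names) of every object in the metadata pool.
    for obj in $(rados -p cephfs_metadata ls); do
        echo "== $obj =="
        rados -p cephfs_metadata listomapkeys "$obj"
    done

    # Once an inode number is known, fetch its chunks from the data pool;
    # the first chunk (4 MB by default) of a hypothetical inode 0x10000000abc:
    rados -p cephfs_data get 10000000abc.00000000 ./recovered.chunk0
    # Larger files continue in .00000001, .00000002, ...; concatenate the
    # chunks in order to rebuild the original file.

Note that mapping a dentry name to its inode number still requires decoding
the binary omap value, and data-pool objects also carry a binary 'parent'
xattr with the file's backtrace, so either direction involves some decoding
work.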