[ceph-users] Problem with CephFS

Rodrigo Embeita rodrigo at pagefreezer.com
Fri Nov 23 04:36:19 PST 2018


Hi Daniel, thanks a lot for your help.
Do you know how I can recover the data in this scenario, given that I lost
1 node with 6 OSDs?
My configuration had 12 OSDs (6 per host).

Regards

On Wed, Nov 21, 2018 at 3:16 PM Daniel Baumann <daniel.baumann at bfh.ch>
wrote:

> Hi,
>
> On 11/21/2018 07:04 PM, Rodrigo Embeita wrote:
> >             Reduced data availability: 7 pgs inactive, 7 pgs down
>
> this is your first problem: unless you have all data available again,
> cephfs will not be back.
>
> after that, I would take care of redundancy next and get the one
> missing monitor back online.
>
> once that is done, get the mds working again and your cephfs should be
> back in service.
>
> if you encounter problems with any of the steps, send all the necessary
> commands and outputs to the list and I (or others) can try to help.
>
> Regards,
> Daniel
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
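The triage sequence Daniel outlines (data availability first, then monitor quorum, then the MDS) can be sketched with standard ceph CLI commands. This is a hedged sketch, not a fix script: it only inspects state, the exact output depends on the cluster, and the commented-out commands (the example PG id 1.2f, the mon restart) are placeholders you would substitute from your own `ceph health detail` output.

```shell
#!/bin/sh
# Sketch of the inspection sequence; requires access to a Ceph cluster.
# Guard so the script documents the steps even where the ceph CLI is absent.
command -v ceph >/dev/null 2>&1 || {
    echo "ceph CLI not found; commands below shown for reference"
    exit 0
}

# 1. Data availability: list the inactive/down PGs and where they map.
ceph health detail
ceph pg dump_stuck inactive
# For one of the reported PGs (1.2f is a placeholder id), see which OSDs
# it needs and why it is down:
# ceph pg 1.2f query

# 2. Monitor redundancy: check quorum, then restart the missing mon on
# its host (hostname is a placeholder):
ceph mon stat
# systemctl start ceph-mon@<hostname>

# 3. MDS / CephFS: once the PGs are active+clean, verify the filesystem.
ceph mds stat
ceph fs status
```

Running the read-only commands first is deliberate: `ceph pg query` on one of the 7 down PGs will tell you whether the PGs are merely waiting on the lost node's OSDs (recoverable if that node can be brought back) or have no complete copy left.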