[ceph-users] Don't upgrade to 13.2.2 if you use cephfs

Daniel Carrasco d.carrasco at i2tic.com
Mon Oct 8 02:53:01 PDT 2018


On Mon, Oct 8, 2018 at 5:44, Yan, Zheng <ukernel at gmail.com> wrote:

> On Mon, Oct 8, 2018 at 11:34 AM Daniel Carrasco <d.carrasco at i2tic.com>
> wrote:
> >
> > I've had several problems on 12.2.8 too. All my standby MDS daemons use
> > a lot of memory (while the active one uses a normal amount), and I'm
> > receiving a lot of slow-MDS messages (causing the website to freeze and
> > fail until the MDS daemons are restarted)... In the end I had to copy the
> > entire site to DRBD and serve it over NFS to solve all the problems...
> >
>
> Was standby-replay enabled?
>

I've tried both and seen more or less the same behavior, though maybe a
bit less memory usage when standby-replay is not enabled.

Anyway, we've deactivated CephFS there for now. I'll try older versions in
a test environment.
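
In case it is useful to anyone hitting the same thing, here is a rough
sketch of how to check standby-replay and MDS memory from the CLI ("myfs"
and "mds.a" are placeholder names; note that on Luminous standby-replay is
the per-daemon mds_standby_replay option in ceph.conf, while Mimic added a
per-filesystem flag):

  # Show the MDS map, including active, standby and standby-replay daemons
  ceph fs dump

  # Check the configured cache limit and heap usage of a suspect MDS
  ceph daemon mds.a config get mds_cache_memory_limit
  ceph tell mds.a heap stats

  # On Mimic, standby-replay can be toggled per filesystem
  ceph fs set myfs allow_standby_replay false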


> > On Mon, Oct 8, 2018 at 5:21, Alex Litvak
> > <alexander.v.litvak at gmail.com> wrote:
> >>
> >> How is this not an emergency announcement? Also, I wonder whether I can
> >> downgrade at all? I am running Ceph in Docker, deployed with
> >> ceph-ansible, and I'm not sure whether I should push a downgrade or
> >> simply wait for the fix. I believe a fix needs to be provided.
> >>
> >> Thank you,
> >>
> >> On 10/7/2018 9:30 PM, Yan, Zheng wrote:
> >> > There is a bug in the v13.2.2 MDS that causes purge-queue decoding to
> >> > fail. If an MDS is already in the damaged state, please downgrade it
> >> > to 13.2.1, then run 'ceph mds repaired fs_name:damaged_rank'.
> >> >
> >> > Sorry for all the trouble I caused.
> >> > Yan, Zheng
> >> >
> >>
> >
> > --
> > _________________________________________
> >
> >       Daniel Carrasco Marín
> >       Ingeniería para la Innovación i2TIC, S.L.
> >       Tlf:  +34 911 12 32 84 Ext: 223
> >       www.i2tic.com
> > _________________________________________
>
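
For reference, the recovery sequence Zheng describes above would look
roughly like this (a sketch only, assuming a filesystem named "myfs" with
damaged rank 0 on RPM-based hosts; package and service handling will differ
for Docker/ceph-ansible deployments):

  # Confirm the damaged rank and the running daemon versions
  ceph health detail
  ceph fs status
  ceph versions

  # Stop the MDS daemons and downgrade the package to 13.2.1
  systemctl stop ceph-mds.target
  yum downgrade ceph-mds-13.2.1    # assumption: adjust to your package manager
  systemctl start ceph-mds.target

  # Mark the damaged rank as repaired so the downgraded MDS can replay
  ceph mds repaired myfs:0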