[ceph-users] Don't upgrade to 13.2.2 if you use cephfs
paul.emmerich at croit.io
Wed Oct 17 13:36:15 PDT 2018
CephFS will be offline and show up as "damaged" in ceph -s
The fix is to downgrade to 13.2.1 and issue a "ceph mds repaired <rank>" command.
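A rough sketch of the recovery steps, assuming a single damaged rank 0 (the rank number is an assumption; check the damaged rank reported on your own cluster first), using the `ceph mds repaired` form documented for marking a damaged rank as repaired:

```shell
# After downgrading the MDS packages back to 13.2.1 and restarting
# the MDS daemons, see which rank is marked damaged:
ceph status
ceph fs status

# Mark the damaged rank as repaired (rank 0 here; adjust for your
# cluster) so an MDS can take it over again:
ceph mds repaired 0

# Watch the MDS come back to active:
ceph -s
```

This is a sketch of the downgrade-and-repair procedure, not a full disaster-recovery guide; take the usual precautions (backups, reading the CephFS disaster recovery documentation) before running repair commands.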
On Wed, Oct 17, 2018 at 9:53 PM Michael Sudnick
<michael.sudnick at gmail.com> wrote:
> What exactly are the symptoms of the problem? I use CephFS on 13.2.2 with two active MDS daemons, and at least on the surface everything looks fine. Is there anything I should avoid doing until 13.2.3?
> On Wed, Oct 17, 2018, 14:10 Patrick Donnelly <pdonnell at redhat.com> wrote:
>> On Wed, Oct 17, 2018 at 11:05 AM Alexandre DERUMIER <aderumier at odiso.com> wrote:
>> > Hi,
>> > Is it possible to have more info or an announcement about this problem?
>> > I'm currently waiting to migrate from Luminous to Mimic (I need the new quota feature for CephFS).
>> > Is it safe to upgrade to 13.2.2?
>> > Or better to wait for 13.2.3? Or install 13.2.1 for now?
>> Upgrading to 13.2.1 would be safe.
>> Patrick Donnelly
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
Looking for help with your Ceph cluster? Contact us at https://croit.io
Tel: +49 89 1896585 90