[ceph-users] ceph 12.2.9 release

Ashley Merrick singapore at amerrick.co.uk
Wed Nov 7 08:16:05 PST 2018


I am seeing this on the latest Mimic on my test cluster as well.

Every automatic deep-scrub comes back inconsistent, but a subsequent manual
scrub comes back clean every time.

Not sure if it's related or not...
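For anyone hitting the same symptoms, the stock Luminous/Mimic CLI can show
which PGs and objects disagree before deciding whether to repair (the PG ID
2.5 below is just a placeholder for one of your own inconsistent PGs):

```shell
# List current cluster warnings, including which PGs are inconsistent
ceph health detail

# List only the PGs currently flagged inconsistent
ceph pg ls inconsistent

# Show per-object scrub errors for one PG (populated after a deep-scrub)
rados list-inconsistent-obj 2.5 --format=json-pretty

# Re-run a deep scrub on that PG to see whether the errors reproduce
ceph pg deep-scrub 2.5
```

Note that `ceph pg repair` overwrites replicas from the primary copy, so
inspecting the inconsistent objects first is safer than repairing blindly.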

On Wed, 7 Nov 2018 at 11:57 PM, Christoph Adomeit <
Christoph.Adomeit at gatworks.de> wrote:

> Hello together,
>
> we have upgraded to 12.2.9 because it was in the official repos.
>
> Right after the update and some scrubs we have issues.
>
> This morning after regular scrubs we had around 10% of all pgs inconsistent:
>
> pgs:     4036 active+clean
>           380  active+clean+inconsistent
>
> After repairing these 380 pgs we again have:
>
> 1/93611534 objects unfound (0.000%)
> 28   active+clean+inconsistent
> 1    active+recovery_wait+degraded
>
> Now we stopped repairing because it does not seem to solve the problem and
> more and more error messages are occurring. So far we have not seen
> corruption, but we are not comfortable with the state of the cluster.
>
> What do you suggest: wait for 12.2.10, or roll back to 12.2.8?
>
> Is it dangerous for our data to leave the cluster running?
>
> I am sure we do not have hardware errors and that these errors came with
> the update to 12.2.9.
>
> Thanks
>   Christoph
>
>
>
> On Wed, Nov 07, 2018 at 07:39:59AM -0800, Gregory Farnum wrote:
> > On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside <sironside at caffetine.org>
> > wrote:
> >
> > >
> > >
> > > On 07/11/2018 10:59, Konstantin Shalygin wrote:
> > > >> I wonder if there is any release announcement for ceph 12.2.9 that I
> > > >> missed.
> > > >> I just found the new packages on download.ceph.com, is this an
> > > >> official release?
> > > >
> > > > This is because 12.2.9 has several bugs. You should avoid using this
> > > > release and wait for 12.2.10.
> > >
> > > Argh! What's it doing in the repos then?? I've just upgraded to it!
> > > What are the bugs? Is there a thread about them?
> >
> >
> > If you’ve already upgraded and have no issues then you won’t have any
> > trouble going forward — except perhaps on the next upgrade, if you do it
> > while the cluster is unhealthy.
> >
> > I agree that it’s annoying when these issues make it out. We’ve had ongoing
> > discussions to try and improve the release process so it’s less drawn-out
> > and to prevent these upgrade issues from making it through testing, but
> > nobody has resolved it yet. If anybody has experience working with deb
> > repositories and handling releases, the Ceph upstream could use some
> > help... ;)
> > -Greg
> >
> >
> > >
> > > Simon
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users at lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
>
>
>

