[ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

Eric Nelson ericnelson at gmail.com
Thu Nov 16 12:29:54 PST 2017


We upgraded to it a few weeks ago in order to get some of the new indexing
features, but we have also hit a few nasty bugs in the process (including this
one) while upgrading OSDs from FileStore to BlueStore. Currently
these are isolated to our SSD cache tier, so I've been evicting everything
in hopes of removing the tier and getting a functional cluster again :-/
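For anyone in a similar spot, the usual tier-removal sequence looks roughly like the following. This is a sketch, not our exact procedure; the pool names (base-pool, hot-cache) are placeholders for your own pools, and you should confirm the cache pool is actually empty before detaching it:

```shell
# Stop the tier from taking new writes, then drain it.
# (In Luminous, switching a writeback tier to "forward" needs the
# --yes-i-really-mean-it flag.)
ceph osd tier cache-mode hot-cache forward --yes-i-really-mean-it

# Flush dirty objects and evict everything from the cache pool.
rados -p hot-cache cache-flush-evict-all

# Once the cache pool is empty, detach it from the base pool.
ceph osd tier remove-overlay base-pool
ceph osd tier remove base-pool hot-cache
```

These commands run against a live cluster, so test them on a non-production pool first.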

On Thu, Nov 16, 2017 at 6:33 AM, Ashley Merrick <ashley at amerrick.co.uk>
wrote:

> Currently experiencing a nasty bug http://tracker.ceph.com/issues/21142
>
> I would say wait a while for the next point release.
>
> ,Ashley
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf Of
> Jack
> Sent: 16 November 2017 22:22
> To: ceph-users at lists.ceph.com
> Subject: Re: [ceph-users] Is the 12.2.1 really stable? Anybody have
> production cluster with Luminous Bluestore?
>
> My cluster (55 OSDs) has been running 12.2.x since the release, with
> BlueStore too. All good so far.
>
> On 16/11/2017 15:14, Konstantin Shalygin wrote:
> > Hi cephers.
> > Some thoughts...
> > At this time my cluster runs Kraken 11.2.0 and works smoothly, with
> > FileStore and RBD only.
> > I want to upgrade to Luminous 12.2.1 and move to BlueStore, because
> > this cluster is about to double in size with new disks, so this is
> > the best opportunity to migrate to BlueStore.
> >
> > In the ML archives I found two problems:
> > 1. Increased memory usage, which should be fixed upstream
> > (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html).
> >
> > 2. OSDs drop out and the cluster goes offline
> > (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022494.html).
> > I don't know whether this affects BlueStore or FileStore OSDs.
> >
> > The first case I can safely survive: the hosts have enough memory to
> > move to BlueStore, and as the cluster grows I can wait for the next
> > stable release.
> > The second case really scares me. As I understand it, the clusters
> > hit by this problem are not yet in production.
> >
> > At this point I have completed all the preparations for the upgrade,
> > and now I need to decide whether to upgrade to 12.2.1 or wait for the
> > next stable release, because my cluster is in production and I cannot
> > afford a failure. Alternatively, I could upgrade and keep using
> > FileStore until the next release; that is acceptable for me.
> >
> > Thanks.
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users at lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>