[ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

Cassiano Pilipavicius cassiano at tips.com.br
Fri Nov 17 00:48:27 PST 2017


I have a small cluster on 12.2.1, used only for storing VHDs on RBD, and 
it has been pretty stable so far. I upgraded from Jewel to Luminous, and the 
only thing that caused instability right after the upgrade was that I was 
using jemalloc for the OSDs: after converting the OSDs to BlueStore they 
started crashing. Commenting out the jemalloc line in /etc/sysconfig/ceph 
solved all the issues with my crashing OSDs.
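
For reference, the line I mean is the one that preloads jemalloc for the 
Ceph daemons. The exact library path is distribution-specific, so take this 
as an illustrative sketch rather than my exact file:

    # /etc/sysconfig/ceph
    # Preloading jemalloc worked fine with FileStore, but my BlueStore OSDs
    # started crashing with it, so the line now stays commented out and the
    # OSDs fall back to the default tcmalloc:
    #LD_PRELOAD=/usr/lib64/libjemalloc.so.1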



On 11/16/2017 9:55 PM, Linh Vu wrote:
>
> We have a small prod cluster with 12.2.1 and bluestore, running just 
> cephfs, for HPC use. It's been in prod for about 7 weeks now, and 
> pretty stable. 😊
>
> ------------------------------------------------------------------------
> *From:* ceph-users <ceph-users-bounces at lists.ceph.com> on behalf of 
> Eric Nelson <ericnelson at gmail.com>
> *Sent:* Friday, 17 November 2017 7:29:54 AM
> *To:* Ashley Merrick
> *Cc:* ceph-users at lists.ceph.com
> *Subject:* Re: [ceph-users] Is the 12.2.1 really stable? Anybody have 
> production cluster with Luminous Bluestore?
> We upgraded to it a few weeks ago in order to get some of the new 
> indexing features, but have also hit a few nasty bugs in the process 
> (including this one) as we have been upgrading OSDs from FileStore to 
> BlueStore. Currently these are isolated to our SSD cache tier, so I've 
> been evicting everything in the hope of removing the tier and getting a 
> functional cluster again :-/
>
> On Thu, Nov 16, 2017 at 6:33 AM, Ashley Merrick <ashley at amerrick.co.uk> wrote:
>
>     Currently experiencing a nasty bug
>     http://tracker.ceph.com/issues/21142
>
>     I would say wait a while for the next point release.
>
>     ,Ashley
>
>     -----Original Message-----
>     From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On
>     Behalf Of Jack
>     Sent: 16 November 2017 22:22
>     To: ceph-users at lists.ceph.com
>     Subject: Re: [ceph-users] Is the 12.2.1 really stable? Anybody
>     have production cluster with Luminous Bluestore?
>
>     My cluster (55 OSDs) has been running 12.2.x since the release, on
>     BlueStore too. All good so far.
>
>     On 16/11/2017 15:14, Konstantin Shalygin wrote:
>     > Hi cephers.
>     > Some thoughts...
>     > At this time my cluster is on Kraken 11.2.0 and works smoothly,
>     > with FileStore and RBD only.
>     > I want to upgrade to Luminous 12.2.1 and move to BlueStore, because
>     > this cluster is about to double in size with new disks, so it is
>     > the best opportunity to migrate to BlueStore.
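>     >
>     > The per-OSD conversion I have in mind is the replace-and-rebuild
>     > approach from the Luminous docs, roughly as below ($ID and /dev/sdX
>     > are placeholders, and I still need to verify the exact commands
>     > against the 12.2.1 documentation):
>     >
>     > ceph osd out $ID
>     > # wait until data has migrated off and the OSD is safe to remove
>     > while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done
>     > systemctl stop ceph-osd@$ID
>     > ceph-volume lvm zap /dev/sdX
>     > ceph osd destroy $ID --yes-i-really-mean-it
>     > ceph-volume lvm create --bluestore --data /dev/sdX --osd-id $ID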
>     >
>     > On the mailing list I found two problems:
>     > 1. Increased memory usage, which should be fixed upstream
>     > (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html).
>     >
>     > 2. OSDs dropping out and the cluster going offline
>     > (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022494.html).
>     > I don't know whether those were BlueStore or FileStore OSDs.
>     >
>     > The first case I can safely survive: the hosts have enough memory
>     > to go to BlueStore, and as for the growth I can wait until the next
>     > stable release. The second case really scares me. As I understand
>     > it, the clusters with this problem are not in production for now.
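>     >
>     > If memory did become a problem with BlueStore before the fix lands,
>     > I assume the per-OSD cache can simply be capped in ceph.conf,
>     > something like the sketch below (the values are only an example,
>     > not a tested recommendation):
>     >
>     > [osd]
>     > # Luminous defaults are roughly 1 GiB of cache per HDD OSD and
>     > # 3 GiB per SSD OSD; lowering them trades read cache for a smaller
>     > # memory footprint.
>     > bluestore_cache_size_hdd = 1073741824
>     > bluestore_cache_size_ssd = 1073741824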
>     >
>     > By this point I have completed all the preparations for the update,
>     > and now I need to figure out whether I should update to 12.2.1 or
>     > wait for the next stable release, because my cluster is in
>     > production and I cannot afford a failure. Alternatively, I could
>     > upgrade but keep using FileStore until the next release; that is
>     > acceptable for me.
>     >
>     > Thanks.
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


