[ceph-users] Running Jewel and Luminous mixed for a longer period

Rafael Lopez rafael.lopez at monash.edu
Tue Dec 5 16:54:29 PST 2017


>
> Yes, you can run luminous on Trusty; one of my clusters is currently
> Luminous/Bluestore/Trusty as I've not had time to sort out doing OS
> upgrades on it. I second the suggestion that it would be better to do the
> luminous upgrade first, retaining existing filestore OSDs, and then do the
> OS upgrade/OSD recreation on each node in sequence. I don't think there
> should realistically be any problems with running a mixed cluster for a
> while but doing the jewel->luminous upgrade on the existing installs first
> shouldn't be significant extra effort/time as you're already predicting at
> least two months to upgrade everything, and it does minimise the amount of
> change at any one time in case things do start going horribly wrong.
>
> Also, at 48 nodes, I would've thought you could get away with cycling more
> than one of them at once. Assuming they're homogeneous, taking out even 4 at
> a time should only raise utilisation on the rest of the cluster to a little
> over 65%, which still seems safe to me, and you'd waste way less time
> waiting for recovery. (I recognise that depending on the nature of your
> employment situation this may not actually be desirable...)
>
> Rich
>
>
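The "a little over 65%" figure quoted above follows from simple proportional math: draining 4 of 48 homogeneous nodes spreads the same data over the remaining 44. The starting utilisation of 60% below is an assumption for illustration, not a number stated in the quote; adjust it to your own cluster:

```shell
# Back-of-envelope check: same data, fewer nodes.
# nodes/removed/pct are hypothetical example values.
nodes=48; removed=4; pct=60
awk -v n="$nodes" -v r="$removed" -v u="$pct" \
    'BEGIN { printf "resulting utilisation: %.1f%%\n", u * n / (n - r) }'
# prints "resulting utilisation: 65.5%"
```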
I also agree with this approach. We actually did the reverse: updated the OS
on all nodes from precise/trusty to xenial while the cluster was still running
hammer. The only thing we had to fiddle with was init (i.e. no systemd unit
files shipped with hammer), but you can write basic script(s) to start/stop
all the OSDs manually. This was fine for us, particularly since we didn't
intend to stay in that state for long, and we eventually upgraded to jewel,
with luminous soon to follow. In your case, since trusty is supported in
luminous, I don't think you would have any trouble with this.
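For reference, the "basic script(s)" mentioned above can be little more than a loop over the OSD data directories. This is a minimal sketch, assuming the default /var/lib/ceph/osd/ceph-<id> filestore layout and the stock /var/run/ceph pid-file path (both assumptions; check your ceph.conf). It prints the commands rather than executing them, so you can eyeball the output before piping it to sh:

```shell
#!/bin/sh
# Manual OSD start/stop helper for hosts whose ceph package ships no
# usable init integration. OSD_ROOT can be overridden for testing.
OSD_ROOT="${OSD_ROOT:-/var/lib/ceph/osd}"

osd_cmds() {   # $1 = start|stop; prints one command per local OSD
    for dir in "$OSD_ROOT"/ceph-*; do
        [ -d "$dir" ] || continue          # skip unmatched glob
        id="${dir##*-}"                    # OSD id from directory name
        case "$1" in
            start) echo "ceph-osd -i $id" ;;
            stop)  echo "kill \$(cat /var/run/ceph/osd.$id.pid)" ;;
        esac
    done
}

# Dry run: show what would be executed (pipe to sh to actually run it).
osd_cmds "${1:-start}"
```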


-- 
*Rafael Lopez*
Research Devops Engineer
Monash University eResearch Centre
E: rafael.lopez at monash.edu