[ceph-users] Ubuntu upgrade Zesty => Aardvark, Implications for Ceph?

Gregory Farnum gfarnum at redhat.com
Sun Nov 19 02:32:57 PST 2017


On Mon, Nov 13, 2017 at 10:42 PM Ranjan Ghosh <ghosh at pw6.de> wrote:

> Hi everyone,
>
> In January, support for Ubuntu Zesty will run out and we're planning to
> upgrade our servers to Artful Aardvark. We have a two-node cluster (plus
> one additional monitor-only server) and we're using the packages that come
> with the distro. We have CephFS mounted on the same servers with the
> kernel client via fstab. AFAIK, Artful includes Ceph 12 (Luminous). What
> would happen if we used the usual "do-release-upgrade" to upgrade the
> servers one by one? I assume the procedure described at
> http://ceph.com/releases/v12-2-0-luminous-released/ (section "Upgrade
> from Jewel or Kraken") probably won't work for us, because
> "do-release-upgrade" upgrades all packages (including the Ceph ones) at
> once and then reboots the machine, so we cannot upgrade only the monitor
> nodes first. And I'd rather avoid switching to PPAs beforehand. So, what
> are the real consequences if we upgrade all servers one by one with
> "do-release-upgrade" and then reboot all the nodes? Is it only the
> downtime that makes this inadvisable, or could we lose data? Any other
> recommendations on how to tackle this?
>
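For reference, the "Upgrade from Jewel or Kraken" section you linked boils
down to a rolling sequence along these lines (a rough sketch in shell, not
the verbatim release notes):

    ceph osd set noout                      # optional, avoids rebalancing while daemons restart
    # upgrade and restart the ceph-mon daemons first, one node at a time
    # then upgrade and restart the ceph-osd daemons, one node at a time
    # then upgrade and restart any ceph-mds daemons
    ceph osd require-osd-release luminous   # once every OSD is running Luminous
    ceph osd unset noout
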
It's just the downtime that prevents people from doing stuff like this. If
that's not a concern for you, it won't hurt your data, although you may
need to poke at the services a bit to persuade them all to get going again.
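
In practice that usually just means checking the cluster state and
restarting whatever daemons didn't come back cleanly after the reboot,
roughly along these lines (an untested sketch; <mon-id>, <osd-id> and
<mds-id> are placeholders for your daemon names):

    ceph -s                                    # overall cluster health
    ceph versions                              # confirm all daemons report 12.x
    sudo systemctl restart ceph-mon@<mon-id>   # on each monitor node
    sudo systemctl restart ceph-osd@<osd-id>   # for each OSD that didn't start
    sudo systemctl restart ceph-mds@<mds-id>   # on the MDS node(s)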

Do keep in mind that while it should work, I'm not aware of anybody testing
this.
-Greg


> Thank you / BR
>
> Ranjan
>