[ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)

Hunter Nield hunter at acale.ph
Mon Nov 6 01:25:57 PST 2017

I’m not sure how I missed this earlier on the lists, but having done a
lot of work on Ceph helm charts ourselves, this is of definite interest
to us. We’ve been running various configurations of Ceph in Docker and
Kubernetes (in production environments) for over a year now.

There is a lot of overlap between Rook and the Ceph-related projects
(ceph-docker/ceph-container and now ceph-helm) and I agree with Bassam
about finding ways of bringing things closer. Having felt (and
contributed to) the pain of the complex ceph-container entrypoint
scripts, I can’t overstate the importance of the simpler initial
configuration and user experience of Rook and its Operator approach.
There is a definite need for a vanilla project to run Ceph on
Kubernetes, but the most useful part lies in encapsulating the
day-to-day operation of a Ceph cluster in a way that builds on the
strengths of Kubernetes (CRDs, Operators, dynamic scaling, etc.).
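
To make that concrete, the declarative approach looks roughly like
this — a sketch of a Rook-style Cluster resource based on the v1alpha1
CRDs Rook was shipping around this time (field names are from memory
and may differ between releases):

    apiVersion: rook.io/v1alpha1
    kind: Cluster
    metadata:
      name: rook
      namespace: rook
    spec:
      # host path where each node keeps its cluster state
      dataDirHostPath: /var/lib/rook
      storage:
        useAllNodes: true      # let the operator place daemons on any node
        useAllDevices: false   # but don't consume raw devices in this sketch

The operator watches for objects like this and drives the mons/OSDs to
match, so a single kubectl create -f cluster.yaml is effectively the
whole bootstrap.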

Looking forward to seeing where this goes (and joining the discussion)


On Sat, Nov 4, 2017 at 5:13 AM Bassam Tabbara <bassam at tabbara.com> wrote:

> (sorry for the late response, just catching up on ceph-users)
> > Probably the main difference is that ceph-helm aims to run Ceph as
> > part of the container infrastructure.  The containers are privileged
> > so they can interact with hardware where needed (e.g., lvm for
> > dm-crypt) and the cluster runs on the host network.  We use
> > kubernetes for some orchestration: kube is a bit of a headache for
> > mons and osds but will be very helpful for scheduling everything
> > else: mgrs, rgw, rgw-nfs, iscsi, mds, ganesha, samba, rbd-mirror,
> > etc.
> >
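
(As a concrete point of comparison: ceph-helm drives the same decisions
through chart values rather than CRDs. A rough sketch of a
ceph-overrides.yaml along the lines of the ceph-helm docs — the exact
keys here are from memory and may differ:

    network:
      public:  172.21.0.0/20   # host network the mons bind to
      cluster: 172.21.0.0/20   # replication/backfill traffic
    osd_devices:
      - name: dev-sdb
        device: /dev/sdb
        zap: "1"               # wipe the device before preparing it

passed in with something like helm install --name=ceph local/ceph
--namespace=ceph -f ceph-overrides.yaml.)
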
> > Rook, as I understand it at least (the rook folks on the list can
> > speak up here), aims to run Ceph more as a tenant of kubernetes.
> > The cluster runs in the container network space, and the aim is to
> > be able to deploy ceph more like an unprivileged application on,
> > e.g., a public cloud providing kubernetes as the cloud api.
> Yes, Rook’s goal is to run wherever Kubernetes runs without making
> changes at the host level. Eventually we plan to remove the need to
> run some of the containers privileged, and to automatically work with
> different kernel versions and heterogeneous environments. It’s fair to
> think of Rook as an application running on Kubernetes. As a result,
> you could run it on AWS, Google, bare metal, or wherever.
> > The other difference is around rook-operator, which is the thing
> > that lets you declare what you want (ceph clusters, pools, etc.) via
> > kubectl and goes off and creates the cluster(s) and tells it/them
> > what to do.  It makes the storage look like it is tightly integrated
> > with and part of kubernetes, but means that kubectl becomes the
> > interface for ceph cluster management.
> Rook extends Kubernetes to understand storage concepts like Pool,
> Object Store, and Filesystem. Our goal is for storage to be integrated
> deeply into Kubernetes. That said, you can easily launch the Rook
> toolbox and use the ceph tools at any point. I don’t think the goal is
> for Rook to replace the ceph tools, but instead to offer a
> Kubernetes-native alternative to them.
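
(This is the part we find most compelling: pools and the like become
just another Kubernetes object. Roughly, with the early v1alpha1 API —
again, schema from memory and subject to change:

    apiVersion: rook.io/v1alpha1
    kind: Pool
    metadata:
      name: replicapool
      namespace: rook
    spec:
      replication:
        size: 3   # three-way replication; erasure coding is the other option

and a kubectl create -f pool.yaml has the operator call into Ceph to
create it, so pools live in the same declarative config as the rest of
the application.)
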
> > Some of that seems useful to me (still developing opinions here!)
> > and perhaps isn't so different than the declarations in your chart's
> > values.yaml, but I'm unsure about the wisdom of going too far down
> > the road of administering ceph via yaml.
> >
> > Anyway, I'm still pretty new to kubernetes-land and very interested in
> > hearing what people are interested in or looking for here!
> Would be great to find ways to get these two projects closer.
> Bassam