[ceph-users] Full L3 Ceph

Lazuardi Nasution mrxlazuardin at gmail.com
Mon Mar 18 15:28:47 PDT 2019


Hi Stefan,

I think I have missed your reply. I'm interested in knowing how you manage
performance when running Ceph on a host-based VXLAN overlay. Maybe you can
share a comparison, for a better understanding of the possible performance
impact.

Best regards,


> Date: Sun, 25 Nov 2018 21:17:34 +0100
> From: Stefan Kooman <stefan at bit.nl>
> To: "Robin H. Johnson" <robbat2 at gentoo.org>
> Cc: Ceph Users <ceph-users at lists.ceph.com>
> Subject: Re: [ceph-users] Full L3 Ceph
> Message-ID: <20181125201734.GC17245 at shell.dmz.bit.nl>
> Content-Type: text/plain; charset="us-ascii"
>
> Quoting Robin H. Johnson (robbat2 at gentoo.org):
> > On Fri, Nov 23, 2018 at 04:03:25AM +0700, Lazuardi Nasution wrote:
> > > I'm looking for an example Ceph configuration and topology for a full
> > > layer 3 networking deployment. Maybe all daemons can use loopback alias
> > > addresses in this case. But how should the cluster network and public
> > > network configuration be set, using a supernet? I think using loopback
> > > alias addresses can prevent the daemons from going down due to physical
> > > interface disconnection, and can load balance traffic between physical
> > > interfaces without interface bonding, but with ECMP.
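
(To make my question concrete: the supernet variant I had in mind would look
roughly like the sketch below; all addresses here are hypothetical.)

    # /etc/ceph/ceph.conf -- supernet sketch, hypothetical addressing
    [global]
    # supernets covering every host's /32 loopback-alias address
    public network  = 10.0.0.0/16
    cluster network = 10.1.0.0/16

    # per host: a /32 service address on a loopback alias, reachable
    # over either physical uplink via ECMP, no bonding needed
    ip addr add 10.0.0.11/32 dev lo label lo:ceph
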
> > I can say I've done something similar, but I don't have access to that
> > environment or most of the configuration anymore.
> >
> > One of the parts I do recall was explicitly setting cluster_network
> > and public_network to empty strings, AND using public_addr+cluster_addr
> > instead, with routable addressing on dummy interfaces (NOT loopback).
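
(If I read Robin's description correctly, that approach would be something
like the sketch below; "public network", "cluster network", "public addr"
and "cluster addr" are real Ceph options, but all addresses and the
interface name are made up.)

    # /etc/ceph/ceph.conf -- sketch, hypothetical addressing
    [global]
    # networks explicitly empty, per-daemon addrs used instead
    public network  =
    cluster network =

    [osd.0]
    public addr  = 192.0.2.11
    cluster addr = 192.0.2.111

    # host side: the routable /32 lives on a dummy interface (not loopback)
    # and is announced to the fabric so ECMP spreads load over the uplinks
    ip link add ceph0 type dummy
    ip addr add 192.0.2.11/32 dev ceph0
    ip link set ceph0 up
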
>
> You can do this with MP-BGP (VXLAN) EVPN. We are running it like that.
> IPv6 overlay network only. ECMP to make use of all the links. We don't
> use a separate cluster network. That only complicates things, and
> there's no real use for it (trademark by Wido den Hollander). If you
> want to use BGP on the hosts themselves, have a look at this post by
> Vincent Bernat (great writeups of complex networking stuff) [1]. You can
> use "MC-LAG" on the host to get redundant connectivity, or use "Type 4"
> EVPN to get endpoint redundancy (Ethernet Segment Route). FRR 6.0 has
> support for most of this (not yet "Type 4" EVPN support IIRC) [2].
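
(For reference, a minimal FRR sketch of such a setup; the ASN, peer address
and EVPN specifics are assumptions, adapt them to your fabric.)

    ! /etc/frr/frr.conf -- sketch, hypothetical ASN and peer
    router bgp 65010
     neighbor 2001:db8::1 remote-as 65000
     !
     address-family ipv6 unicast
      neighbor 2001:db8::1 activate
      ! ECMP across all uplinks
      maximum-paths 4
     exit-address-family
     !
     address-family l2vpn evpn
      neighbor 2001:db8::1 activate
      advertise-all-vni
     exit-address-family
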
>
> We use a network namespace to separate (IPv6) management traffic
> from production traffic. This complicates Ceph deployment a lot, but in
> the end it's worth it.
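
(Roughly, such a namespace split could be set up like this; the interface
name and addresses are assumptions.)

    # sketch: keep (IPv6) management traffic in its own namespace
    ip netns add mgmt
    ip link set eth2 netns mgmt        # hypothetical management NIC
    ip netns exec mgmt ip -6 addr add 2001:db8:aaaa::11/64 dev eth2
    ip netns exec mgmt ip link set eth2 up
    # management daemons then run inside the namespace, e.g.:
    ip netns exec mgmt /usr/sbin/sshd
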
>
> Gr. Stefan
>
> [1]: https://vincent.bernat.ch/en/blog/2017-vxlan-bgp-evpn
> [2]: https://frrouting.org/
>
>
> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info at bit.nl
>