[ceph-users] Ceph cluster network bandwidth?

David Turner drakonstein at gmail.com
Thu Nov 16 07:32:45 PST 2017

That depends on another question.  Does the client write all 3 copies, or
does the client send the write to the primary OSD, which then sends it to
the secondaries?  Someone asked this recently, but I don't recall if an
answer was given, and I'm not actually certain which is the case.  If it's
the latter, then the 10Gb pipe from the client is all you need.

If I had to guess, the client sends the writes to all OSDs, but maxing out
the 10Gb pipe for one client isn't really your concern.  Few use cases
have a single client using 100% of the bandwidth.  For RGW, spin up a few
more RGW daemons and balance them with a load balancer.  With CephFS, the
clients communicate with the OSDs directly, and you probably shouldn't use
a network FS for a single client anyway.  RBD is the most likely place
this could happen, but few 6-server deployments are driven by a single
client using all of the RBDs.  What I'm getting at is that 3 clients with
10Gb each can come pretty close to fully saturating the 10Gb ethernet on
the cluster, or at least to the point where the network pipe is not the
bottleneck (OSD node CPU, OSD spindle speeds, etc. will be).
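To make the sizing arithmetic concrete: if the primary-OSD fan-out model is the right one, the cluster (replication) network carries roughly (replicas - 1) times the client write rate, since the primary forwards one copy to each secondary. A rough sketch of that estimate (the `cluster_net_gbps` helper is purely illustrative, not a Ceph API, and it ignores journaling, recovery, and backfill traffic):

```python
def cluster_net_gbps(client_write_gbps: float, replica_size: int) -> float:
    """Estimate replication traffic on the cluster network.

    Assumes primary-copy replication: the client sends one copy to the
    primary OSD, and the primary fans out (replica_size - 1) copies to
    the secondaries over the cluster network.
    """
    return client_write_gbps * (replica_size - 1)


if __name__ == "__main__":
    # A client writing at a full 10 Gb/s with size=3 generates about
    # 2 x 10 = 20 Gb/s of east-west replication traffic in aggregate.
    print(cluster_net_gbps(10.0, 3))
```

By this estimate the cluster network needs about 2x the sustained client write bandwidth, not 3x; in practice recovery and backfill can spike well above steady-state replication, which is one argument for some headroom.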

On Thu, Nov 16, 2017 at 9:46 AM Sam Huracan <nowitzki.sammy at gmail.com> wrote:

> Hi,
> We intend to build a new Ceph cluster with 6 Ceph OSD hosts, 10 SAS disks
> per host, using a 10Gbps NIC for the client network, with objects
> replicated 3 times.
> So, how should I size the cluster network for best performance?
> As I have read, 3x replication means 3x the client network bandwidth = 30
> Gbps. Is that true? I think it is too much and adds great cost.
> Could you give me a suggestion?
> Thanks in advance.
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
