[ceph-users] Ceph cluster network bandwidth?
drakonstein at gmail.com
Thu Nov 16 07:34:40 PST 2017
Another ML thread currently happening is "[ceph-users] Cluster network
slower than public network", and it has some good information that might be
useful for you.
On Thu, Nov 16, 2017 at 10:32 AM David Turner <drakonstein at gmail.com> wrote:
> That depends on another question. Does the client write all 3 copies or
> does the client send the copy to the primary OSD and then the primary OSD
> sends the write to the secondaries? Someone asked this recently, but I
> don't recall if an answer was given. I'm not actually certain which is the
> case. If it's the latter, then the 10Gb pipe from the client is all you need.
> If I had to guess, the client sends the writes to all OSDs, but that said,
> maxing the 10Gb pipe for 1 client isn't really your concern. Few use cases
> would have a single client using 100% of the bandwidth. For RGW, spin up a
> few more RGW daemons and balance them with an LB. With CephFS, the clients
> communicate with the OSDs directly, and you probably shouldn't use a network
> FS for a single client. RBD is the likely place where this could happen,
> but few 6-server deployments are being used by a single client using all of
> the RBDs. What I'm getting at is that 3 clients with 10Gb can come pretty
> close to fully saturating the 10Gb ethernet on the cluster, or at least
> getting to the point where the network pipe is not the bottleneck (OSD node
> CPU, OSD spindle speeds, etc. will be).
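
To make the bandwidth reasoning above concrete, here is a back-of-the-envelope
sketch in plain Python (not Ceph code; the function name and the 10 Gb/s figure
are illustrative assumptions). It compares the two write paths discussed: the
client sending every replica itself, versus the primary OSD forwarding copies
to the secondaries over the cluster network.

    # Illustrative sketch only, not Ceph code: models the two write paths
    # discussed above for a replicated pool.
    def write_traffic_gbps(client_write_gbps, replicas, primary_fans_out=True):
        """Return (public_net_gbps, cluster_net_gbps) for a given client write rate.

        primary_fans_out=True: the client sends one copy to the primary OSD,
        which forwards (replicas - 1) copies over the cluster network.
        primary_fans_out=False: the client sends every copy itself over the
        public network, so the cluster network carries no replication traffic.
        """
        if primary_fans_out:
            return client_write_gbps, client_write_gbps * (replicas - 1)
        return client_write_gbps * replicas, 0.0

    # One client pushing a full 10 Gb/s with 3x replication:
    print(write_traffic_gbps(10, 3))         # (10, 20): primary forwards the replicas
    print(write_traffic_gbps(10, 3, False))  # (30, 0):  client writes all copies itself

Under the fan-out model, a client link capped at 10 Gb/s turns into roughly
20 Gb/s of replication traffic spread over the cluster network, which is what
the sizing question below is really about.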
> On Thu, Nov 16, 2017 at 9:46 AM Sam Huracan <nowitzki.sammy at gmail.com> wrote:
>> We intend to build a new Ceph cluster with 6 Ceph OSD hosts, 10 SAS disks
>> per host, using a 10Gbps NIC for the client network, with objects replicated 3 times.
>> So, how should I size the cluster network for best performance?
>> As I have read, 3x replication means 3x the client network bandwidth = 30 Gbps.
>> Is that true? I think it is too much and would add a lot of cost.
>> Could you give me a suggestion?
>> Thanks in advance.
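
Applying the same rough model to the cluster described above (6 OSD hosts,
3x replication), and assuming the worst case where aggregate client writes
actually saturate the 10Gbps public NIC, the cluster-network numbers come out
as follows; treating the replica traffic as evenly spread across hosts is an
idealisation, not a guarantee.

    # Rough sizing sketch for the cluster described above. All inputs are
    # assumptions taken from the question, not measurements.
    hosts = 6
    replicas = 3
    client_writes_gbps = 10  # assumed worst case: clients saturate the 10Gbps public NIC

    cluster_net_total_gbps = client_writes_gbps * (replicas - 1)  # primary -> secondaries
    per_host_avg_gbps = cluster_net_total_gbps / hosts            # idealised even spread

    print(f"total replication traffic: {cluster_net_total_gbps} Gb/s")  # 20 Gb/s
    print(f"average per OSD host:      {per_host_avg_gbps:.1f} Gb/s")   # ~3.3 Gb/s

Under these assumptions the cluster network carries about 20 Gb/s in aggregate
rather than 30 Gb/s, because the first copy travels over the public network;
whether 10 SAS spindles per host can even generate that much write traffic is
a separate question.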