[ceph-users] Performance, and how much wiggle room there is with tunables

Robert Stanford rstanford8896 at gmail.com
Fri Nov 10 08:35:12 PST 2017


 Thank you for that excellent observation.  Are there any rumors, or has
anyone had experience with faster clusters on faster networks?  I know how
fast Ceph can get is a matter of "it depends", of course, but I'd be
curious about the numbers people have seen.

On Fri, Nov 10, 2017 at 10:31 AM, Denes Dolhay <denke at denkesys.com> wrote:

> So you are using a 40 / 100 gbit connection all the way to your client?
>
> John's question is valid because 10 Gbit/s = 1.25 GB/s. Subtract the
> Ethernet, IP, TCP and protocol overhead, take some additional network
> factors into account, and you are about there...
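That back-of-the-envelope figure can be sketched in a few lines of Python (standard header sizes and a 1500-byte MTU assumed; real goodput varies with TCP options, jumbo frames, etc.):

```python
# Rough goodput ceiling for one TCP stream over Ethernet.
MTU = 1500                      # IP packet size carried per frame
ETH_OVERHEAD = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap
IP_HDR, TCP_HDR = 20, 20        # assuming no IP/TCP options

def tcp_goodput(link_bps):
    """Bytes/sec of application payload at full line rate."""
    payload = MTU - IP_HDR - TCP_HDR          # 1460 bytes per packet
    wire = MTU + ETH_OVERHEAD                 # 1538 bytes on the wire
    return link_bps / 8 * payload / wire

print(round(tcp_goodput(10e9) / 1e6))   # 10 GbE -> ~1187 MB/s
```

So a single 10 GbE client tops out just under 1.2 GB/s of payload, which is right around the "rados bench" number reported below.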
>
>
> Denes
>
> On 11/10/2017 05:10 PM, Robert Stanford wrote:
>
>
>  The bandwidth of the network is much higher than that.  The bandwidth I
> mentioned came from "rados bench" output, under the "Bandwidth (MB/sec)"
> row.  I see from comparing mine to others online that mine is pretty good
> (relatively).  But I'd like to get much more than that.
>
> Does "rados bench" show a near maximum of what a cluster can do?  Or is it
> possible that I can tune it to get more bandwidth?
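For what it's worth, "rados bench" itself has knobs that change the reported number: `-t` sets the number of concurrent operations (default 16) and `-b` the object size (default 4 MiB), so a run along these lines may report higher throughput than the defaults if the client isn't already at its NIC limit (pool name here is a placeholder):

```shell
# Write phase; --no-cleanup keeps the objects so a read test can follow.
rados bench -p testpool 60 write -t 32 -b 4194304 --no-cleanup
# Sequential-read phase against the objects written above.
rados bench -p testpool 60 seq -t 32
# Remove the benchmark objects afterwards.
rados -p testpool cleanup
```

If raising `-t` doesn't move the number, the bottleneck is likely the client network rather than the cluster.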
>
>
> On Fri, Nov 10, 2017 at 3:43 AM, John Spray <jspray at redhat.com> wrote:
>
>> On Fri, Nov 10, 2017 at 4:29 AM, Robert Stanford
>> <rstanford8896 at gmail.com> wrote:
>> >
>> >  In my cluster, rados bench shows about 1GB/s bandwidth.  I've done some
>> > tuning:
>> >
>> > [osd]
>> > osd op threads = 8
>> > osd disk threads = 4
>> > osd recovery max active = 7
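
A sketch of how those values can be confirmed at runtime, via the admin socket on an OSD node (option availability varies by release; some of these threading options were removed or renamed in later Ceph versions):

```shell
# Show the running value of a single option on osd.0 (name assumed
# to match the ceph.conf key, with spaces replaced by underscores).
ceph daemon osd.0 config get osd_op_threads
# Or grep the full running configuration.
ceph daemon osd.0 config show | grep -E 'osd_(op|disk)_threads'
```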
>> >
>> >
>> > I was hoping to get much better bandwidth.  My network can handle it,
>> and my
>> > disks are pretty fast as well.  Are there any major tunables I can play
>> with
>> > to increase what will be reported by "rados bench"?  Am I pretty much
>> stuck
>> > around the bandwidth it reported?
>>
>> Are you sure your 1GB/s isn't just the NIC bandwidth limit of the
>> client you're running rados bench from?
>>
>> John
>>
>> >
>> >  Thank you
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users at lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>
>
>
>

