[ceph-users] Performance, and how much wiggle room there is with tunables

Maged Mokhtar mmokhtar at petasan.org
Fri Nov 10 10:07:50 PST 2017


rados bench is a client application that simulates client I/O to
stress the cluster. This applies whether you run the test from an
external client or from a cluster server acting as a client. For
fast clusters, the client will saturate (CPU/network) before the
cluster does. To get accurate results it is better to run client
sweeps: run the test in steps, adding one client per step, and
aggregate the output of each step. For small clusters the numbers
will saturate quickly; for larger clusters they will converge more
slowly, but in practice you can deduce where they are heading. It is
also best to run the clients from real client machines rather than
from cluster servers, so you do not overstress your servers and you
get more accurate results, but again practicality may limit this. 
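The per-step aggregation can be sketched as follows — a minimal script, assuming each client's rados bench output has been captured to a string (rados bench prints a "Bandwidth (MB/sec):" summary line at the end of a run):

```python
import re

# rados bench ends each run with a summary line such as
# "Bandwidth (MB/sec):   1024.5"; sum it across the clients of one step.
BANDWIDTH_RE = re.compile(r"Bandwidth \(MB/sec\):\s+([0-9.]+)")

def client_bandwidth(output: str) -> float:
    """Extract the bandwidth figure from one client's rados bench output."""
    match = BANDWIDTH_RE.search(output)
    if match is None:
        raise ValueError("no bandwidth summary found in output")
    return float(match.group(1))

def step_bandwidth(outputs: list) -> float:
    """Aggregate bandwidth for one sweep step (all clients ran concurrently)."""
    return sum(client_bandwidth(o) for o in outputs)
```

Running steps with 1, 2, 3, ... concurrent clients and plotting step_bandwidth against the client count shows where the cluster flattens out.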

It is also beneficial to measure your resource loads: CPU
utilization, disk busy %, and network utilization, using a tool such
as atop, collectl, or sysstat.  
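As a rough illustration of what those tools report, CPU busy % can be derived from two samples of the first line of /proc/stat (a Linux-only sketch; use atop or collectl for real measurements):

```python
def cpu_busy_percent(sample_before: str, sample_after: str) -> float:
    """Compute CPU busy % between two 'cpu ...' lines from /proc/stat.

    Fields are cumulative jiffies: user nice system idle iowait irq softirq.
    idle + iowait count as idle time; everything else counts as busy.
    """
    def totals(line):
        fields = [int(x) for x in line.split()[1:]]
        idle = fields[3] + fields[4]      # idle + iowait
        return sum(fields), idle

    total0, idle0 = totals(sample_before)
    total1, idle1 = totals(sample_after)
    dtotal = total1 - total0
    didle = idle1 - idle0
    return 100.0 * (dtotal - didle) / dtotal
```

Sampling this per OSD host while the sweep runs tells you whether the cluster or the benchmark client is the bottleneck.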

There are tools to automate this client sweeping, the aggregation of
results, and the gathering of resource loads, most notably the Ceph
Benchmarking Tool:  

https://github.com/ceph/cbt 

As for tunables, there are various recommendations for configuration
parameters for Jewel and earlier; I have not seen any for Luminous yet.
There are also various kernel sysctl.conf recommendations for use with
Ceph.  
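For example, the commonly circulated network-related sysctl settings look like the fragment below (values are illustrative only, not a recommendation for any specific hardware; test before applying):

```
# /etc/sysctl.d/90-ceph.conf -- illustrative values only
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.somaxconn = 1024
vm.swappiness = 10
```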

/Maged  

On 2017-11-10 18:36, Robert Stanford wrote:

> But sorry, this was about "rados bench" which is run inside the Ceph cluster.  So there's no network between the "client" and my cluster. 
> 
> On Fri, Nov 10, 2017 at 10:35 AM, Robert Stanford <rstanford8896 at gmail.com> wrote:
> 
> Thank you for that excellent observation.  Are there any rumors / has anyone had experience with faster clusters, on faster networks?  I wonder how fast Ceph can get ("it depends", of course), but I wonder about the numbers people have seen. 
> 
> On Fri, Nov 10, 2017 at 10:31 AM, Denes Dolhay <denke at denkesys.com> wrote:
> 
> So you are using a 40 / 100 gbit connection all the way to your client? 
> 
> John's question is valid because 10 gbit = 1.25 GB/s ... subtract some Ethernet, IP, TCP and protocol overhead, take into account some additional network factors, and you are about there...
> 
> Denes
> 
> On 11/10/2017 05:10 PM, Robert Stanford wrote: 
> 
> The bandwidth of the network is much higher than that.  The bandwidth I mentioned came from "rados bench" output, under the "Bandwidth (MB/sec)" row.  I see from comparing mine to others online that mine is pretty good (relatively).  But I'd like to get much more than that.
> 
> Does "rados bench" show a near maximum of what a cluster can do?  Or is it possible that I can tune it to get more bandwidth?
> 
> On Fri, Nov 10, 2017 at 3:43 AM, John Spray <jspray at redhat.com> wrote:
> On Fri, Nov 10, 2017 at 4:29 AM, Robert Stanford
> <rstanford8896 at gmail.com> wrote:
>> 
>> In my cluster, rados bench shows about 1GB/s bandwidth.  I've done some
>> tuning:
>> 
>> [osd]
>> osd op threads = 8
>> osd disk threads = 4
>> osd recovery max active = 7
>> 
>> 
>> I was hoping to get much better bandwidth.  My network can handle it, and my
>> disks are pretty fast as well.  Are there any major tunables I can play with
>> to increase what will be reported by "rados bench"?  Am I pretty much stuck
>> around the bandwidth it reported?
> 
> Are you sure your 1GB/s isn't just the NIC bandwidth limit of the
> client you're running rados bench from?
> 
> John
> 
>> 
>> Thank you
>> 
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [1]
>> 
> 



  

Links:
------
[1] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com