[ceph-users] Performance, and how much wiggle room there is with tunables

Robert Stanford rstanford8896 at gmail.com
Mon Nov 13 08:11:20 PST 2017


ceph osd pool create scbench 100 100
rados bench -p scbench 10 write --no-cleanup
rados bench -p scbench 10 seq
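
For completeness, the same run sketched out with the random-read test and
cleanup added (the pool name and PG counts here are arbitrary examples, not
recommendations for any particular cluster):

```shell
# Create a throwaway pool for benchmarking (100 placement groups).
ceph osd pool create scbench 100 100

# 10-second write test; --no-cleanup keeps the objects so the
# read tests below have data to read back.
rados bench -p scbench 10 write --no-cleanup

# Sequential and random read tests against the objects written above.
rados bench -p scbench 10 seq
rados bench -p scbench 10 rand

# Remove the benchmark objects when finished.
rados -p scbench cleanup
```

Note that the seq and rand tests can only read what the write test left
behind, so run the write with --no-cleanup first.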


On Mon, Nov 13, 2017 at 1:28 AM, Rudi Ahlers <rudiahlers at gmail.com> wrote:

> Would you mind telling me what rados command set you use, and share the
> output? I would like to compare it to our server as well.
>
> On Fri, Nov 10, 2017 at 6:29 AM, Robert Stanford <rstanford8896 at gmail.com>
> wrote:
>
>>
>>  In my cluster, rados bench shows about 1GB/s bandwidth.  I've done some
>> tuning:
>>
>> [osd]
>> osd op threads = 8
>> osd disk threads = 4
>> osd recovery max active = 7
>>
>>
>> I was hoping to get much better bandwidth.  My network can handle it, and
>> my disks are pretty fast as well.  Are there any major tunables I can play
>> with to increase what will be reported by "rados bench"?  Am I pretty much
>> stuck around the bandwidth it reported?
>>
>>  Thank you
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
>