[ceph-users] rados_read versus rados_aio_read performance

Alexander Kushnirenko kushnirenko at gmail.com
Sun Oct 1 07:47:11 PDT 2017

Hi, Gregory!

Thanks for the comment.  I compiled a simple program (from the librados
examples) to play with write-speed measurements.  The underlying "write" functions are:
rados_write(io, "hw", read_res, 1048576, i*1048576);
rados_aio_write(io, "foo", comp, read_res, 1048576, i*1048576);

So I consecutively put 1MB blocks on CEPH.  What I measured is that
rados_aio_write gives me about 5 times the speed of rados_write.  I make
128 consecutive writes in a for loop to create an object of the maximum
allowed size of 132MB.
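
For reference, a minimal sketch of the AIO variant of such a loop (assuming an
open rados_ioctx_t, a 1 MiB source buffer, and the object name "foo" from the
call above; checking the return of rados_aio_create_completion is omitted):

#include <rados/librados.h>

#define NSTRIPES 128
#define STRIPE   1048576UL

static int write_object_aio(rados_ioctx_t io, const char *buf)
{
    rados_completion_t comps[NSTRIPES];
    int i, ret;

    /* Queue every 1 MiB write without blocking; the OSDs work on them in parallel. */
    for (i = 0; i < NSTRIPES; i++) {
        rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
        ret = rados_aio_write(io, "foo", comps[i], buf, STRIPE,
                              (uint64_t)i * STRIPE);
        if (ret < 0)
            return ret;
    }

    /* Wait once, at the end, for all writes to be acknowledged. */
    for (i = 0; i < NSTRIPES; i++) {
        rados_aio_wait_for_complete(comps[i]);
        ret = rados_aio_get_return_value(comps[i]);
        rados_aio_release(comps[i]);
        if (ret < 0)
            return ret;
    }
    return 0;
}

The plain rados_write call, by contrast, blocks until each 1 MiB write is
acknowledged, so the per-operation latency adds up 128 times.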

Now, if I do consecutive writes from some client into CEPH storage, what is
the recommended buffer size?  (I'm trying to debug a very poor Bareos write
speed of just 3MB/s to CEPH.)

Thank you,

On Fri, Sep 29, 2017 at 5:18 PM, Gregory Farnum <gfarnum at redhat.com> wrote:

> It sounds like you are doing synchronous reads of small objects here. In
> that case you are dominated by the per-op latency rather than the
> throughput of your cluster. Using aio or multiple threads will let you
> parallelize requests.
> -Greg
> On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko <
> kushnirenko at gmail.com> wrote:
>> Hello,
>> We see very poor performance when reading/writing rados objects.  The
>> speed is only 3-4MB/s, compared to 95MB/s from rados benchmarking.
>> When you look at the underlying code, it uses the librados and
>> libradosstriper libraries (both have poor performance), and the code uses
>> the rados_read and rados_write functions.  If you look at the examples,
>> they recommend rados_aio_read/write.
>> Could this be the reason for poor performance?
>> Thank you,
>> Alexander.
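
The read side can be overlapped the same way.  A minimal sketch with
rados_aio_read along the lines Greg suggests above (assuming an open
rados_ioctx_t, a destination buffer large enough for the whole object, and
the same 128 x 1 MiB layout as in my test):

#include <rados/librados.h>

#define NSTRIPES 128
#define STRIPE   1048576UL

static int read_object_aio(rados_ioctx_t io, char *buf)
{
    rados_completion_t comps[NSTRIPES];
    int i, ret;

    /* Queue all reads first instead of issuing one blocking rados_read per stripe. */
    for (i = 0; i < NSTRIPES; i++) {
        rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
        ret = rados_aio_read(io, "foo", comps[i], buf + i * STRIPE, STRIPE,
                             (uint64_t)i * STRIPE);
        if (ret < 0)
            return ret;
    }

    /* Then wait for the completions; the return value is the byte count read. */
    for (i = 0; i < NSTRIPES; i++) {
        rados_aio_wait_for_complete(comps[i]);
        ret = rados_aio_get_return_value(comps[i]);
        rados_aio_release(comps[i]);
        if (ret < 0)
            return ret;
    }
    return 0;
}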