[ceph-users] optimize bluestore for random write i/o

Paul Emmerich paul.emmerich at croit.io
Tue Mar 5 01:05:44 PST 2019


This workload is probably bottlenecked by RocksDB (since the small
writes are buffered there), so that's probably what needs tuning here.
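
[Editorial note: if RocksDB is the bottleneck, the usual knob on the Ceph
side is bluestore_rocksdb_options, which passes its value verbatim to
RocksDB. A minimal ceph.conf sketch follows; the specific values are
illustrative assumptions, not recommendations, and this setting replaces
Ceph's default RocksDB option string rather than appending to it, so
benchmark before and after.]

```ini
[osd]
# Illustrative only: passed verbatim to RocksDB by BlueStore.
# More/larger write buffers (memtables) let RocksDB absorb more small
# deferred writes before flushing, at the cost of extra RAM per OSD.
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,write_buffer_size=268435456
```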


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, Mar 5, 2019 at 9:29 AM Stefan Priebe - Profihost AG
<s.priebe at profihost.ag> wrote:
>
> Hello list,
>
> while the performance of sequential 4k writes on bluestore is very high,
> and even higher than on filestore, I was wondering what I can do to
> optimize the random write pattern as well.
>
> While using:
> fio --rw=write --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4
> --filename=/tmp/test --size=10G --runtime=60 --group_reporting
> --name=test --direct=1
>
> I get 36,000 IOPS on bluestore versus 11,500 on filestore.
>
> Using randwrite gives me 17,000 on filestore but only 9,500 on bluestore.
>
> This is on an all-flash / SSD cluster running Luminous 12.2.10.
>
> Greets,
> Stefan
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com