[ceph-users] optimize bluestore for random write i/o

Mark Nelson mnelson at redhat.com
Tue Mar 5 14:12:40 PST 2019


Hi Stefan,


Could you try running your random-write workload against bluestore and 
then take a wallclock profile of an OSD using gdbpmp? It's available here:


https://github.com/markhpc/gdbpmp
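
For reference (not part of the original mail), a typical gdbpmp capture might look like the sketch below. The exact flags and the way the OSD pid is found are assumptions based on the tool's README at the time; the OSD id, sample count, and output filename are placeholders.

```shell
# Sketch of profiling a running OSD with gdbpmp (assumed flags; osd.0,
# the sample count, and the output file name are placeholders).

# Find the pid of the ceph-osd process to profile (osd.0 assumed here).
OSD_PID=$(pgrep -f 'ceph-osd.*--id 0')

# Collect wallclock samples while the fio workload is running.
# Sampling briefly pauses the process, so expect some performance impact.
./gdbpmp.py -p "$OSD_PID" -n 1000 -o osd0.gdbpmp

# Load the capture and print the sampled call tree.
./gdbpmp.py -i osd0.gdbpmp
```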


Thanks,

Mark


On 3/5/19 2:29 AM, Stefan Priebe - Profihost AG wrote:
> Hello list,
>
> while the performance of sequential 4k writes on bluestore is very high
> (even higher than on filestore), I was wondering what I can do to
> optimize the random write pattern as well.
>
> While using:
> fio --rw=write --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4
> --filename=/tmp/test --size=10G --runtime=60 --group_reporting
> --name=test --direct=1
>
> I get 36,000 IOPS on bluestore versus 11,500 on filestore.
>
> With --rw=randwrite I get 17,000 IOPS on filestore but only 9,500 on
> bluestore.
>
> This is on an all-flash / SSD cluster running Luminous 12.2.10.
>
> Greets,
> Stefan
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
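
Not part of the original thread: for anyone reproducing Stefan's numbers, the random-write counterpart of the quoted fio job changes only the --rw flag; every other parameter is taken verbatim from the quote above.

```shell
# Random-write variant of the quoted fio job: only --rw differs from the
# sequential-write command in the original mail. Writes 10G to /tmp/test.
fio --rw=randwrite --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4 \
    --filename=/tmp/test --size=10G --runtime=60 --group_reporting \
    --name=test --direct=1
```

Note that --direct=1 requires a filesystem and device that support O_DIRECT; on an RBD-backed or local SSD mount this exercises the backing OSDs rather than the page cache.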
