[ceph-users] Bluestore performance 50% of filestore

Milanov, Radoslav Nikiforov radonm at bu.edu
Tue Nov 14 13:41:52 PST 2017


16 MB blocks, single thread, sequential writes - this is what it looks like:



[inline image: image001.emz - chart of the Windows VM sequential write results]
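(For reference, a roughly equivalent fio invocation for that access pattern - the exact tool and file size used in the Windows VM aren't shown here, so these values are placeholders - would be something like:

fio --name=seq_write_16m --rw=write --bs=16M --size=10G --numjobs=1 --iodepth=1 --direct=1)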



- Rado



-----Original Message-----
From: Mark Nelson [mailto:mnelson at redhat.com]
Sent: Tuesday, November 14, 2017 4:36 PM
To: Milanov, Radoslav Nikiforov <radonm at bu.edu>; ceph-users at lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore



How big were the writes in the Windows test, and how much concurrency was there?



Historically bluestore does pretty well for us with small random writes, so your write results surprise me a bit.  I suspect it's the low queue depth.  Sometimes bluestore does worse with reads, especially if readahead isn't enabled on the client.
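For what it's worth, client-side RBD readahead is controlled from the [client] section of ceph.conf on the hypervisor; something along these lines (the values are illustrative, not recommendations):

[client]
# number of sequential requests before readahead kicks in (default 10)
rbd readahead trigger requests = 10
# maximum readahead size in bytes; 0 disables readahead
rbd readahead max bytes = 4194304
# stop doing readahead after this many bytes have been read; 0 = never disable
rbd readahead disable after bytes = 0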



Mark



On 11/14/2017 03:14 PM, Milanov, Radoslav Nikiforov wrote:

> Hi Mark,

> Yes, the RBD cache is in writeback mode, and the only thing that changed was converting the OSDs to bluestore. These are 7200 RPM drives with triple replication. I also get the same results (bluestore about 2 times slower) when testing continuous writes on a 40 GB partition in a Windows VM with a completely different tool.

>

> Right now I'm converting the OSDs back to filestore, so additional tests are possible if that helps.

>

> - Rado

>

> -----Original Message-----

> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf Of Mark Nelson

> Sent: Tuesday, November 14, 2017 4:04 PM

> To: ceph-users at lists.ceph.com

> Subject: Re: [ceph-users] Bluestore performance 50% of filestore

>

> Hi Radoslav,

>

> Is RBD cache enabled and in writeback mode?  Do you have client-side readahead?
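> For reference, the client-side cache settings involved live in the [client] section of ceph.conf and would look roughly like this (defaults quoted from memory, worth double-checking):
>
> [client]
> # enable the librbd cache on the client
> rbd cache = true
> # stay in writethrough mode until the guest issues its first flush
> rbd cache writethrough until flush = true
> # dirty byte limit; a value > 0 permits writeback behaviour
> rbd cache max dirty = 25165824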

>

> Both are doing better for writes than you'd expect from the native performance of the disks, assuming they are typical 7200 RPM drives and you are using 3x replication (~150 IOPS * 27 / 3 = ~1350 IOPS).  Given the small file size, I'd expect that you might be getting better journal coalescing in filestore.

>

> Sadly I imagine you can't do a comparison test at this point, but I'd be curious how it would look if you used libaio with a high iodepth and a much bigger partition to do random writes over.
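> Something along these lines, for example (partition size and queue depth are just placeholders):
>
> fio --name randwrite_qd32 --ioengine=libaio --iodepth=32 --direct=1 --rw=randwrite --bs=4k --size=40G --numjobs=1 --time_based --runtime=180 --group_reporting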

>

> Mark

>

> On 11/14/2017 01:54 PM, Milanov, Radoslav Nikiforov wrote:

>> Hi

>>

>> We have a 3-node, 27-OSD cluster running Luminous 12.2.1.

>>

>> In the filestore configuration there are 3 SSDs used for the journals of 9
>> OSDs on each host (1 SSD has 3 journal partitions for 3 OSDs).

>>

>> I've converted filestore to bluestore by wiping 1 host at a time and
>> waiting for recovery. The SSDs now contain the block.db - again, one SSD
>> serving 3 OSDs.
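>> For reference, recreating one of these OSDs on Luminous with ceph-volume would look roughly like this (device paths below are placeholders):
>>
>> ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdj1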

>>

>>

>>

>> Cluster is used as storage for Openstack.

>>

>> Running fio on a VM in that Openstack reveals bluestore performance

>> almost twice slower than filestore.

>>

>> fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=1G --numjobs=2 --time_based --runtime=180 --group_reporting
>>
>> fio --name fio_test_file --direct=1 --rw=randread --bs=4k --size=1G --numjobs=2 --time_based --runtime=180 --group_reporting

>>

>>

>>

>>

>>

>> Filestore

>>

>>   write: io=3511.9MB, bw=19978KB/s, iops=4994, runt=180001msec

>>

>>   write: io=3525.6MB, bw=20057KB/s, iops=5014, runt=180001msec

>>

>>   write: io=3554.1MB, bw=20222KB/s, iops=5055, runt=180016msec

>>

>>

>>

>>   read : io=1995.7MB, bw=11353KB/s, iops=2838, runt=180001msec

>>

>>   read : io=1824.5MB, bw=10379KB/s, iops=2594, runt=180001msec

>>

>>   read : io=1966.5MB, bw=11187KB/s, iops=2796, runt=180001msec

>>

>>

>>

>> Bluestore

>>

>>   write: io=1621.2MB, bw=9222.3KB/s, iops=2305, runt=180002msec

>>

>>   write: io=1576.3MB, bw=8965.6KB/s, iops=2241, runt=180029msec

>>

>>   write: io=1531.9MB, bw=8714.3KB/s, iops=2178, runt=180001msec

>>

>>

>>

>>   read : io=1279.4MB, bw=7276.5KB/s, iops=1819, runt=180006msec

>>

>>   read : io=773824KB, bw=4298.9KB/s, iops=1074, runt=180010msec

>>

>>   read : io=1018.5MB, bw=5793.7KB/s, iops=1448, runt=180001msec

>>

>>

>>

>>

>>

>> - Rado

>>

>>

>>

>>

>>
