[ceph-users] Benchmark performance when using SSD as the journal

Dave.Chen at Dell.com Dave.Chen at Dell.com
Wed Nov 14 01:19:06 PST 2018


Thanks Mokhtar! This is what I am looking for, thanks for your explanation!


Best Regards,
Dave Chen

From: Maged Mokhtar <mmokhtar at petasan.org>
Sent: Wednesday, November 14, 2018 3:36 PM
To: Chen2, Dave; ceph-users at lists.ceph.com
Subject: Re: [ceph-users] Benchmark performance when using SSD as the journal



Hi Dave,

The SSD journal will help boost IOPS and latency, which will be more apparent for small block sizes. The rados bench default block size is 4M; use the -b option to specify the size. Try 4k, 32k, 64k ...
As a side note, this is a RADOS-level test, so the rbd image size is not relevant here.
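For example, a small-block write test could look roughly like this (a 60 second duration is assumed here; -b takes the object size in bytes):

rados bench -p rbd 60 write -b 4096    # 4k objects
rados bench -p rbd 60 write -b 32768   # 32k objects
rados bench -p rbd 60 write -b 65536   # 64k objects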

Maged.
On 14/11/18 06:21, Dave.Chen at Dell.com wrote:
Hi all,

We want to compare the performance of an HDD partition as the journal (inline on the OSD disk) against an SSD partition as the journal. Here is what we have done: we have 3 nodes used as Ceph OSD hosts, each with 3 OSDs. First, we created the OSDs with the journal on a partition of the OSD disk and ran the "rados bench" utility to test the performance; then we migrated the journal from HDD to SSD (Intel S4500) and ran "rados bench" again. The expected result was that the SSD journal would perform much better than the HDD, but the results show nearly no change.
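For reference, a typical per-OSD FileStore journal migration is sketched below (the OSD id 0 and the SSD partition path are placeholders, not our actual values):

systemctl stop ceph-osd@0            # stop the OSD
ceph-osd -i 0 --flush-journal        # flush the old journal to the data disk
ln -sf /dev/disk/by-partuuid/<ssd-partition-uuid> /var/lib/ceph/osd/ceph-0/journal   # repoint the journal symlink at the SSD partition
ceph-osd -i 0 --mkjournal            # create the new journal on the SSD
systemctl start ceph-osd@0           # start the OSD again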

The configuration of Ceph is as below,
pool size: 3
osd count: 3 * 3 (3 OSDs on each of 3 nodes)
pg (pgp) num: 300
OSDs are distributed across three separate nodes
rbd image size: 10G (10240M)
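A setup matching this configuration could be created roughly as follows (the image name bench-image is hypothetical):

ceph osd pool create rbd 300 300        # 300 pgs and pgps
ceph osd pool set rbd size 3            # 3 replicas
rbd create bench-image --size 10240     # 10G image (rados bench itself does not use it)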

The commands I used are:
rados bench -p rbd $duration write
rados bench -p rbd $duration seq
rados bench -p rbd $duration rand
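The seq and rand tests read back the objects left by a previous write run, so the write benchmark is run with --no-cleanup first; the full sequence looks roughly like this (a 60 second duration is assumed):

rados bench -p rbd 60 write --no-cleanup   # keep objects for the read tests
rados bench -p rbd 60 seq                  # sequential reads of those objects
rados bench -p rbd 60 rand                 # random reads of those objects
rados -p rbd cleanup                       # remove the benchmark objects afterwards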

Is there anything wrong with what I did? Could anyone give me some suggestions?


Best Regards,
Dave Chen





_______________________________________________
ceph-users mailing list
ceph-users at lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
