[ceph-users] Benchmark performance when using SSD as the journal

Ashley Merrick singapore at amerrick.co.uk
Tue Nov 13 20:29:39 PST 2018


Only certain SSDs are good for Ceph journals, as can be seen at
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
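
The test in that post boils down to measuring single-threaded,
queue-depth-1 synchronous direct writes, which is the I/O pattern the
journal generates. A minimal sketch along the lines of that post, using
fio (/dev/sdX is a placeholder for the journal SSD; this writes directly
to the device, so only run it against a disk with no data you care about):

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test

In the results collected on that page, drives that make good journals
sustain tens of thousands of iops on this test, while read-optimised or
consumer drives often drop to a few hundred.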

The SSD you're using isn't listed, but a quick search online suggests it
is designed for read-heavy workloads as an "upgrade" from a hard disk, so
it probably isn't built for the heavy write requirements a journal
demands. When it is being hit by the workload of 3 OSDs, you're therefore
not going to get much more performance out of it than you would using the
disk alone, which is what you're seeing.
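
One more note on the benchmark itself, regarding the rados bench commands
quoted below: the seq and rand tests read back objects left behind by an
earlier write run, so the write phase needs --no-cleanup or the read tests
have nothing to read. A sketch of the full sequence (pool name rbd and 60
seconds are just example values):

rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq
rados bench -p rbd 60 rand
rados -p rbd cleanup

Also bear in mind the journal only sits in the write path, so only the
write numbers can improve with an SSD journal; the seq and rand read
results will look much the same either way.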

On Wed, Nov 14, 2018 at 12:21 PM <Dave.Chen at dell.com> wrote:

> Hi all,
>
>
>
> We want to compare the performance of an HDD partition as the journal
> (inline on the OSD disk) against an SSD partition as the journal. Here is
> what we have done: we have 3 nodes used as Ceph OSD hosts, each with 3
> OSDs on it. First, we created the OSDs with the journal on a partition of
> the OSD disk and ran the "rados bench" utility to test performance; then
> we migrated the journal from HDD to SSD (Intel S4500) and ran "rados
> bench" again. We expected the SSD journal to perform much better than the
> HDD, but the results show nearly no change.
>
>
>
> The Ceph configuration is as follows:
>
> pool size (replicas): 3
>
> OSDs: 3 * 3 (three OSDs on each of three separate nodes)
>
> pg (pgp) num: 300
>
> rbd image size: 10G (10240M)
>
>
>
> The commands I used are:
>
> rados bench -p rbd $duration write
>
> rados bench -p rbd $duration seq
>
> rados bench -p rbd $duration rand
>
>
>
> Is there anything wrong with what I did? Could anyone give me some
> suggestions?
>
>
>
>
>
> Best Regards,
>
> Dave Chen
>
>
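As a final sanity check, it is worth confirming the journals really did
move to the SSD after the migration. On FileStore OSDs the journal is a
symlink inside each OSD's data directory, so (assuming default paths)
something like:

ls -l /var/lib/ceph/osd/ceph-*/journal

should show each link pointing at a partition on the Intel S4500 rather
than back at the HDD.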