[ceph-users] Benchmark performance when using SSD as the journal

vitalif at yourcmc.ru
Wed Nov 14 03:35:48 PST 2018


Hi Dave,

The main line in SSD specs you should look at is

> Enhanced Power Loss Data Protection: Yes

This makes the SSD's write cache effectively non-volatile, so the drive 
can safely ignore fsync()s and transactional (sync) write performance 
becomes equal to non-transactional. So your SSDs should be OK for the 
journal.
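
If you want to sanity-check a drive before trusting it with journals, 
you can run a sync 4k write test directly against it with fio. This is 
only a sketch: the device name /dev/sdX is a placeholder, and the test 
is destructive, so only point it at an empty drive:

  fio -ioengine=libaio -direct=1 -sync=1 -rw=write -bs=4k \
      -iodepth=1 -numjobs=1 -runtime=60 -time_based \
      -name=journal-test -filename=/dev/sdX

A drive with power loss protection usually sustains thousands to tens 
of thousands of IOPS in this test, while consumer drives without it 
often drop to a few hundred.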

rados bench is a poor tool for this kind of testing because of its 4M 
default block size and the very small number of objects it creates. 
Better test with fio -ioengine=rbd -bs=4k -rw=randwrite, using -sync=1 
-iodepth=1 to measure latency or -iodepth=128 for maximum random load; 
see the example below.
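
For example (a sketch: the pool and image names are placeholders, the 
image must already exist, and -clientname=admin assumes the default 
admin keyring):

  fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=testimg \
      -direct=1 -bs=4k -rw=randwrite -sync=1 -iodepth=1 \
      -runtime=60 -time_based -name=rbd-lat

Swap -sync=1 -iodepth=1 for -iodepth=128 to get the maximum random 
write load instead of single-request latency.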

Another thing I discovered recently is that turning off the write 
cache on all drives (for i in /dev/sd*; do hdparm -W 0 $i; done) 
increased write IOPS by an order of magnitude.
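
Note that the hdparm -W 0 setting does not necessarily survive a power 
cycle, so you may want a udev rule to reapply it automatically. A 
minimal sketch (the hdparm path can differ between distributions):

  # /etc/udev/rules.d/99-disable-write-cache.rules
  ACTION=="add|change", KERNEL=="sd[a-z]", RUN+="/usr/sbin/hdparm -W 0 /dev/%k"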

> Hi all,
> 
> We want to compare the performance of an HDD partition as the journal
> (inline on the OSD disk) against an SSD partition as the journal. Here
> is what we have done: we have 3 nodes used as Ceph OSD hosts, each
> with 3 OSDs on it. First we created the OSDs with the journal on a
> partition of the OSD disk and ran the "rados bench" utility to test
> performance, then migrated the journal from HDD to SSD (Intel S4500)
> and ran "rados bench" again. We expected the SSD journal to be much
> better than the HDD, but the results show nearly no change.
> 
> The configuration of Ceph is as below,
> 
> pool size: 3
> 
> osd size: 3*3
> 
> pg (pgp) num: 300
> 
> OSDs are spread across three different nodes
> 
> rbd image size: 10G (10240M)
> 
> The utility I used is,
> 
> rados bench -p rbd $duration write
> 
> rados bench -p rbd $duration seq
> 
> rados bench -p rbd $duration rand
> 
> Is there anything wrong with what I did?  Could anyone give me some
> suggestions?
> 
> Best Regards,
> 
> Dave Chen
