[ceph-users] Poor ceph cluster performance

Vitaliy Filippov vitalif at yourcmc.ru
Tue Nov 27 03:31:25 PST 2018

> CPU: 2 x E5-2603 @1.8GHz
> RAM: 16GB
> Network: 1G port shared for Ceph public and cluster traffics
> Journaling device: 1 x 120GB SSD (SATA3, consumer grade)
> OSD device: 2 x 2TB 7200rpm spindle (SATA3, consumer grade)

0.84 MB/s sequential write is impossibly bad; that's not normal with any
kind of device, even over a 1G network. You probably have some kind of
problem in your setup: maybe the network RTT is very high, maybe the OSD
or MON nodes are shared with other running tasks and overloaded, or maybe
your disks are already dying... :))
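A few quick sanity checks for those causes (a sketch only; `mon1` and
`/dev/sdb` are placeholders for one of your monitor hosts and one of your
OSD data disks):

```shell
# RTT to a monitor node -- on a healthy 1G LAN this should be well under 1 ms
ping -c 10 mon1

# Disk utilization and latency on an OSD device -- look for %util near 100
# or await in the tens/hundreds of ms while the benchmark runs
iostat -x 1 5 /dev/sdb

# SMART health of a consumer SATA disk -- reallocated or pending sectors
# are a sign the drive is on its way out
smartctl -a /dev/sdb
```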

> As I moved on to test block devices, I got a following error message:
> # rbd map image01 --pool testbench --name client.admin

You don't need to map it to run benchmarks, use `fio --ioengine=rbd`  
(however you'll still need /etc/ceph/ceph.client.admin.keyring)
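For example, a sequential-write benchmark against the image from the
original post could look something like this (a sketch; the `rbd` ioengine
options shown -- `pool`, `rbdname`, `clientname` -- match the pool/image
names quoted above, adjust as needed):

```shell
fio --ioengine=rbd \
    --clientname=admin \
    --pool=testbench \
    --rbdname=image01 \
    --name=seqwrite \
    --rw=write --bs=4M --iodepth=16 --direct=1 \
    --runtime=60 --time_based
```

This talks to the cluster through librbd directly, so no kernel mapping
(and no `rbd map`) is involved.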

With best regards,
   Vitaliy Filippov
