[ceph-users] rbd cache limiting IOPS

Alexandre DERUMIER aderumier at odiso.com
Fri Mar 8 06:03:23 PST 2019

>>Which options do we have to increase IOPS while writeback cache is used?

If I remember correctly, there is some kind of global lock/mutex in the rbd cache code,

and I think there is some work currently underway to improve it.

(I think I saw a PR about this on the performance meeting pad some months ago.)

----- Original Message -----
From: "Engelmann Florian" <florian.engelmann at everyware.ch>
To: "ceph-users" <ceph-users at lists.ceph.com>
Sent: Thursday, 7 March 2019 11:41:41
Subject: [ceph-users] rbd cache limiting IOPS


we are running an OpenStack environment with Ceph block storage. There 
are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a 
P4800X Optane for RocksDB and the WAL. 
The decision was made to use the rbd writeback cache with KVM/QEMU. The 
write latency is incredibly good (~85 µs) and the read latency is still 
good (~0.6 ms). But we are limited to ~23,000 IOPS inside a KVM machine. So 
we ran the same fio benchmark with the rbd cache disabled and got 
65,000 IOPS, but of course the write latency (QD1) increased to ~0.6 ms. 
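For reference, a QD1 4k random-write run like the one described could be expressed as a fio job file roughly like this (a sketch only; the actual job file is not shown in the original mail, and the target device /dev/vdb is an assumption):

```ini
; hypothetical fio job approximating the benchmark described above:
; 4k random writes, queue depth 1, direct I/O, fixed runtime
[global]
ioengine=libaio
direct=1
runtime=60
time_based=1

[qd1-randwrite]
filename=/dev/vdb
rw=randwrite
bs=4k
iodepth=1
```

With iodepth=1 the reported completion latency (clat) corresponds to the per-write latency quoted above; raising iodepth shows the throughput ceiling instead.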
We tried to tune: 

rbd cache size -> 256MB 
rbd cache max dirty -> 192MB 
rbd cache target dirty -> 128MB 

but we are still locked at ~23,000 IOPS with the writeback cache enabled. 
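For reference, the tuned values above would typically be set in the client-side ceph.conf, roughly like this (a sketch; the byte values are the settings quoted above, expressed in the usual bytes notation):

```ini
# client-side ceph.conf fragment with the cache sizes quoted above
[client]
rbd cache = true
rbd cache size = 268435456          # 256 MB
rbd cache max dirty = 201326592     # 192 MB
rbd cache target dirty = 134217728  # 128 MB
```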

Right now we are not sure whether the tuned settings have been honoured. 
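One way to check whether the running librbd client actually picked the values up is to query its admin socket, assuming an asok is configured for the client (the socket path below is an assumption and depends on your [client] admin socket setting):

```
# query the librbd client's admin socket for the effective cache settings;
# the socket path here is hypothetical
ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show \
  | grep rbd_cache
```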

Which options do we have to increase IOPS while writeback cache is used? 

All the best, 
