[ceph-users] cephfs poor performance

Marc Roos M.Roos at f1-outsourcing.eu
Mon Oct 8 00:35:01 PDT 2018


 
That is easy I think, so I will give it a try:

Faster CPUs, fast NVMe disks, all 10Gbit or even better 100Gbit, 
plus a daily prayer.




-----Original Message-----
From: Tomasz Płaza [mailto:tomasz.plaza at grupawp.pl] 
Sent: Monday, 8 October 2018 07:46
To: ceph-users at lists.ceph.com
Subject: [ceph-users] cephfs poor performance

Hi,

Can someone please help me improve the performance of our CephFS 
cluster?

The system in use is CentOS 7.5 with Ceph 12.2.7.
The hardware in use is as follows:
3xMON/MGR:
1xIntel(R) Xeon(R) Bronze 3106
16GB RAM
2xSSD for system
1GbE NIC

2xMDS:
2xIntel(R) Xeon(R) Bronze 3106
64GB RAM
2xSSD for system
10GbE NIC

6xOSD:
1xIntel(R) Xeon(R) Silver 4108
2xSSD for system
6xHGST HUS726060ALE610 SATA HDDs
1xIntel SSDSC2BB150G7 for OSD DBs (10G partitions); the rest of the SSD 
is used as an OSD for cephfs_metadata
10GbE NIC

pools (default crush rule aware of device class):
rbd with 1024 pg, crush rule replicated_hdd
cephfs_data with 256 pg, crush rule replicated_hdd
cephfs_metadata with 32 pg, crush rule replicated_ssd
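
For reference, a sketch of how such device-class crush rules and pools 
are typically created on Luminous (rule names and pg counts as above; 
not necessarily the exact commands used on this cluster):

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool create rbd 1024 1024 replicated replicated_hdd
ceph osd pool create cephfs_data 256 256 replicated replicated_hdd
ceph osd pool create cephfs_metadata 32 32 replicated replicated_ssd
ceph fs new cephfs cephfs_metadata cephfs_data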

Test done by fio:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=random_read_write.fio --bs=4k \
    --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75

It shows write/read IOPS as follows:
rbd: 3663/1223
cephfs (fuse): 205/68 (which is a little lower than the raw performance 
of a single HDD used in the cluster)
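
For comparison, the same fio job can also be run on a kernel CephFS 
mount instead of ceph-fuse; a minimal sketch, assuming a monitor 
reachable at mon1 and an admin secret file (both are placeholders):

mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
cd /mnt/cephfs
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=random_read_write.fio --bs=4k \
    --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75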

Everything is connected to one Cisco 10GbE switch.
Please help.

_______________________________________________
ceph-users mailing list
ceph-users at lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



