[ceph-users] cephfs poor performance

Yan, Zheng ukernel at gmail.com
Mon Oct 8 00:21:55 PDT 2018


On Mon, Oct 8, 2018 at 1:54 PM Tomasz Płaza <tomasz.plaza at grupawp.pl> wrote:
>
> Hi,
>
> Can someone please help me improve the performance of our CephFS
> cluster?
>
> The system in use is CentOS 7.5 with Ceph 12.2.7.
> The hardware in use is as follows:
> 3xMON/MGR:
> 1xIntel(R) Xeon(R) Bronze 3106
> 16GB RAM
> 2xSSD for system
> 1GbE NIC
>
> 2xMDS:
> 2xIntel(R) Xeon(R) Bronze 3106
> 64GB RAM
> 2xSSD for system
> 10GbE NIC
>
> 6xOSD:
> 1xIntel(R) Xeon(R) Silver 4108
> 2xSSD for system
> 6xHGST HUS726060ALE610 SATA HDDs
> 1xIntel SSDSC2BB150G7 for OSD DBs (10G partitions), with the rest of the
> SSD used as an OSD to place cephfs_metadata
> 10GbE NIC
>
> Pools (default CRUSH rules aware of device class):
> rbd with 1024 PGs, CRUSH rule replicated_hdd
> cephfs_data with 256 PGs, CRUSH rule replicated_hdd
> cephfs_metadata with 32 PGs, CRUSH rule replicated_ssd
>
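For reference, device-class rules like these are typically created with
the Luminous-era commands below (a minimal sketch; the rule and pool
names are taken from the list above, everything else is assumed):

    # Replicated rules limited to one device class, failure domain = host
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    # Point the metadata pool at the SSD-backed rule
    ceph osd pool set cephfs_metadata crush_rule replicated_ssd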
> Test done with fio: fio --randrepeat=1 --ioengine=libaio --direct=1
> --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k
> --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75
>

Which kernel version? Maybe the CephFS driver in your kernel does not
support AIO, in which case --iodepth is effectively 1.
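
A quick way to check this (a minimal sketch, assuming the same test file
and client as above): note the kernel version, then rerun the identical
fio job with --iodepth=1; if it scores about the same as the --iodepth=64
run, AIO is not taking effect on this client.

    uname -r
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=random_read_write.fio --bs=4k \
        --iodepth=1 --size=1G --readwrite=randrw --rwmixread=75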

Yan, Zheng

> shows read/write IOPS as follows:
> rbd: 3663/1223
> cephfs (FUSE): 205/68 (which is a little lower than the raw performance
> of a single HDD used in the cluster)
>
> Everything is connected to a single Cisco 10GbE switch.
> Please help.
>

