[ceph-users] how to improve performance

Rudi Ahlers rudiahlers at gmail.com
Mon Nov 20 03:53:04 PST 2017


root at virt2:~# iperf -c 10.10.10.81
------------------------------------------------------------
Client connecting to 10.10.10.81, TCP port 5001
TCP window size: 1.78 MByte (default)
------------------------------------------------------------
[  3] local 10.10.10.82 port 57132 connected with 10.10.10.81 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.5 GBytes  9.02 Gbits/sec
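
That single-stream iperf run already pushes close to line rate, so the 10GbE link itself looks fine. To answer the MTU question and rule out a single-stream limit, something along these lines should show both (a sketch; ens2f0 is the interface name from the traffic statistics below, and -P 4 asks iperf for four parallel TCP streams):

ip link show ens2f0 | grep mtu       # 1500 = default, 9000 = jumbo frames
iperf -c 10.10.10.81 -P 4 -t 30      # four parallel streams for 30 seconds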


On Mon, Nov 20, 2017 at 1:22 PM, Sébastien VIGNERON <
sebastien.vigneron at criann.fr> wrote:

> Hi,
>
> MTU size? Did you run an iperf test to see the raw bandwidth?
>
> Cordialement / Best regards,
>
> Sébastien VIGNERON
> CRIANN,
> Ingénieur / Engineer
> Technopôle du Madrillet
> 745, avenue de l'Université
>
> 76800 Saint-Etienne du Rouvray - France
>
> tél. +33 2 32 91 42 91
> fax. +33 2 32 91 42 92
> http://www.criann.fr
> mailto:sebastien.vigneron at criann.fr
> support: support at criann.fr
>
> On 20 Nov 2017, at 11:58, Rudi Ahlers <rudiahlers at gmail.com> wrote:
>
> As a matter of interest, when I ran the test the network throughput reached
> 3.98 Gbit/s:
>
>  ens2f0  /  traffic statistics
>
>                            rx         |       tx
> --------------------------------------+------------------
>   bytes                     2.59 GiB  |        4.63 GiB
> --------------------------------------+------------------
>           max            2.29 Gbit/s  |     3.98 Gbit/s
>       average          905.58 Mbit/s  |     1.62 Gbit/s
>           min             203 kbit/s  |      186 kbit/s
> --------------------------------------+------------------
>   packets                    1980792  |         3354372
> --------------------------------------+------------------
>           max             207630 p/s  |      342902 p/s
>       average              82533 p/s  |      139765 p/s
>           min                 51 p/s  |          56 p/s
> --------------------------------------+------------------
>   time                    24 seconds
>
> Some more stats:
>
> root at virt2:~# rados bench -p Data 10 seq
> hints = 1
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1      16       402       386   1543.69      1544  0.00182802   0.0395421
>     2      16       773       757   1513.71      1484  0.00243911   0.0409455
> Total time run:       2.340037
> Total reads made:     877
> Read size:            4194304
> Object size:          4194304
> Bandwidth (MB/sec):   1499.12
> Average IOPS:         374
> Stddev IOPS:          10
> Max IOPS:             386
> Min IOPS:             371
> Average Latency(s):   0.0419036
> Max latency(s):       0.176739
> Min latency(s):       0.00161271
>
>
>
>
> root at virt2:~# rados bench -p Data 10 rand
> hints = 1
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1      16       376       360   1439.71      1440   0.0356502   0.0409024
>     2      16       752       736   1471.74      1504   0.0163304   0.0419063
>     3      16      1134      1118   1490.43      1528    0.059643   0.0417043
>     4      16      1515      1499   1498.78      1524   0.0502131   0.0416087
>     5      15      1880      1865   1491.79      1464    0.017407   0.0414158
>     6      16      2254      2238   1491.79      1492   0.0657474   0.0420471
>     7      15      2509      2494   1424.95      1024  0.00182097   0.0440063
>     8      15      2873      2858   1428.81      1456   0.0302541   0.0439319
>     9      15      3243      3228   1434.47      1480    0.108037   0.0438106
>    10      16      3616      3600   1439.81      1488   0.0295953   0.0436184
> Total time run:       10.058519
> Total reads made:     3616
> Read size:            4194304
> Object size:          4194304
> Bandwidth (MB/sec):   1437.99
> Average IOPS:         359
> Stddev IOPS:          37
> Max IOPS:             382
> Min IOPS:             256
> Average Latency(s):   0.0438002
> Max latency(s):       0.664223
> Min latency(s):       0.00156885
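>
> All of the read runs above use the rados bench default of 16 concurrent ops. If client-side concurrency is the limiter rather than the OSDs, raising it should show up immediately; a quick variation, just as a sketch, would be:
>
>    rados bench -p Data 10 rand -t 32
>
> If bandwidth scales with -t the disks still have headroom; if it stays flat, the bottleneck is elsewhere.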
>
>
>
>
>
>
>
> On Mon, Nov 20, 2017 at 12:38 PM, Rudi Ahlers <rudiahlers at gmail.com>
> wrote:
>
>> Hi,
>>
>> Can someone please help me: how do I improve performance on our Ceph
>> cluster?
>>
>> The hardware in use are as follows:
>> 3x SuperMicro servers, each with the following configuration:
>> 12-core dual Xeon 2.2 GHz
>> 128GB RAM
>> 2x 400GB Intel DC SSD drives
>> 4x 8TB Seagate 7200rpm 6Gbps SATA HDDs
>> 1x SuperMicro DOM for Proxmox / Debian OS
>> 4-port 10GbE NIC
>> Cisco 10GbE switch.
>>
>>
>> root at virt2:~# rados bench -p Data 10 write --no-cleanup
>> hints = 1
>> Maintaining 16 concurrent writes of 4194304 bytes to objects of size
>> 4194304 for up to 10 seconds or 0 objects
>> Object prefix: benchmark_data_virt2_39099
>>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>>     0       0         0         0         0         0           -          0
>>     1      16        85        69   275.979       276    0.185576   0.204146
>>     2      16       171       155   309.966       344   0.0625409   0.193558
>>     3      16       243       227   302.633       288   0.0547129    0.19835
>>     4      16       330       314   313.965       348   0.0959492   0.199825
>>     5      16       413       397   317.565       332    0.124908   0.196191
>>     6      16       494       478   318.633       324      0.1556   0.197014
>>     7      15       591       576   329.109       392    0.136305   0.192192
>>     8      16       670       654   326.965       312   0.0703808   0.190643
>>     9      16       757       741   329.297       348    0.165211   0.192183
>>    10      16       828       812   324.764       284   0.0935803   0.194041
>> Total time run:         10.120215
>> Total writes made:      829
>> Write size:             4194304
>> Object size:            4194304
>> Bandwidth (MB/sec):     327.661
>> Stddev Bandwidth:       35.8664
>> Max bandwidth (MB/sec): 392
>> Min bandwidth (MB/sec): 276
>> Average IOPS:           81
>> Stddev IOPS:            8
>> Max IOPS:               98
>> Min IOPS:               69
>> Average Latency(s):     0.195191
>> Stddev Latency(s):      0.0830062
>> Max latency(s):         0.481448
>> Min latency(s):         0.0414858
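>>
>> For what it's worth, that write figure is in the ballpark of what the spindles can deliver: assuming the Data pool uses the default 3x replication (an assumption, it can be confirmed with the command below), 327 MB/s of client writes turns into roughly 980 MB/s of raw writes spread over 10 HDD OSDs, i.e. about 100 MB/s per 7200rpm SATA drive before BlueStore WAL/DB overhead, which is close to what such drives sustain sequentially.
>>
>>    ceph osd pool get Data size
>>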
>> root at virt2:~# hdparm -I /dev/sda
>>
>>
>>
>> root at virt2:~# ceph osd tree
>> ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF
>> -1       72.78290 root default
>> -3       29.11316     host virt1
>>  1   hdd  7.27829         osd.1      up  1.00000 1.00000
>>  2   hdd  7.27829         osd.2      up  1.00000 1.00000
>>  3   hdd  7.27829         osd.3      up  1.00000 1.00000
>>  4   hdd  7.27829         osd.4      up  1.00000 1.00000
>> -5       21.83487     host virt2
>>  5   hdd  7.27829         osd.5      up  1.00000 1.00000
>>  6   hdd  7.27829         osd.6      up  1.00000 1.00000
>>  7   hdd  7.27829         osd.7      up  1.00000 1.00000
>> -7       21.83487     host virt3
>>  8   hdd  7.27829         osd.8      up  1.00000 1.00000
>>  9   hdd  7.27829         osd.9      up  1.00000 1.00000
>> 10   hdd  7.27829         osd.10     up  1.00000 1.00000
>>  0              0 osd.0            down        0 1.00000
>>
>>
>> root at virt2:~# ceph -s
>>   cluster:
>>     id:     278a2e9c-0578-428f-bd5b-3bb348923c27
>>     health: HEALTH_OK
>>
>>   services:
>>     mon: 3 daemons, quorum virt1,virt2,virt3
>>     mgr: virt1(active)
>>     osd: 11 osds: 10 up, 10 in
>>
>>   data:
>>     pools:   1 pools, 512 pgs
>>     objects: 6084 objects, 24105 MB
>>     usage:   92822 MB used, 74438 GB / 74529 GB avail
>>     pgs:     512 active+clean
>>
>> root at virt2:~# ceph -w
>>   cluster:
>>     id:     278a2e9c-0578-428f-bd5b-3bb348923c27
>>     health: HEALTH_OK
>>
>>   services:
>>     mon: 3 daemons, quorum virt1,virt2,virt3
>>     mgr: virt1(active)
>>     osd: 11 osds: 10 up, 10 in
>>
>>   data:
>>     pools:   1 pools, 512 pgs
>>     objects: 6084 objects, 24105 MB
>>     usage:   92822 MB used, 74438 GB / 74529 GB avail
>>     pgs:     512 active+clean
>>
>>
>> 2017-11-20 12:32:08.199450 mon.virt1 [INF] mon.1 10.10.10.82:6789/0
>>
>>
>>
>> The SSD drives are used as journal drives:
>>
>> root at virt3:~# ceph-disk list | grep /dev/sde | grep osd
>>  /dev/sdb1 ceph data, active, cluster ceph, osd.8, block /dev/sdb2,
>> block.db /dev/sde1
>> root at virt3:~# ceph-disk list | grep /dev/sdf | grep osd
>>  /dev/sdc1 ceph data, active, cluster ceph, osd.9, block /dev/sdc2,
>> block.db /dev/sdf1
>>  /dev/sdd1 ceph data, active, cluster ceph, osd.10, block /dev/sdd2,
>> block.db /dev/sdf2
>>
>>
>>
>> I see now that /dev/sda doesn't have a journal, though it should. Not
>> sure why.
>> This is the command I used to create it:
>>
>>
>>  pveceph createosd /dev/sda -bluestore 1  -journal_dev /dev/sde
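>>
>> To check whether the OSD on /dev/sda actually got its block.db on the SSD, the same kind of query used on virt3 above should work here (device names assumed from the createosd command):
>>
>>    ceph-disk list | grep /dev/sda
>>    ceph-disk list | grep /dev/sde
>>
>> If no block.db shows up for /dev/sda, the OSD was most likely created with everything on the HDD and would have to be destroyed, zapped and re-created for the DB to end up on /dev/sde.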
>>
>>
>> --
>> Kind Regards
>> Rudi Ahlers
>> Website: http://www.rudiahlers.co.za
>>
>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>


-- 
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za

