[ceph-users] slow read-performance inside the vm

Christian Balzer chibi at gol.com
Thu Jan 8 17:57:16 PST 2015


Hello,

On Thu, 8 Jan 2015 17:36:43 +0100 Patrik Plank wrote:

> Hi,
> 
> first of all, I am a "ceph-beginner", so I am sorry for the trivial
> questions :).
> 
> I have built a ceph three-node cluster for virtualization.
> 
>   
> 
> Hardware:
> 
>  
> Dell Poweredge 2900
> 
> 8 x 300GB SAS 15k7 with Dell Perc 6/i in Raid 0
> 
Bad idea.
Not only from a reliability point of view, but Ceph really starts to shine
with many OSDs. See Lindsay's mail.

> 2 x 120GB SSD in Raid 1 with Fujitsu Raid Controller for Journal + OS
> 
> 16GB RAM
> 
If you go for 8 OSDs (with journals on the SSDs), this is going to
be tight, but still OK.
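For reference, journal placement in the filestore era is just a per-OSD path in
ceph.conf. A sketch only; the journal size and the partition label below are
made-up placeholders, adjust to your SSD layout:

```ini
[osd]
; 5 GB journal partitions on the SSD mirror (size here is an assumption)
osd journal size = 5120

[osd.0]
; one small SSD partition per OSD -- "journal-0" is a placeholder label
osd journal = /dev/disk/by-partlabel/journal-0
```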

> 2 x Intel Xeon E5410 2.3 GHz
> 
> 2 x Dual 1Gb Nic
> 
>  
> Configuration
> 
>   
> 
> Ceph 0.90
> 
You do realize that this is a non-production, testing version, right?
So not only will all the bugs be yours to find and battle, your experience
with this version may also not translate to a stable release like 0.80.x.

> 2x Network bonding with 2 x 1 Gb Network (public + cluster Network) with
> mtu 9000
> 
> read_ahead_kb = 2048
> 
Where are you setting that parameter?
It has little use or impact on the storage servers, but works wonders
inside a (Linux) VM using Ceph.
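In case it helps: inside a Linux VM this is a plain sysfs knob. "vda" below is
an assumption for a virtio disk, substitute your actual block device:

```shell
# Set readahead to 2048 KB for the VM's virtio disk (device name assumed):
echo 2048 > /sys/block/vda/queue/read_ahead_kb

# Verify:
cat /sys/block/vda/queue/read_ahead_kb

# Equivalently via blockdev -- note it takes 512-byte sectors,
# so 2048 KB = 4096 sectors:
blockdev --setra 4096 /dev/vda
```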

> /dev/sda1 on /var/lib/ceph/osd/ceph-0 type xfs
> (rw,noatime,attr2,inode64,noquota)
> 
> 
>   
> 
> ceph.conf:
> 
>   
> 
> [global]
> 
>  
> fsid = 1afaa484-1e18-4498-8fab-a31c0be230dd
> 
> mon_initial_members = ceph01
> 
> mon_host = 10.0.0.20,10.0.0.21,10.0.0.22
> 
> auth_cluster_required = cephx
> 
> auth_service_required = cephx
> 
> auth_client_required = cephx
> 
> filestore_xattr_use_omap = true
> 
> public_network = 10.0.0.0/24
> 
> cluster_network = 10.0.1.0/24
> 
> osd_pool_default_size = 3
> 
> osd_pool_default_min_size = 1
> 
> osd_pool_default_pg_num = 128
> 
> osd_pool_default_pgp_num = 128
> 
You will want to change that (and resize existing pools) if you go for a
24-OSD cluster as suggested.
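A quick way to arrive at a sane pg_num is the usual rule of thumb from the
Ceph docs: total PGs ~= (OSDs * 100) / replica size, rounded up to the next
power of two. The numbers below assume the suggested 24 OSDs and size 3:

```shell
# Rule of thumb: (OSDs * 100) / replicas, rounded up to a power of two.
osds=24
size=3
target=$(( osds * 100 / size ))   # 800
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"                    # prints 1024

# Then, on the cluster (pool name "kvm" taken from your rados bench run):
#   ceph osd pool set kvm pg_num 1024
#   ceph osd pool set kvm pgp_num 1024
```

Note that pg_num can only be increased, never decreased, so don't overshoot
wildly on a small cluster.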

> filestore_flusher = false
> 
>   
> 
> [client]
> 
> rbd_cache = true
> 
> rbd_readahead_trigger_requests = 50
> 
> rbd_readahead_max_bytes = 4096
> 
> rbd_readahead_disable_after_bytes = 0
> 
> 
> 
> 
> 
> rados bench -p kvm 200 write --no-cleanup
> 
>  
> Total time run: 201.139795
> 
> Total writes made: 3403
> 
> Write size: 4194304
> 
> Bandwidth (MB/sec): 67.674
> 
> Stddev Bandwidth: 66.7865
> 
With more OSDs you'll probably get close to line speed here as well.

> Max bandwidth (MB/sec): 212
> 
> Min bandwidth (MB/sec): 0
> 
> Average Latency: 0.945577
> 
> Stddev Latency: 1.65121
> 
> Max latency: 13.6154
> 
> Min latency: 0.085628
> 
>  
> rados bench -p kvm 200 seq
> 
>  
> Total time run: 63.755990
> 
> Total reads made: 3403
> 
> Read size: 4194304
> 
> Bandwidth (MB/sec): 213.502
> 
> Average Latency: 0.299648
> 
> Max latency: 1.00783
> 
> Min latency: 0.057656
> 
> 
> 
> So here my questions:
> 
>  
> With these values above, I get a write performance of 90 MB/s and a read
> performance of 29 MB/s inside the VM (Windows 2008 R2 with virtio
> drivers and writeback cache enabled).
> 
Google is your friend, too.
See my old thread: 
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/028552.html

> Are these values normal with my configuration and hardware? -> The
> read-performance seems slow.
> 
See the above thread; in short, yes.
Unless there is a Windows tunable equivalent to "read_ahead_kb", or
Ceph addresses this problem on its end (as planned, but with little to
no progress so far), this is as good as it gets.

> Would the read performance be better if I ran a separate OSD for every
> single disk?
>
Not much, if any, inside this VM, but in general, yes.
 
Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Fusion Communications
http://www.gol.com/

