[ceph-users] HW Raid vs. Multiple OSD
lionel-subscription at bouton.name
Mon Nov 13 06:56:49 PST 2017
On 13/11/2017 at 15:47, Oscar Segarra wrote:
> Thanks Mark, Peter,
> For clarification, the configuration with RAID5 is having many servers
> (2 or more) with RAID5 and CEPH on top of it. Ceph will replicate data
> between servers. Of course, each server will have just one OSD daemon
> managing a big disk.
> It looks functionally is the same using RAID5 + 1 Ceph daemon as 8
> CEPH daemons.
Functionally it's the same, but RAID5 will kill your write performance.
For example, if you start with 3 OSD hosts and a pool size of 3, due to
RAID5 each and every write on your Ceph cluster will imply a read of
every disk minus one on one server, then a write on *all* the disks of
the three servers.
If you use one OSD per disk, you'll have a read on one disk only and a
write on 3 disks only: you'll get approximately 8 times the IOPS for
writes (with 8 disks per server). Clever RAID5 logic can minimize this
for some I/O patterns, but it is a bet and will never be as good as what
you'll get with one disk per OSD.
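The arithmetic behind that "approximately 8 times" figure can be sketched
as a back-of-the-envelope model. This is only an illustration of the
reasoning above, assuming full-stripe RAID5 writes and the numbers from
the email (8 disks per server, pool size 3); real arrays with write-back
caches or partial-stripe optimizations will behave differently.

```python
# Hedged sketch: count physical disk I/Os per client write in the two
# layouts described above. All constants are assumptions from the email.

DISKS_PER_SERVER = 8   # disks in each RAID5 array / OSDs per host
POOL_SIZE = 3          # Ceph replica count (one copy per server)

# RAID5, one OSD per server: a full-stripe write touches every disk in
# the array, on each of the replica servers; recomputing parity also
# reads disks-minus-one members on one server first.
raid5_disk_writes = POOL_SIZE * DISKS_PER_SERVER   # 3 * 8 = 24
raid5_disk_reads = DISKS_PER_SERVER - 1            # 7, on one server

# One OSD per disk: one disk written per replica server, at most one
# disk read on the primary.
per_osd_disk_writes = POOL_SIZE                    # 3
per_osd_disk_reads = 1

ratio = raid5_disk_writes / per_osd_disk_writes
print(f"RAID5 disk writes per client write:   {raid5_disk_writes}")
print(f"Per-disk-OSD writes per client write: {per_osd_disk_writes}")
print(f"Write amplification ratio: ~{ratio:.0f}x")  # ~8x
```

With these assumptions the ratio comes out to exactly the number of disks
per server, which is where the "8 times the IOPS for writes" estimate
comes from.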