[ceph-users] HW Raid vs. Multiple OSD

Oscar Segarra oscar.segarra at gmail.com
Mon Nov 13 06:26:15 PST 2017


Hi Peter,

Thanks a lot for your points about storage consumption.

The other question is about having one OSD vs. 8 OSDs... will 8 OSDs consume
more CPU than 1 OSD (RAID5)?

As I want to share compute and OSDs on the same box, the resources consumed by
the OSDs could be a handicap.

Thanks a lot.

2017-11-13 12:59 GMT+01:00 Peter Maloney <peter.maloney at brockmann-consult.de>:

> Once you've replaced an OSD, you'll see it is quite simple... doing it for
> a few is not much more work (you've scripted it, right?). I don't see RAID
> as giving any benefit here at all. It's not tricky... it's a perfectly
> normal operation. Just get used to ceph, and it'll be as normal as replacing
> a RAID disk. As for performance degradation, either could turn out better...
> or ceph could be better if you don't mind setting the recovery rate to the
> lowest... but once the QoS functionality is ready, ceph will probably be
> much better. Also, RAID will cost you more in hardware.
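
For reference, the scripted replacement mentioned above comes down to a handful
of ceph CLI calls. Here is a minimal Python sketch of the removal half,
assuming the pre-Luminous procedure from the admin guide linked further down;
the OSD id is a placeholder, and preparing the new disk (ceph-disk/ceph-volume)
is left out:

    import subprocess

    def run(cmd):
        # Echo and run a command, stopping on the first failure.
        print("+ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    def remove_failed_osd(osd_id):
        # Removal half of a disk replacement: take the dead OSD out of the
        # cluster so the new disk can then be prepared as a fresh OSD.
        name = "osd.%d" % osd_id
        run(["ceph", "osd", "out", str(osd_id)])            # stop mapping data to it
        run(["systemctl", "stop", "ceph-osd@%d" % osd_id])  # stop the daemon if still up
        run(["ceph", "osd", "crush", "remove", name])       # remove it from the CRUSH map
        run(["ceph", "auth", "del", name])                  # drop its cephx key
        run(["ceph", "osd", "rm", str(osd_id)])             # remove it from the OSD map

    if __name__ == "__main__":
        remove_failed_osd(3)  # placeholder id
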
>
> And RAID5 is really bad for IOPS. And ceph already replicates, so you would
> have 2 layers of redundancy... and ceph does it cluster-wide, not just within
> one machine. Using ceph with replication is like having all your free space
> as hot spares... you could lose 2 disks on every machine and it could still
> run (assuming it had time to recover in between, and enough space). And you
> don't want min_size=1, but with 2 layers of redundancy you'll probably be
> tempted to set it that way.
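
To make the min_size point concrete: with size=3 and min_size=2, a replicated
pool keeps serving I/O when one copy is missing but refuses writes once only a
single copy is left, which is exactly the situation min_size=1 would allow
writes into. A small sketch of setting both values (the pool name is only an
example):

    import subprocess

    def set_replication(pool, size=3, min_size=2):
        # Keep min_size >= 2 so a lone surviving replica never accepts
        # writes on its own (the risk with min_size=1 mentioned above).
        subprocess.run(["ceph", "osd", "pool", "set", pool, "size", str(size)],
                       check=True)
        subprocess.run(["ceph", "osd", "pool", "set", pool, "min_size", str(min_size)],
                       check=True)

    set_replication("rbd")  # example pool name
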
>
> But for some workloads, like RBD, ceph doesn't balance the workload very
> evenly for a single client, only across many clients at once... RAID might
> help with that, but I don't see it as worth it.
>
> I would just software-RAID1 the OS, the mons and the mds, not the OSDs.
>
>
> On 11/13/17 12:26, Oscar Segarra wrote:
>
> Hi,
>
> I'm designing my infrastructure. I want to provide 8TB of data per host
> (8 disks x 1TB each) just for Microsoft Windows 10 VDI. On each host I
> will have storage (ceph osd) and compute (KVM).
>
> I'd like to hear your opinion about these two configurations:
>
> 1.- RAID5 with 8 disks (I will have 7TB, but that is enough for me) + 1 OSD
> daemon
> 2.- 8 OSD daemons
>
> I'm a little bit worried that 8 OSD daemons could hurt performance because
> of all the jobs running and the scrubbing.
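
Scrub impact can also be throttled rather than avoided. Here is a hedged sketch
that injects the usual scrub-throttling options into the running OSDs; the
option names are the standard ones, but the values are only illustrative, and
the same settings can go into ceph.conf to persist:

    import subprocess

    def throttle_scrubbing():
        # Illustrative values: one scrub at a time per OSD, a short sleep
        # between scrub chunks, and scrubs confined to night hours.
        args = [
            "--osd_max_scrubs=1",
            "--osd_scrub_sleep=0.1",
            "--osd_scrub_begin_hour=22",
            "--osd_scrub_end_hour=6",
        ]
        subprocess.run(["ceph", "tell", "osd.*", "injectargs"] + args, check=True)

    throttle_scrubbing()
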
>
> Another question is the procedure for replacing a failed disk. With a big
> RAID, replacement is straightforward. With many OSDs, the procedure is a
> little bit trickier.
>
> http://ceph.com/geen-categorie/admin-guide-replacing-a-failed-disk-in-a-ceph-cluster/
>
> What is your advice?
>
> Thanks a lot everybody in advance...
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> --
>
> --------------------------------------------
> Peter Maloney
> Brockmann Consult
> Max-Planck-Str. 2
> 21502 Geesthacht
> Germany
> Tel: +49 4152 889 300
> Fax: +49 4152 889 333
> E-mail: peter.maloney at brockmann-consult.de
> Internet: http://www.brockmann-consult.de
> --------------------------------------------
>
>

