[ceph-users] HW Raid vs. Multiple OSD

Peter Maloney peter.maloney at brockmann-consult.de
Mon Nov 13 03:59:03 PST 2017


Once you've replaced an OSD, you'll see it is quite simple... doing it
for a few is not much more work (you've scripted it, right?). I don't
see RAID as giving any benefit here at all. It's not tricky... it's a
perfectly normal operation. Just get used to ceph, and it'll be as
normal as replacing a RAID disk. As for performance degradation during
a rebuild, it could go either way... or better on ceph if you don't
mind throttling the recovery rate to the lowest setting... but once
the QoS functionality is ready, ceph will probably be much better.
RAID will also cost you more in hardware.
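
To give an idea of what "scripted it" can look like, here is a rough
sketch in Python of the removal half of a disk replacement. It assumes
a systemd host and the standard out / crush remove / auth del / rm
sequence; the OSD id comes from the command line, and the re-deploy
step afterwards is whatever tool you normally use (ceph-disk,
ceph-volume, ceph-deploy, ...):

    #!/usr/bin/env python3
    # Rough sketch: remove a dead OSD before swapping the disk.
    # Assumes a systemd host; adapt the re-deploy step to your setup.
    import subprocess
    import sys

    def ceph(*args):
        # run a ceph CLI command, abort on error
        subprocess.run(["ceph"] + list(args), check=True)

    def remove_osd(osd_id):
        osd = "osd.%d" % osd_id
        ceph("osd", "out", str(osd_id))       # stop mapping new data to it
        subprocess.run(["systemctl", "stop", "ceph-osd@%d" % osd_id])
        ceph("osd", "crush", "remove", osd)   # drop it from the CRUSH map
        ceph("auth", "del", osd)              # delete its cephx key
        ceph("osd", "rm", str(osd_id))        # remove it from the OSD map
        # now swap the physical disk and create the new OSD with your
        # usual deployment tool

    if __name__ == "__main__":
        remove_osd(int(sys.argv[1]))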

And raid5 is really bad for IOPS. And ceph already replicates, so you
would have 2 layers of redundancy... and ceph does it cluster-wide,
not just within one machine. Using ceph with replication is like
having all your free space as hot spares... you could lose 2 disks on
every one of your machines and it would still run (assuming it had
time to recover in between, and enough space). And you don't want
min_size=1, which you'll probably be tempted to set if you have 2
layers of redundancy.
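
For example, a minimal sketch (assuming a replicated pool named "rbd")
of keeping size=3 with min_size=2, so I/O stops rather than continuing
on a single surviving copy:

    # minimal sketch, assuming a replicated pool called "rbd"
    import subprocess
    subprocess.run(["ceph", "osd", "pool", "set", "rbd", "size", "3"], check=True)
    subprocess.run(["ceph", "osd", "pool", "set", "rbd", "min_size", "2"], check=True)
    subprocess.run(["ceph", "osd", "pool", "get", "rbd", "min_size"], check=True)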

But for some workloads, like RBD, ceph doesn't balance the workload
very evenly for a single client, only across many clients at once...
RAID might help with that, but I don't see it as worth it.

I would just put the OS, the mons and the mds on software RAID1, not
the OSDs.
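
If you're setting that mirror up by hand, it's just mdadm; a minimal
sketch (the device names /dev/sda2 and /dev/sdb2 are placeholders for
your own OS partitions):

    # minimal sketch: mirror the two OS partitions with mdadm
    import subprocess
    subprocess.run(["mdadm", "--create", "/dev/md0", "--level=1",
                    "--raid-devices=2", "/dev/sda2", "/dev/sdb2"], check=True)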

On 11/13/17 12:26, Oscar Segarra wrote:
> Hi, 
>
> I'm designing my infrastructure. I want to provide 8TB (8 disks x 1TB
> each) of data per host, just for Microsoft Windows 10 VDI. In each host
> I will have storage (ceph osd) and compute (on KVM).
>
> I'd like to hear your opinion about these two configurations:
>
> 1.- RAID5 with 8 disks (I will get 7TB, but that is enough for me) + 1
> OSD daemon
> 2.- 8 OSD daemons
>
> I'm a little bit worried that 8 OSD daemons can affect performance
> because of all the jobs running and scrubbing.
>
> Another question is about the procedure for replacing a failed disk.
> With a big RAID, replacement is straightforward. With many OSDs, the
> procedure is a little bit tricky.
>
> http://ceph.com/geen-categorie/admin-guide-replacing-a-failed-disk-in-a-ceph-cluster/
>
> What is your advice?
>
> Thanks a lot everybody in advance...
>


-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney at brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------
