[ceph-users] HW Raid vs. Multiple OSD

Oscar Segarra oscar.segarra at gmail.com
Mon Nov 13 03:26:49 PST 2017


I'm designing my infrastructure. I want to provide 8TB (8 disks x 1TB
each) of data per host, just for Microsoft Windows 10 VDI. Each host will
run both storage (Ceph OSD) and compute (KVM).

I'd like to hear your opinion about these two configurations:

1.- RAID5 across the 8 disks (about 7TB usable, which is enough for me) + 1 OSD
2.- 8 OSD daemons, one per disk
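To compare the two layouts it helps to work out the usable capacity. Here is a minimal sketch of the arithmetic, assuming 8 x 1TB disks per host and a replicated Ceph pool with size=3 (the pool size and the per-TB figures are my assumptions, not something stated above):

```python
# Hedged capacity sketch: 8 x 1 TB disks per host, Ceph replicated pool size=3.
# Replicas live on other hosts, so only per-host raw capacity matters here.

def raid5_usable_tb(disks, disk_tb=1.0):
    # RAID5 sacrifices one disk's worth of capacity to parity.
    return (disks - 1) * disk_tb

def ceph_usable_tb(raw_tb, pool_size=3):
    # A replicated pool divides raw capacity by the replication factor.
    return raw_tb / pool_size

raw_raid5 = raid5_usable_tb(8)   # 7.0 TB raw presented to the single OSD
raw_jbod = 8 * 1.0               # 8.0 TB raw spread across 8 OSDs

print(raw_raid5)                  # 7.0
print(ceph_usable_tb(raw_raid5))  # ~2.33 TB usable per host at size=3
print(ceph_usable_tb(raw_jbod))   # ~2.67 TB usable per host at size=3
```

Note that Ceph replication applies on top of whatever RAID gives you, so the RAID5 layout pays for redundancy twice.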

I'm a little bit worried that running 8 OSD daemons per host could hurt
performance, with all those processes and their scrubbing jobs running
concurrently.

Another question concerns the procedure for replacing a failed disk. With
one big RAID, replacement is straightforward: swap the disk and let the
array rebuild. With many OSDs, the procedure is a bit trickier.
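For reference, the per-OSD replacement flow can be sketched with the standard Ceph CLI. This is a dry-run sketch, not a verified procedure: the OSD id (osd.3) and device path (/dev/sdd) are placeholders you would substitute for your own, and DRY_RUN defaults to on so the script only prints the commands:

```shell
#!/bin/sh
# Sketch of replacing a failed OSD. OSD_ID and DEVICE are assumed examples.
# DRY_RUN defaults to 1: commands are printed, not executed.
OSD_ID=3
DEVICE=/dev/sdd
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run ceph osd out "osd.$OSD_ID"            # stop new data landing on the OSD
run systemctl stop "ceph-osd@$OSD_ID"     # stop the daemon
run ceph osd purge "$OSD_ID" --yes-i-really-mean-it  # drop it from CRUSH/auth/osd map
# ...physically swap the disk, then provision the replacement:
run ceph-volume lvm create --data "$DEVICE"
```

With the RAID5 layout you skip these steps, but a single-disk failure instead triggers a full array rebuild that touches every disk in the host.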


What is your advice?

Thanks a lot everybody in advance...
