[ceph-users] HW Raid vs. Multiple OSD

Michael mehe.schmid at gmx.ch
Mon Nov 13 04:32:11 PST 2017

Oscar Segarra wrote:
> I'd like to hear your opinion about these two configurations:
> 1.- RAID5 with 8 disks (I will have 7TB but for me it is enough) + 1 
> OSD daemon
> 2.- 8 OSD daemons
You mean 1 OSD daemon on top of RAID5? I don't think I'd do that. You'll 
probably want redundancy at Ceph's level anyhow, and then where is the 
benefit of the extra RAID layer?
> I'm a little bit worried that 8 OSD daemons can affect performance 
> because of all the jobs running and scrubbing.
If you ran RAID instead of Ceph, RAID might still perform better. But I 
don't believe much changes for the better if you run Ceph on top of RAID 
rather than on top of individual OSDs, unless your configuration is bad. 
I also wouldn't worry too much about whether a reasonably modern machine 
can handle running a few extra daemons.

But you could certainly do some tests on your hardware to be sure.
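
For example, a quick way to get ballpark numbers is the built-in RADOS 
benchmark against a throwaway pool. A minimal sketch (pool name and pg 
count are just examples; write with --no-cleanup first so the seq read 
test has objects to read):

    ceph osd pool create bench-test 64
    rados bench -p bench-test 60 write --no-cleanup
    rados bench -p bench-test 60 seq
    ceph osd pool delete bench-test bench-test --yes-i-really-really-mean-it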

> Another question is the procedure of a replacement of a failed disk. 
> In case of a big RAID, replacement is direct. In case of many OSDs, 
> the procedure is a little bit tricky.
> http://ceph.com/geen-categorie/admin-guide-replacing-a-failed-disk-in-a-ceph-cluster/ 
I wasn't using Ceph in 2014, but at least in my limited experience, the 
most important step today happens when you add the new drive and 
activate an OSD on it.
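
On a reasonably current cluster that step can be as simple as pointing 
ceph-volume at the new device. A rough sketch, assuming a Luminous 
cluster with ceph-volume (older setups would use ceph-disk; the device 
name is just a placeholder):

    # prepares the disk, creates the OSD and starts the daemon in one go
    ceph-volume lvm create --data /dev/sdh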

You probably still want to remove the leftovers of the old failed OSD 
so they don't clutter your list, but as far as I can tell replication 
and so on will trigger *before* you remove it. (There is a configurable 
timeout, mon osd down out interval, for how long an OSD can be down 
before it is marked out and essentially treated as dead, at which point 
replication and rebalancing start.)
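
Concretely, the cleanup and the timeout look roughly like this (the OSD 
id is just an example, and ceph osd purge needs Luminous or later; on 
older releases it is the longer crush remove / auth del / osd rm dance):

    # remove all traces of the failed OSD from the cluster
    ceph osd purge 7 --yes-i-really-mean-it

    # ceph.conf, [mon] section: how long an OSD may be down before it is
    # marked out and rebalancing starts (600 seconds is the default)
    mon osd down out interval = 600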

