[ceph-users] Adding multiple OSD

Karun Josy karunjosy1 at gmail.com
Mon Dec 4 11:29:20 PST 2017

Thanks for your reply!

I am using an erasure-coded profile with k=5, m=3:

$ ceph osd erasure-code-profile get profile5by3

The cluster has 8 nodes with 3 disks each. We are planning to add 2 more
disks to each node.

If I understand correctly, I can then add 3 disks at once, since the EC
profile tolerates 3 simultaneous disk failures. Is that right?
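As a quick sanity check on those numbers (a sketch, assuming the profile really is k=5, m=3 as stated above):

```shell
# Assumed from the profile above: k=5 data chunks, m=3 coding chunks.
k=5
m=3

# Each object is split into k data chunks plus m coding chunks, and the
# pool stays readable as long as any k of the k+m chunks survive, i.e.
# up to m chunks (failure domains) can be lost at once.
echo "tolerated simultaneous failures: $m"

# Raw-space overhead of the pool: (k+m)/k, here 8/5 = 1.6x.
echo "raw space used per byte stored: $(( (k + m) * 100 / k ))%"
```

Note that "3 disks can fail" only holds if each EC chunk lands in a different failure domain; whether adding 3 new disks at once is safe also depends on where CRUSH places them.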

Karun Josy

On Tue, Dec 5, 2017 at 12:06 AM, David Turner <drakonstein at gmail.com> wrote:

> Depending on how well you burn-in/test your new disks, I like to only add
> 1 failure domain of disks at a time in case you have bad disks that you're
> adding.  If you are confident that your disks aren't likely to fail during
> the backfilling, then you can go with more.  I just added 8 servers (16
> OSDs each) to a cluster with 15 servers (16 OSDs each) all at the same
> time, but we spent 2 weeks testing the hardware before adding the new nodes
> to the cluster.
> If you add 1 failure domain at a time, then any DoA disks in the new nodes
> can only take out 1 copy of your data, instead of copies across
> multiple nodes.
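The one-failure-domain-at-a-time approach described above could look roughly like this (a sketch, not a definitive procedure; the device path is a placeholder, and this assumes ceph-volume-based OSD deployment):

```shell
# Pause data movement while the new OSDs on one node are being created.
ceph osd set norebalance
ceph osd set nobackfill

# On the node being expanded, create the new OSDs.
# /dev/sdX is a placeholder for the actual device.
ceph-volume lvm create --data /dev/sdX

# Allow backfill to proceed, then watch it finish before moving on
# to the next node.
ceph osd unset nobackfill
ceph osd unset norebalance
ceph -s
```

Waiting for `HEALTH_OK` between nodes keeps any bad new disk's failure confined to a single failure domain, which is the point of the advice above.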
> On Mon, Dec 4, 2017 at 12:54 PM Karun Josy <karunjosy1 at gmail.com> wrote:
>> Hi,
>> Is it recommended to add OSD disks one by one or can I add couple of
>> disks at a time ?
>> Current cluster size is about 4 TB.
>> Karun
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
