[ceph-users] Adding multiple OSD

David Turner drakonstein at gmail.com
Mon Dec 4 10:36:57 PST 2017


Depending on how thoroughly you burn-in/test your new disks, I like to add only 1
failure domain of disks at a time, in case any of the disks you're adding
turn out to be bad.  If you're confident that your disks aren't likely to
fail during backfilling, then you can add more at once.  I just added 8
servers (16 OSDs each) to a cluster of 15 servers (16 OSDs each) all at the
same time, but we spent 2 weeks testing the hardware before adding the new
nodes to the cluster.

If you add 1 failure domain at a time, then any DoA disks in the new nodes
can only take down 1 copy of your data, instead of copies spread across
multiple nodes.
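As a rough sketch of that workflow (assuming the default CRUSH setup where
each host is a failure domain; the device path is hypothetical and the
exact OSD-creation command depends on your deployment tooling):

```shell
# Pause rebalancing so data doesn't start moving while the new
# host's OSDs are still coming up one by one
ceph osd set norebalance

# On the new node, create its OSDs (repeat per disk; /dev/sdb is
# just an example device)
# ceph-volume lvm create --data /dev/sdb

# Confirm all of the new host's OSDs show as up/in
ceph osd tree

# Allow backfill to start, now covering the whole failure domain at once
ceph osd unset norebalance

# Watch recovery; wait for the cluster to return to HEALTH_OK
# before adding the next host
ceph -s
```

Repeat for the next host only after backfill finishes, so a bad new disk
never overlaps more than one failure domain's worth of data movement.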

On Mon, Dec 4, 2017 at 12:54 PM Karun Josy <karunjosy1 at gmail.com> wrote:

> Hi,
>
> Is it recommended to add OSD disks one by one or can I add couple of disks
> at a time ?
>
> Current cluster size is about 4 TB.
>
>
>
> Karun

