[ceph-users] Replaced a disk, first time. Quick question

David C dcsysengineer at gmail.com
Mon Dec 4 09:15:53 PST 2017


On Mon, Dec 4, 2017 at 4:39 PM, Drew Weaver <drew.weaver at thenap.com> wrote:

> Howdy,
>
>
>
> I replaced a disk today because it was marked as Predicted failure. These
> were the steps I took
>
>
>
> ceph osd out osd17
>
> ceph -w #waited for it to get done
>
> systemctl stop ceph-osd@osd17
>
> ceph osd purge osd17 --yes-i-really-mean-it
>
> umount /var/lib/ceph/osd/ceph-osdX
>
>
>
> I noticed that after I ran the ‘osd out’ command it started moving
> data around.
>

That's normal
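
Marking the OSD out triggers a rebalance onto the remaining OSDs. If it
helps, a few standard commands for keeping an eye on it while the data
moves (nothing here is specific to your cluster):

  ceph -s              # overall health plus recovery/rebalance progress
  ceph health detail   # which PGs are currently degraded or remapped
  ceph osd df tree     # per-OSD utilisation while data shuffles around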

>
>
> 19446/16764 objects degraded (115.999%) <-- I noticed that number seems odd
>

I don't think that's normal!
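
If you want to sanity-check where that figure comes from, something like
the following should show the raw counts behind the percentage (replace
<pool> with one of your actual pool names):

  ceph pg dump summary            # raw degraded/misplaced object counts
  ceph df detail                  # per-pool object counts
  ceph osd pool get <pool> size   # replica count for the pool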

>
>
> So then I replaced the disk
>
> Created a new label on it
>
> ceph-deploy osd prepare OSD5:sdd
>
>
>
> THIS time, it started rebuilding
>
>
>
> 40795/16764 objects degraded (243.349%) <-- Now I’m really concerned.
>
>
>
> Perhaps I don’t quite understand what the numbers are telling me, but is it
> normal for it to be rebuilding more objects than exist?
>
See:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020682.html
It seems to be a similar issue to yours.
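
For what it's worth, and this is speculation on my part: if the degraded
counter is tracking object copies rather than unique objects, then with 3x
replication your 16764 objects amount to roughly 16764 x 3 = 50292 copies,
and 40795 degraded copies would be about 81% of that, which is high but at
least possible. Reporting it against the bare object count (hence the
>100%) may be the same accounting quirk people were chasing in that thread.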

I'd recommend providing more info: Ceph version, BlueStore or FileStore,
crushmap, etc.
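
A rough list of what would be useful, in case it helps (ceph versions
needs Luminous or later; on older releases ceph version, or
ceph tell osd.* version, gives similar information):

  ceph versions                                # daemon versions across the cluster
  ceph osd metadata | grep osd_objectstore     # bluestore vs filestore per OSD
  ceph osd tree                                # CRUSH hierarchy and OSD states
  ceph osd crush dump                          # full crushmap as JSON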

>
>
> Thanks,
>
> -Drew
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>