[ceph-users] OSD is near full and slow in accessing storage from client

Cassiano Pilipavicius cassiano at tips.com.br
Sun Nov 12 05:50:34 PST 2017


I am also not an expert, but it looks like you have large data volumes on 
a few PGs. From what I've seen, PG data is only deleted from the old 
OSD once it has been completely copied to the new OSD.

So, if one PG holds 100G, for example, the space on the old OSD will be 
released only when that PG has been fully copied to the new OSD.

If you have a busy cluster/network, it may take a good while. Maybe just 
wait a little and check from time to time; the space will eventually 
be released.
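The "near full" warning in the subject line comes from Ceph's utilization thresholds. As a minimal sketch (the 0.85/0.95 values are Ceph's default mon_osd_nearfull_ratio and mon_osd_full_ratio, and the SIZE/USE numbers are taken from the `ceph osd df` output quoted below), this shows why osd.4 is the one to watch while waiting for backfill to release space:

```python
# Sketch, not from the thread: classify OSDs against Ceph's default
# nearfull (0.85) and full (0.95) ratios.

NEARFULL = 0.85  # mon_osd_nearfull_ratio default
FULL = 0.95      # mon_osd_full_ratio default; writes are blocked past this

def classify(use_gb, size_gb):
    """Return 'full', 'nearfull' or 'ok' for one OSD."""
    ratio = use_gb / size_gb
    if ratio >= FULL:
        return "full"
    if ratio >= NEARFULL:
        return "nearfull"
    return "ok"

# (osd id, SIZE in G, USE in G) from the quoted `ceph osd df` output
osds = [(0, 3376, 2814), (1, 3347, 1923), (2, 3351, 1980),
        (3, 3318, 2131), (4, 3318, 2998), (5, 3406, 2476),
        (6, 3356, 1518)]

for osd_id, size, use in osds:
    print(f"osd.{osd_id}: {use / size:.2%} -> {classify(use, size)}")
```

On these numbers only osd.4 (about 90%) is past the nearfull ratio; osd.0 sits just under it, so a little more backfill headroom matters before either reaches the 95% full cutoff.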


On 11/12/2017 11:44 AM, Sébastien VIGNERON wrote:
> I’m not an expert either, so if someone on the list has some ideas on 
> this problem, don’t be shy, share them with us.
>
> For now, I only have the hypothesis that the OSD space will be recovered 
> as soon as the recovery process is complete.
> Hope everything will get back in order soon (before reaching 95% or 
> above).
>
> I saw some messages on the list about the fstrim tool, which can help 
> reclaim unused free space, but I don’t know if it applies to your case.
>
> Cordialement / Best regards,
>
> Sébastien VIGNERON
> CRIANN,
> Ingénieur / Engineer
> Technopôle du Madrillet
> 745, avenue de l'Université
> 76800 Saint-Etienne du Rouvray - France
> tél. +33 2 32 91 42 91
> fax. +33 2 32 91 42 92
> http://www.criann.fr
> mailto:sebastien.vigneron at criann.fr
> support: support at criann.fr
>
>> On 12 Nov 2017, at 13:29, gjprabu <gjprabu at zohocorp.com> wrote:
>>
>> Hi Sebastien,
>>
>>     Below are the query details. I am not that much of an expert and am 
>> still learning. The PGs were not in a stuck state before adding the OSD, 
>> and they are slowly clearing to active+clean. This morning there were around 
>> 53 PGs in active+undersized+degraded+remapped+wait_backfill and now there 
>> are only 21, so I hope it is progressing, and I am seeing the used space 
>> keep increasing on the newly added OSD (osd.6).
>>
>>
>> ID WEIGHT  REWEIGHT SIZE   USE    AVAIL %USE  VAR  PGS
>> 0  3.29749 1.00000   3376G  2814G  562G 83.35 1.23 165  (available space not reduced after adding new OSD)
>> 1  3.26869 1.00000   3347G  1923G 1423G 57.48 0.85 152
>> 2  3.27339 1.00000   3351G  1980G 1371G 59.10 0.88 161
>> 3  3.24089 1.00000   3318G  2131G 1187G 64.23 0.95 168
>> 4  3.24089 1.00000   3318G  2998G  319G 90.36 1.34 176  (available space not reduced after adding new OSD)
>> 5  3.32669 1.00000   3406G  2476G  930G 72.68 1.08 165  (available space not reduced after adding new OSD)
>> 6  3.27800 1.00000   3356G  1518G 1838G 45.24 0.67 166
>>               TOTAL 23476G 15843G 7632G 67.49
>> MIN/MAX VAR: 0.67/1.34  STDDEV: 14.53
> ...
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
