[ceph-users] Mons are using a lot of disk space and have a lot of old OSD maps
Wido den Hollander
wido at 42on.com
Tue Oct 9 00:44:30 PDT 2018
On 10/09/2018 09:35 AM, Aleksei Zakharov wrote:
> If someone is interested: we've found a workaround in this mailing list: https://www.spinics.net/lists/ceph-users/msg47963.html
> It looks like an old bug.
> We fixed the issue by restarting all ceph-mon services one by one. The mon store uses ~500MB now, and the OSDs removed the old osdmaps:
> ~# find /var/lib/ceph/osd/ceph-224/current/meta/ | wc -l
> New OSDs have only 1.35GiB of used space after their first start with no weight.
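For anyone repeating that workaround, a minimal sketch of the rolling
restart, assuming systemd units and the mon IDs from the status quoted
below (1 through 5); adjust names and paths to your deployment:

~# systemctl restart ceph-mon@1
~# ceph quorum_status | grep quorum_names     # wait for the mon to rejoin quorum
~# du -sh /var/lib/ceph/mon/ceph-1/store.db   # store should shrink once maps get trimmed

Only move on to the next mon once quorum is re-established.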
Since your MONs are rather old I think they are using LevelDB instead of
RocksDB. It might be worth it to re-deploy the MONs one by one to have
them use RocksDB.
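On Luminous the backend a mon is using should be visible in the
kv_backend file in its data directory (default path assumed, mon ID as
above):

~# cat /var/lib/ceph/mon/ceph-1/kv_backend
rocksdb

If it says leveldb, a rough sketch of the swap is to configure new mon
stores to be created with RocksDB and then remove and re-create one mon
at a time; monmap and keyring handling are elided here and the /tmp
paths are placeholders:

[mon]
mon keyvaluedb = rocksdb

~# ceph mon remove 1
~# ceph-mon --mkfs -i 1 --monmap /tmp/monmap --keyring /tmp/mon.keyring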
> 08.10.2018, 22:31, "Aleksei Zakharov" <zakharov.a.g at yandex.ru>:
>> As I can see, all PGs are active+clean:
>> ~# ceph -s
>>   cluster:
>>     id:     d168189f-6105-4223-b244-f59842404076
>>     health: HEALTH_WARN
>>             noout,nodeep-scrub flag(s) set
>>             mons 1,2,3,4,5 are using a lot of disk space
>>
>>   services:
>>     mon: 5 daemons, quorum 1,2,3,4,5
>>     mgr: api1(active), standbys: api2
>>     osd: 832 osds: 791 up, 790 in
>>          flags noout,nodeep-scrub
>>
>>   data:
>>     pools:   10 pools, 52336 pgs
>>     objects: 47.78M objects, 238TiB
>>     usage:   854TiB used, 1.28PiB / 2.12PiB avail
>>     pgs:     52336 active+clean
>>
>>   io:
>>     client:  929MiB/s rd, 1.16GiB/s wr, 31.85kop/s rd, 36.19kop/s wr
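As an aside, that "using a lot of disk space" health warning fires once
a mon's store exceeds mon_data_size_warn (15GiB by default); the current
threshold can be checked on a mon host via the admin socket, e.g.:

~# ceph daemon mon.1 config get mon_data_size_warn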
>> 08.10.2018, 22:11, "Wido den Hollander" <wido at 42on.com>:
>>> On 10/08/2018 05:04 PM, Aleksei Zakharov wrote:
>>>> Hi all,
>>>> We've upgraded our cluster from jewel to luminous and re-created the monitors using RocksDB.
>>>> Now we see that the mons are using a lot of disk space, and the used space only grows. It is about 17GB for now. It was ~13GB when we used LevelDB on the jewel release.
>>>> When we added new OSDs, we saw that they download a lot of data from the monitors. It was ~15GiB a few days ago and it is ~18GiB today.
>>>> One of the OSDs we created uses filestore, and it looks like old osdmaps are not removed:
>>>> ~# find /var/lib/ceph/osd/ceph-224/current/meta/ | wc -l
>>>> I've tried to run a manual compaction (ceph tell mon.NUM compact) but it doesn't help.
>>>> So, how do we stop this growth of data on the monitors?
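Side note: besides "ceph tell mon.NUM compact", compaction can also be
forced at daemon startup by setting, in ceph.conf on the mon hosts:

[mon]
mon compact on start = true

and restarting the mon. But compaction only reclaims space from data
that has already been trimmed; if the old maps are never trimmed it
won't help either.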
>>> What is the status of Ceph? Can you post the output of:
>>> $ ceph -s
>>> MONs do not trim their database if one or more PGs aren't active+clean.
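Two quick checks for that: list any PGs stuck in a non-clean state, and
look at how many osdmap epochs the mons are still holding on to:

~# ceph pg dump_stuck unclean
~# ceph report 2>/dev/null | grep -E '"osdmap_(first|last)_committed"'

A large gap between osdmap_first_committed and osdmap_last_committed
means the mons are keeping (and serving out) a long tail of old maps.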