[ceph-users] Mons are using a lot of disk space and has a lot of old osd maps
zakharov.a.g at yandex.ru
Mon Oct 8 08:31:25 PDT 2018
As I can see, all PGs are active+clean:
~# ceph -s
  cluster:
    health: HEALTH_WARN
            noout,nodeep-scrub flag(s) set
            mons 1,2,3,4,5 are using a lot of disk space

  services:
    mon: 5 daemons, quorum 1,2,3,4,5
    mgr: api1(active), standbys: api2
    osd: 832 osds: 791 up, 790 in

  data:
    pools:   10 pools, 52336 pgs
    objects: 47.78M objects, 238TiB
    usage:   854TiB used, 1.28PiB / 2.12PiB avail
    pgs:     52336 active+clean

  io:
    client: 929MiB/s rd, 1.16GiB/s wr, 31.85kop/s rd, 36.19kop/s wr
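A way to check whether the mons are trimming old osdmaps at all is to compare the oldest and newest committed epochs they keep; on luminous these show up in the output of "ceph report". A steadily growing gap between the two would mean old maps are piling up:

~# ceph report 2>/dev/null | grep -E '"osdmap_(first|last)_committed"'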
08.10.2018, 22:11, "Wido den Hollander" <wido at 42on.com>:
> On 10/08/2018 05:04 PM, Aleksei Zakharov wrote:
>> Hi all,
>> We've upgraded our cluster from jewel to luminous and re-created the monitors using rocksdb.
>> Now we see that the mons are using a lot of disk space, and the used space only grows. It is about 17GB for now; it was ~13GB with leveldb on the jewel release.
>> When we added new OSDs, we saw that each of them downloads a lot of data from the monitors. It was ~15GiB a few days ago and it is ~18GiB today.
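(For reference, the store size can be watched directly on the mon hosts; the path below assumes a default deployment layout:
~# du -sh /var/lib/ceph/mon/ceph-*/store.db)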
>> One of the OSDs we created uses filestore, and it looks like old osdmaps are not removed:
>> ~# find /var/lib/ceph/osd/ceph-224/current/meta/ | wc -l
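To see how many of those objects are actually full osdmaps, something like the following should work (filestore keeps them as osdmap.<epoch> objects under meta/, so the name pattern is a reasonable filter):

~# find /var/lib/ceph/osd/ceph-224/current/meta/ -name 'osdmap*' | wc -l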
>> I've tried to run manual compaction (ceph tell mon.NUM compact), but it doesn't help.
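A note on compaction: it only reclaims space for data the mons have already trimmed, so while old osdmaps are still being kept, a compact is not expected to shrink the store much. Besides the one-shot form, compaction can also be triggered at daemon start:

~# ceph tell mon.1 compact    # one-shot, mon.1 as an example

or, in the [mon] section of ceph.conf:

mon_compact_on_start = true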
>> So, how do we stop this growth of data on the monitors?
> What is the status of Ceph? Can you post the output of:
> $ ceph -s
> MONs do not trim their database if one or more PGs aren't active+clean.
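(A quick way to double-check that would be something like:
~# ceph pg dump_stuck unclean
An empty listing here, together with the all-active+clean status above, would suggest PG state is not what blocks the trimming.)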
>> Aleksei Zakharov
>> ceph-users mailing list
>> ceph-users at lists.ceph.com