[ceph-users] Constant Compaction on one mimic node

Alex Litvak alexander.v.litvak at gmail.com
Sun Mar 17 02:11:47 PDT 2019


Hello everyone,

I am getting a huge number of log messages on one out of three nodes, constantly showing "Manual compaction starting".  I see no such log entries on the other nodes in the cluster.

Mar 16 06:40:11 storage1n1-chi docker[24502]: debug 2019-03-16 06:40:11.441 7f6967af4700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.2/rpm/el7/BUILD/ceph-13.2.2/src/rocksdb/db/db_impl_compaction_flush.cc:1024] 
[default] Manual compaction starting
Mar 16 06:40:11 storage1n1-chi docker[24502]: message repeated 4 times: [ debug 2019-03-16 06:40:11.441 7f6967af4700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.2/rpm/el7/BUILD/ceph-13.2.2/src/rocksdb/db/db_impl_compaction_flush.cc:1024] 
[default] Manual compaction starting]
Mar 16 06:42:21 storage1n1-chi docker[24502]: debug 2019-03-16 06:42:21.466 7f6970305700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.2/rpm/el7/BUILD/ceph-13.2.2/src/rocksdb/db/db_impl_compaction_flush.cc:77] 
[JOB 1021] Syncing log #194307
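
To give a sense of scale, I count the messages per node with something along these lines (the log path is just where syslog lands on my CentOS 7 hosts; it may differ depending on where docker sends its output):

    # rough count of the rocksdb manual-compaction messages on this host
    grep -c 'Manual compaction starting' /var/log/messages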

I am not sure what triggers those messages on one node and not on the others.

Checking the config on all mons:

debug_leveldb             4/5                                          override
debug_memdb               4/5                                          override
debug_mgr                 0/5                                          override
debug_mgrc                0/5                                          override
debug_rocksdb             4/5                                          override
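
(Roughly the kind of check I ran on each mon; mon.storage1n1-chi stands in for each monitor's daemon id, and the admin-socket command has to run where that mon's container lives:)

    # effective value and its source (file/mon/override) for one monitor
    ceph config show mon.storage1n1-chi | grep -E 'debug_(rocksdb|leveldb|memdb)'

    # same value read through the daemon's admin socket, inside the container
    ceph daemon mon.storage1n1-chi config get debug_rocksdb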

The documentation says nothing about these compaction logs, or at least I couldn't find anything specific to my issue.
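
The only related knobs I could find are mon_compact_on_start and the manual compaction command below; I assume the latter would produce the same rocksdb line, but that still wouldn't explain why only one node logs it constantly:

    # manually compact the monitor's store (presumably logs the same
    # "Manual compaction starting" message from rocksdb)
    ceph tell mon.storage1n1-chi compact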


