[ceph-users] RocksDB and WAL migration to new block device

Igor Fedotov ifedotov at suse.de
Tue Nov 20 07:17:08 PST 2018


Hi Florian,

what's your Ceph version?

Can you also check the output of

ceph-bluestore-tool show-label --path <path to osd>


It should report a 'size' label for every volume; please check that 
they contain the new values.
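
For example, with the OSD path filled in (show-label prints the labels 
as JSON, so a jq filter roughly like this should pull out just the 
sizes; adjust it if your version formats the output differently):

ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0 | jq 'map_values(.size)'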


Thanks,

Igor


On 11/20/2018 5:29 PM, Florian Engelmann wrote:
> Hi,
>
> today we migrated all of our RocksDB and WAL devices to new ones. The 
> new ones are much bigger (500 MB each for WAL/DB -> 60 GB DB and 2 GB 
> WAL) and LVM based.
>
> We migrated like this:
>
>     export OSD=x
>
>     systemctl stop ceph-osd@$OSD
>
>     lvcreate -n db-osd$OSD -L60g data || exit 1
>     lvcreate -n wal-osd$OSD -L2g data || exit 1
>
>     dd if=/var/lib/ceph/osd/ceph-$OSD/block.wal of=/dev/data/wal-osd$OSD bs=1M || exit 1
>     dd if=/var/lib/ceph/osd/ceph-$OSD/block.db of=/dev/data/db-osd$OSD bs=1M || exit 1
>
>     rm -v /var/lib/ceph/osd/ceph-$OSD/block.db || exit 1
>     rm -v /var/lib/ceph/osd/ceph-$OSD/block.wal || exit 1
>     ln -vs /dev/data/db-osd$OSD /var/lib/ceph/osd/ceph-$OSD/block.db || exit 1
>     ln -vs /dev/data/wal-osd$OSD /var/lib/ceph/osd/ceph-$OSD/block.wal || exit 1
>
>
>     chown -c ceph:ceph $(realpath /dev/data/db-osd$OSD) || exit 1
>     chown -c ceph:ceph $(realpath /dev/data/wal-osd$OSD) || exit 1
>     chown -ch ceph:ceph /var/lib/ceph/osd/ceph-$OSD/block.db || exit 1
>     chown -ch ceph:ceph /var/lib/ceph/osd/ceph-$OSD/block.wal || exit 1
>
>
>     ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-$OSD/ || exit 1
>
>     systemctl start ceph-osd@$OSD
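>
> (As a side note, not part of the script above: a rough sanity check 
> before restarting each OSD could be to verify that the symlinks point 
> at the new LVs and that the LVs have the expected sizes, e.g.:
>
>     ls -l /var/lib/ceph/osd/ceph-$OSD/block.db /var/lib/ceph/osd/ceph-$OSD/block.wal
>     blockdev --getsize64 /dev/data/db-osd$OSD
>     blockdev --getsize64 /dev/data/wal-osd$OSD
>
> These only read device metadata, so they are safe to run at this point.)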
>
>
> Everything went fine, but it looks like the DB and WAL sizes are still 
> the old ones:
>
> ceph daemon osd.0 perf dump|jq '.bluefs'
> {
>   "gift_bytes": 0,
>   "reclaim_bytes": 0,
>   "db_total_bytes": 524279808,
>   "db_used_bytes": 330301440,
>   "wal_total_bytes": 524283904,
>   "wal_used_bytes": 69206016,
>   "slow_total_bytes": 320058949632,
>   "slow_used_bytes": 13606322176,
>   "num_files": 220,
>   "log_bytes": 44204032,
>   "log_compactions": 0,
>   "logged_bytes": 31145984,
>   "files_written_wal": 1,
>   "files_written_sst": 1,
>   "bytes_written_wal": 37753489,
>   "bytes_written_sst": 238992
> }
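>
> (For readability: converting the totals above to GiB, e.g. with a jq 
> filter roughly like this:
>
>     ceph daemon osd.0 perf dump | jq '.bluefs | {db_total_gib: (.db_total_bytes / 1073741824), wal_total_gib: (.wal_total_bytes / 1073741824)}'
>
> gives about 0.49 GiB for both, i.e. still the old ~500 MB volumes 
> rather than 60 GB and 2 GB.)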
>
>
> Even though the new block devices are recognized correctly:
>
> 2018-11-20 11:40:34.653524 7f70219b8d00  1 bdev(0x5647ea9ce200 /var/lib/ceph/osd/ceph-0/block.db) open size 64424509440 (0xf00000000, 60GiB) block_size 4096 (4KiB) non-rotational
> 2018-11-20 11:40:34.653532 7f70219b8d00  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block.db size 60GiB
>
>
> 2018-11-20 11:40:34.662385 7f70219b8d00  1 bdev(0x5647ea9ce600 /var/lib/ceph/osd/ceph-0/block.wal) open size 2147483648 (0x80000000, 2GiB) block_size 4096 (4KiB) non-rotational
> 2018-11-20 11:40:34.662406 7f70219b8d00  1 bluefs add_block_device bdev 0 path /var/lib/ceph/osd/ceph-0/block.wal size 2GiB
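>
> (For reference, a grep roughly like the following should pull these 
> bdev/bluefs lines out of the OSD log, assuming the default log path 
> /var/log/ceph/ceph-osd.0.log:
>
>     grep -E 'bdev.*open size|bluefs add_block_device' /var/log/ceph/ceph-osd.0.log
>
> The pattern matches both the "open size" and the "add_block_device" 
> messages shown above.)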
>
>
> Are we missing some command to "notify" RocksDB about the new device 
> size?
>
> All the best,
> Florian
>
>
