[ceph-users] how to upgrade CEPH journal?

Alwin Antreich a.antreich at proxmox.com
Thu Nov 9 08:02:15 PST 2017


Hi Rudi,
On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> Hi,
>
> Can someone please tell me what the correct procedure is to upgrade a CEPH
> journal?
>
> I'm running Ceph 12.2.1 on Proxmox 5.1, which runs on Debian 9.1
>
> For a journal I have a 400GB Intel SSD drive, but it seems Ceph created
> only a 1GB journal:
>
> Disk /dev/sdf: 372.6 GiB, 400088457216 bytes, 781422768 sectors
> /dev/sdf1     2048 2099199 2097152   1G unknown
> /dev/sdf2  2099200 4196351 2097152   1G unknown
>
> root@virt2:~# fdisk -l | grep sde
> Disk /dev/sde: 372.6 GiB, 400088457216 bytes, 781422768 sectors
> /dev/sde1   2048 2099199 2097152   1G unknown
>
>
> /dev/sda :
>  /dev/sda1 ceph data, active, cluster ceph, osd.3, block /dev/sda2,
> block.db /dev/sde1
>  /dev/sda2 ceph block, for /dev/sda1
> /dev/sdb :
>  /dev/sdb1 ceph data, active, cluster ceph, osd.4, block /dev/sdb2,
> block.db /dev/sdf1
>  /dev/sdb2 ceph block, for /dev/sdb1
> /dev/sdc :
>  /dev/sdc1 ceph data, active, cluster ceph, osd.5, block /dev/sdc2,
> block.db /dev/sdf2
>  /dev/sdc2 ceph block, for /dev/sdc1
> /dev/sdd :
>  /dev/sdd1 other, xfs, mounted on /data/brick1
>  /dev/sdd2 other, xfs, mounted on /data/brick2
> /dev/sde :
>  /dev/sde1 ceph block.db, for /dev/sda1
> /dev/sdf :
>  /dev/sdf1 ceph block.db, for /dev/sdb1
>  /dev/sdf2 ceph block.db, for /dev/sdc1
> /dev/sdg :
>
>
> Resizing the partition through fdisk didn't work. What is the correct
> procedure, please?
>
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za

With Bluestore there is no Filestore-style journal anymore; the block.db
and WAL partitions take over that role, and the 1G partitions you see are
the ceph-disk defaults. To get a bigger partition for the DB you need to
set bluestore_block_db_size, and bluestore_block_wal_size for the WAL.
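
For example, in ceph.conf (on Proxmox that is /etc/pve/ceph.conf). Both
values are in bytes; the 30 GiB / 2 GiB below are only illustrative, pick
sizes that suit your workload:

[global]
bluestore_block_db_size = 32212254720    # 30 GiB, example value
bluestore_block_wal_size = 2147483648    # 2 GiB, example value

ceph-disk picks these up when preparing the OSD: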

ceph-disk prepare --bluestore \
--block.db /dev/sde --block.wal /dev/sde /dev/sdX

This gives you four partitions in total on two different disks: data and
block on /dev/sdX, block.db and block.wal on /dev/sde.
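
You can check the result afterwards with ceph-disk (same device names as
above assumed):

ceph-disk list /dev/sdX /dev/sde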

As the existing partitions can't simply be grown with fdisk, I think it
will be less hassle to remove the OSD and prepare it again, as sketched
below.
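
A rough sketch, taking osd.3 with data on /dev/sda and its block.db on
/dev/sde1 from your listing as the example. Adjust the ID and devices to
your setup, and wait for the cluster to be healthy again after the "out"
before destroying anything:

ceph osd out 3                            # let data drain off the OSD first
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it   # crush + auth + osd rm in one step (Luminous)
ceph-disk zap /dev/sda                    # wipe the data disk
sgdisk --delete=1 /dev/sde                # drop the old 1G block.db partition
ceph-disk prepare --bluestore \
--block.db /dev/sde --block.wal /dev/sde /dev/sda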

--
Cheers,
Alwin


