[ceph-users] Journal / WAL drive size?

Rudi Ahlers rudiahlers at gmail.com
Thu Nov 23 08:13:30 PST 2017


Hi Caspar,

Thanks. I don't see any mention that it's a bad idea to have the WAL and DB
on the same SSD, but I'd guess it could even improve performance?

I do understand that if I lose the WAL drive, I lose the OSDs as
well.


From what I gather, I can't move the WAL or DB devices later and would have
to completely rebuild the OSD storage. So, unless there's a compelling
reason not to, I'll keep those two on the same drive.
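As a quick back-of-the-envelope check on Caspar's suggestion below: the
16106127360 value is just 15 GiB expressed in bytes (bluestore_block_db_size
takes bytes). A small sketch in plain Python (the GiB constant and the 372.6
figure are taken straight from the lsblk output further down, nothing
Ceph-specific here):

```python
# Sanity-check the block.db sizing from the quoted mail.
GIB = 1024 ** 3  # bluestore_block_db_size is specified in bytes

db_size_bytes = 15 * GIB
print(db_size_bytes)  # 16106127360, the value suggested for ceph.conf

# Each ~372.6GiB SSD backs two OSDs in my layout, so two 15GiB
# block.db partitions use only a small fraction of the SSD:
ssd_bytes = int(372.6 * GIB)
used_fraction = 2 * db_size_bytes / ssd_bytes
print(f"{100 * used_fraction:.1f}% of the SSD used")
```

So even a much larger block.db per OSD would comfortably fit on these SSDs.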




On Thu, Nov 23, 2017 at 11:53 AM, Caspar Smit <casparsmit at supernas.eu>
wrote:

> Rudi,
>
> First of all, do not deploy an OSD specifying the same separate device for
> DB and WAL:
>
> Please read the following to see why:
>
> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
>
>
> That said, you have a fairly large amount of SSD space available, so I
> recommend using it as block.db:
>
> You can specify a fixed block.db size in ceph.conf using:
>
> [global]
> bluestore_block_db_size = 16106127360
>
> The above sets a 15GB block.db size.
>
> Now when you deploy an OSD with a separate block.db device, the partition
> will be 15GB.
>
> The default size is a percentage of the device, I believe, and not always a
> usable amount.
>
> Caspar
>
> Met vriendelijke groet,
>
> Caspar Smit
> Systemengineer
> SuperNAS
> Dorsvlegelstraat 13
> 1445 PA Purmerend
>
> t: (+31) 299 410 414
> e: casparsmit at supernas.eu
> w: www.supernas.eu
>
> 2017-11-23 10:27 GMT+01:00 Rudi Ahlers <rudiahlers at gmail.com>:
>
>> Hi,
>>
>> Can someone please explain this to me in layman's terms. How big a WAL
>> drive do I really need?
>>
>> I have 2x 400GB SSD drives used as WAL / DB drives and 4x 8TB HDDs used
>> as OSDs. When I look at the drive partitions, the DB / WAL partitions are
>> only 576MB & 1GB respectively. This feels a bit small.
>>
>>
>> root at virt1:~# lsblk
>> NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>> sda                  8:0    0   7.3T  0 disk
>> ├─sda1               8:1    0   100M  0 part /var/lib/ceph/osd/ceph-0
>> └─sda2               8:2    0   7.3T  0 part
>> sdb                  8:16   0   7.3T  0 disk
>> ├─sdb1               8:17   0   100M  0 part /var/lib/ceph/osd/ceph-1
>> └─sdb2               8:18   0   7.3T  0 part
>> sdc                  8:32   0   7.3T  0 disk
>> ├─sdc1               8:33   0   100M  0 part /var/lib/ceph/osd/ceph-2
>> └─sdc2               8:34   0   7.3T  0 part
>> sdd                  8:48   0   7.3T  0 disk
>> ├─sdd1               8:49   0   100M  0 part /var/lib/ceph/osd/ceph-3
>> └─sdd2               8:50   0   7.3T  0 part
>> sde                  8:64   0 372.6G  0 disk
>> ├─sde1               8:65   0     1G  0 part
>> ├─sde2               8:66   0   576M  0 part
>> ├─sde3               8:67   0     1G  0 part
>> └─sde4               8:68   0   576M  0 part
>> sdf                  8:80   0 372.6G  0 disk
>> ├─sdf1               8:81   0     1G  0 part
>> ├─sdf2               8:82   0   576M  0 part
>> ├─sdf3               8:83   0     1G  0 part
>> └─sdf4               8:84   0   576M  0 part
>> sdg                  8:96   0   118G  0 disk
>> ├─sdg1               8:97   0     1M  0 part
>> ├─sdg2               8:98   0   256M  0 part /boot/efi
>> └─sdg3               8:99   0 117.8G  0 part
>>   ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
>>   ├─pve-root       253:1    0  29.3G  0 lvm  /
>>   ├─pve-data_tmeta 253:2    0    68M  0 lvm
>>   │ └─pve-data     253:4    0  65.9G  0 lvm
>>   └─pve-data_tdata 253:3    0  65.9G  0 lvm
>>     └─pve-data     253:4    0  65.9G  0 lvm
>>
>>
>>
>>
>> --
>> Kind Regards
>> Rudi Ahlers
>> Website: http://www.rudiahlers.co.za
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
>


-- 
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za