[ceph-users] Moving bluestore WAL and DB after bluestore creation

Shawn Edwards lesser.evil at gmail.com
Wed Nov 15 11:46:48 PST 2017


On Wed, Nov 15, 2017, 11:07 David Turner <drakonstein at gmail.com> wrote:

> I'm not going to lie.  This makes me dislike Bluestore quite a bit.  Putting
> the journals of multiple OSDs on one SSD let you monitor the SSD's write
> endurance and replace it without having to out and re-add all of the OSDs
> using that device.  Having to out the OSDs and backfill back onto the HDDs
> instead is awful; it would have made the time I realized that 20 journal
> SSDs had all run low on write endurance at once nearly impossible to
> recover from.
>
> Flushing journals, replacing SSDs, and bringing it all back online was a
> slick process.  Formatting the HDDs and backfilling back onto the same
> disks sounds like a big regression.  A process to migrate the WAL and DB
> onto the HDD and then back off to a new device would be very helpful.
>
> On Wed, Nov 15, 2017 at 10:51 AM Mario Giammarco <mgiammarco at gmail.com>
> wrote:
>
>> It seems it is not possible, so I recreated the OSD.
>>
>> 2017-11-12 17:44 GMT+01:00 Shawn Edwards <lesser.evil at gmail.com>:
>>
>>> I've created some Bluestore OSDs with everything (WAL, DB, and data) on
>>> the same rotating disk.  I would like to now move the WAL and DB onto an
>>> NVMe disk.  Is that possible without re-creating the OSDs?
>>>
This.  Exactly this.  Not being able to move the .db and .wal data on and
off the main storage disk on Bluestore is a regression.
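For anyone who hasn't been through it, the FileStore-era SSD swap David
describes went roughly like this (just a sketch; the OSD id and device paths
are placeholders):

    ceph osd set noout                   # keep the cluster from marking OSDs out while we work
    systemctl stop ceph-osd@12
    ceph-osd -i 12 --flush-journal       # flush pending journal entries to the data disk
    # ...physically replace the SSD, recreate the journal partition/symlink...
    ceph-osd -i 12 --mkjournal           # initialize a fresh journal on the new SSD
    systemctl start ceph-osd@12
    ceph osd unset noout

No data ever left the HDD in that workflow, which is exactly what we've lost
for the BlueStore .db/.wal case.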
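By contrast, about the only thing you can do on these all-in-one BlueStore
OSDs today is confirm where the DB and WAL actually live (again just a
sketch; the OSD id and device are examples, and show-label depends on your
ceph-bluestore-tool build):

    ceph osd metadata 12 | grep -E 'bluefs|bdev'       # reports the db/wal/slow device nodes
    ceph-bluestore-tool show-label --dev /dev/sdb2     # prints the BlueStore label on a device

...and then out the OSD and rebuild it, which is the regression everyone is
complaining about.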



