[ceph-users] Migration from filestore to bluestore

Iban Cabrillo cabrillo at ifca.unican.es
Mon Nov 20 06:55:37 PST 2017


Hi,
  Looking at the ceph-disk list output for this node (run via ceph-deploy):

[cephosd02][INFO  ] Running command: /usr/sbin/ceph-disk list
[cephosd02][DEBUG ] /dev/sda :
[cephosd02][DEBUG ]  /dev/sda1 ceph data, prepared, cluster ceph, block /dev/sda2
[cephosd02][DEBUG ]  /dev/sda2 ceph block, for /dev/sda1
[cephosd02][DEBUG ] /dev/sdb :
[cephosd02][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.2, journal /dev/sdn1

......
[cephosd02][DEBUG ]  /dev/sdm1 ceph journal

/dev/sdm1 should be the journal for the old sda disk (old osd.0).

Would running this solve the issue?
ceph-disk prepare --bluestore --osd-id 0 /dev/sda /dev/sdm1
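
For context, this is the full sequence I had in mind for replacing the FileStore
OSD with a BlueStore one on luminous. Treat it as a rough sketch; in particular,
pointing --block.db at /dev/sdm is my own assumption about where the RocksDB
partition should end up:

# stop the old FileStore OSD and mark it destroyed (its id stays in the CRUSH map)
systemctl stop ceph-osd@0
ceph osd destroy 0 --yes-i-really-mean-it

# wipe the data disk completely so nothing from the old FileStore OSD is left behind
ceph-disk zap /dev/sda

# recreate the OSD as BlueStore, reusing osd id 0 and putting the DB on the SSD
ceph-disk prepare --bluestore --osd-id 0 --block.db /dev/sdm /dev/sda
ceph-disk activate /dev/sda1

The idea is that ceph osd destroy keeps osd.0's id in the CRUSH map, so preparing
with --osd-id 0 should put the new BlueStore OSD back in the same position.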

Regards, I







2017-11-20 14:34 GMT+01:00 Iban Cabrillo <cabrillo at ifca.unican.es>:

> Hi Wido,
>   The disk was empty; I checked that there were no remapped PGs before
> running ceph-disk prepare. Should I re-run ceph-disk again?
>
> Regards, i
>
> On Mon, 20 Nov 2017 at 14:12, Wido den Hollander <wido at 42on.com> wrote:
>
>>
>> > On 20 November 2017 at 14:02, Iban Cabrillo <cabrillo at ifca.unican.es> wrote:
>> >
>> >
>> > Hi cephers,
>> >   I was trying to migrate from FileStore to BlueStore following the
>> > instructions, but after running ceph-disk prepare the new OSD has not
>> > joined the cluster again:
>> >
>> >    [root at cephadm ~]# ceph osd tree
>> > ID CLASS WEIGHT   TYPE NAME                STATUS    REWEIGHT PRI-AFF
>> > -1       58.21509 root default
>> > -7       58.21509     datacenter 10GbpsNet
>> > -2       29.12000         host cephosd01
>> >  1   hdd  3.64000             osd.1               up  1.00000 1.00000
>> >  3   hdd  3.64000             osd.3               up  1.00000 1.00000
>> >  5   hdd  3.64000             osd.5               up  1.00000 1.00000
>> >  7   hdd  3.64000             osd.7               up  1.00000 1.00000
>> >  9   hdd  3.64000             osd.9               up  1.00000 1.00000
>> > 11   hdd  3.64000             osd.11              up  1.00000 1.00000
>> > 13   hdd  3.64000             osd.13              up  1.00000 1.00000
>> > 15   hdd  3.64000             osd.15              up  1.00000 1.00000
>> > -3       29.09509         host cephosd02
>> >  0   hdd  3.63689             osd.0        destroyed        0 1.00000
>> >  2   hdd  3.63689             osd.2               up  1.00000 1.00000
>> >  4   hdd  3.63689             osd.4               up  1.00000 1.00000
>> >  6   hdd  3.63689             osd.6               up  1.00000 1.00000
>> >  8   hdd  3.63689             osd.8               up  1.00000 1.00000
>> > 10   hdd  3.63689             osd.10              up  1.00000 1.00000
>> > 12   hdd  3.63689             osd.12              up  1.00000 1.00000
>> > 14   hdd  3.63689             osd.14              up  1.00000 1.00000
>> > -8              0     datacenter 1GbpsNet
>> >
>> >
>> > The state is still destroyed.
>> >
>> > The operation has completed successfully.
>> > [root at cephosd02 ~]# ceph-disk prepare --bluestore /dev/sda --osd-id 0
>> > The operation has completed successfully.
>>
>> Did you wipe the disk yet? Make sure it's completely empty before you
>> re-create the OSD.
>>
>> Wido
>>
>> > The operation has completed successfully.
>> > The operation has completed successfully.
>> > meta-data=/dev/sda1              isize=2048   agcount=4, agsize=6400
>> blks
>> >          =                       sectsz=512   attr=2, projid32bit=1
>> >          =                       crc=1        finobt=0, sparse=0
>> > data     =                       bsize=4096   blocks=25600, imaxpct=25
>> >          =                       sunit=0      swidth=0 blks
>> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>> > log      =internal log           bsize=4096   blocks=864, version=2
>> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
>> > realtime =none                   extsz=4096   blocks=0, rtextents=0
>> >
>> > The metadata was on the SSD disk.
>> >
>> > In the logs I only see this:
>> >
>> > 2017-11-20 14:00:48.536252 7fc2d149dd00 -1  ** ERROR: unable to open OSD
>> > superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
>> > 2017-11-20 14:01:08.788158 7f4a9165fd00  0 set uid:gid to 167:167
>> > (ceph:ceph)
>> > 2017-11-20 14:01:08.788179 7f4a9165fd00  0 ceph version 12.2.0
>> > (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc), process
>> > (unknown), pid 115029
>> >
>> > Any advice?
>> >
>> > Regards, I
>> >
>> > --
>> > ############################################################
>> ################
>> > Iban Cabrillo Bartolome
>> > Instituto de Fisica de Cantabria (IFCA)
>> > Santander, Spain
>> > Tel: +34942200969
>> > PGP PUBLIC KEY:
>> > http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
>> > ############################################################
>> ################
>> > Bertrand Russell: "The problem with the world is that the stupid are
>> > sure of everything and the intelligent are full of doubt."
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users at lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>


-- 
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell: "The problem with the world is that the stupid are sure of
everything and the intelligent are full of doubt."