[ceph-users] Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap

Cary dynamic.cary at gmail.com
Mon Nov 27 13:44:16 PST 2017


Hello,

 Could someone please help me complete my botched upgrade from Jewel
10.2.3-r1 to Luminous 12.2.1? I have 9 Gentoo servers, 4 of which have 2
OSDs each.

 My OSD servers were accidentally rebooted before the monitor servers, so
the OSDs were running Luminous before the monitors had been upgraded. All
services have since been restarted, and "ceph versions" gives the following:

# ceph versions
2017-11-27 21:27:24.356940 7fed67efe700 -1 WARNING: the following dangerous
and experimental features are enabled: btrfs
2017-11-27 21:27:24.368469 7fed67efe700 -1 WARNING: the following dangerous
and experimental features are enabled: btrfs
{
    "mon": {
        "ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e)
luminous (stable)": 4
    },
    "mgr": {
        "ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e)
luminous (stable)": 3
    },
    "osd": {},
    "mds": {
        "ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e)
luminous (stable)": 1
    },
    "overall": {
        "ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e)
luminous (stable)": 8
    }
}

For some reason the OSDs do not report which version they are running, and
"ceph osd tree" shows all of the OSDs as down.

 # ceph osd tree
2017-11-27 21:32:51.969335 7f483d9c2700 -1 WARNING: the following dangerous
and experimental features are enabled: btrfs
2017-11-27 21:32:51.980976 7f483d9c2700 -1 WARNING: the following dangerous
and experimental features are enabled: btrfs
ID CLASS WEIGHT   TYPE NAME              STATUS REWEIGHT PRI-AFF
-1       27.77998 root default
-3       27.77998     datacenter DC1
-6       27.77998         rack 1B06
-5        6.48000             host ceph3
 1        1.84000                 osd.1    down        0 1.00000
 3        4.64000                 osd.3    down        0 1.00000
-2        5.53999             host ceph4
 5        4.64000                 osd.5    down        0 1.00000
 8        0.89999                 osd.8    down        0 1.00000
-4        9.28000             host ceph6
 0        4.64000                 osd.0    down        0 1.00000
 2        4.64000                 osd.2    down        0 1.00000
-7        6.48000             host ceph7
 6        4.64000                 osd.6    down        0 1.00000
 7        1.84000                 osd.7    down        0 1.00000
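
If it is useful, I believe the binary version each OSD host is actually
running can be checked locally, either from the package itself or through
the daemon's admin socket (assuming the socket is in its default location):

# ceph-osd --version
# ceph daemon osd.1 version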

The OSD logs all have this message:

20235 osdmap REQUIRE_JEWEL OSDMap flag is NOT set; please set it
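
I assume the flags currently set in the OSDMap can be checked from a
monitor node with something like:

# ceph osd dump | grep flags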

When I try to set it with "ceph osd set require_jewel_osds", I get this
error:

Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_JEWEL feature
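
Since the error refers to "up OSDs", I assume the up/in counts the
monitors currently see can be checked with:

# ceph osd stat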

A "ceph features" returns:

    "mon": {
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 4
        }
    },
    "mds": {
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 1
        }
    },
    "osd": {
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 8
        }
    },
    "client": {
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 3

Is there any way I can get these OSDs to join the cluster now, or recover
my data?

Cary