[ceph-users] Ceph stuck creating pool

Guilherme Lima guilherme.lima at farfetch.com
Tue Oct 3 08:11:58 PDT 2017


Here it is,



size: 3
min_size: 2
crush_rule: replicated_rule



[
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
]
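
If it helps, the rule can also be tested offline with crushtool to confirm it can map 3 replicas across the 3 hosts (the file name below is just an example):

 ceph osd getcrushmap -o crushmap.bin
 crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings | head

Each mapping line should list 3 distinct OSDs; empty or short mappings would point at the CRUSH map rather than at peering.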





Thanks

Guilherme





From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf Of Webert de Souza Lima
Sent: Tuesday, October 3, 2017 15:47
To: ceph-users <ceph-users at lists.ceph.com>
Subject: Re: [ceph-users] Ceph stuck creating pool



This looks like something wrong with the crush rule.



What's the size, min_size and crush_rule of this pool?

 ceph osd pool get POOLNAME size

 ceph osd pool get POOLNAME min_size

 ceph osd pool get POOLNAME crush_rule



What does the crush rule look like?

 ceph osd crush rule dump
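
If those look sane, a few more things that might help narrow it down (<pgid> below is a placeholder for one of the stuck PGs):

 ceph health detail
 ceph pg dump_stuck inactive
 ceph pg <pgid> query
 ceph osd df tree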




Regards,



Webert Lima

DevOps Engineer at MAV Tecnologia

Belo Horizonte - Brasil



On Tue, Oct 3, 2017 at 11:22 AM, Guilherme Lima <guilherme.lima at farfetch.com>
wrote:

Hi,



I have installed a virtual Ceph cluster lab, running Ceph Luminous v12.2.1.

It consists of 3 mon + 3 osd nodes.

Each node has 3 x 250 GB OSDs.



My osd tree:



ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       2.19589 root default
-3       0.73196     host osd1
 0   hdd 0.24399         osd.0      up  1.00000 1.00000
 6   hdd 0.24399         osd.6      up  1.00000 1.00000
 9   hdd 0.24399         osd.9      up  1.00000 1.00000
-5       0.73196     host osd2
 1   hdd 0.24399         osd.1      up  1.00000 1.00000
 7   hdd 0.24399         osd.7      up  1.00000 1.00000
10   hdd 0.24399         osd.10     up  1.00000 1.00000
-7       0.73196     host osd3
 2   hdd 0.24399         osd.2      up  1.00000 1.00000
 8   hdd 0.24399         osd.8      up  1.00000 1.00000
11   hdd 0.24399         osd.11     up  1.00000 1.00000



After creating a new pool, its PGs are stuck in creating+peering and
creating+activating.
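
The pool is a plain replicated pool, created roughly like this (the pool name is a placeholder; 256 matches the PG count shown in the status below):

 ceph osd pool create testpool 256 256 replicated

With 256 PGs, size 3 and 9 OSDs, that is roughly 256 * 3 / 9 ≈ 85 PGs per OSD, which is within the usual guidance of around 100 per OSD, so the PG count itself should not be the blocker.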



  cluster:
    id:     d20fdc12-f8bf-45c1-a276-c36dfcc788bc
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive, 143 pgs peering
            Degraded data redundancy: 256 pgs unclean

  services:
    mon: 3 daemons, quorum mon2,mon3,mon1
    mgr: mon1(active), standbys: mon2, mon3
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   1 pools, 256 pgs
    objects: 0 objects, 0 bytes
    usage:   10202 MB used, 2239 GB / 2249 GB avail
    pgs:     100.000% pgs not active
             143 creating+peering
             113 creating+activating



Can anyone help to find the issue?



Thanks

Guilherme


