[ceph-users] librbd::image::CreateRequest: 0x55e4fc8bf620 handle_create_id_object: error creating RBD id object

Dengke Du dengke.du at windriver.com
Tue Nov 6 00:26:22 PST 2018


On 2018/11/6 4:29 PM, Ashley Merrick wrote:
> What does
>
> "ceph osd tree" show ?
root@node1:~# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-2             0 host 0
-1       1.00000 root default
-3       1.00000     host node1
  0   hdd 1.00000         osd.0    down        0 1.00000
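
The tree shows osd.0 down with REWEIGHT 0, i.e. the OSD is both down and out. For reference, a minimal sketch of bringing it back (assuming the daemon itself starts cleanly) would be:

    systemctl restart ceph-osd@0   # restart the OSD daemon
    ceph osd in 0                  # mark osd.0 "in" again so it can take data
    ceph osd tree                  # re-check: STATUS should now read "up"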
>
> On Tue, Nov 6, 2018 at 4:27 PM Dengke Du <dengke.du at windriver.com> wrote:
>
>
>     On 2018/11/6 4:24 PM, Ashley Merrick wrote:
>>     If I am reading your ceph -s output correctly you only have 1
>>     OSD, and 0 pools created.
>>
>>     So you'll be unable to create an RBD until you at least have a pool
>>     set up and configured to create the RBD within.
>     root@node1:~# ceph osd lspools
>     1 libvirt-pool
>     2 test-pool
>
>
>     I created the pools using:
>
>     ceph osd pool create libvirt-pool 128 128
>
>     following:
>
>     http://docs.ceph.com/docs/master/rbd/libvirt/
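
For reference, on recent Ceph releases a freshly created pool also has to be tagged/initialized for RBD before images can be created in it; a minimal sketch using the pool name above:

    ceph osd pool application enable libvirt-pool rbd   # tag the pool for the rbd application
    rbd pool init libvirt-pool                          # initialize the pool for RBD use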
>
>>
>>     On Tue, Nov 6, 2018 at 4:21 PM Dengke Du <dengke.du at windriver.com> wrote:
>>
>>
>>         On 2018/11/6 4:16 PM, Mykola Golub wrote:
>>         > On Tue, Nov 06, 2018 at 09:45:01AM +0800, Dengke Du wrote:
>>         >
>>         >> I reconfigured the osd service from the start, the journal was:
>>         > I am not quite sure I understand what you mean here.
>>         >
>>         >>
>>         ------------------------------------------------------------------------------------------------------------------------------------------
>>         >>
>>         >> -- Unit ceph-osd@0.service has finished starting up.
>>         >> --
>>         >> -- The start-up result is RESULT.
>>         >> Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05
>>         18:02:36.915 7f6a27204e80
>>         >> -1 Public network was set, but cluster network was not set
>>         >> Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05
>>         18:02:36.915 7f6a27204e80
>>         >> -1     Using public network also for cluster network
>>         >> Nov 05 18:02:36 node1 ceph-osd[4487]: starting osd.0 at -
>>         osd_data
>>         >> /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
>>         >> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05
>>         18:02:37.365 7f6a27204e80
>>         >> -1 journal FileJournal::_open: disabling aio for non-block
>>         journal.  Use
>>         >> journal_force_aio to force use of a>
>>         >> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05
>>         18:02:37.414 7f6a27204e80
>>         >> -1 journal do_read_entry(6930432): bad header magic
>>         >> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05
>>         18:02:37.729 7f6a27204e80
>>         >> -1 osd.0 21 log_to_monitors {default=true}
>>         >> Nov 05 18:02:47 node1 nagios[3584]: Warning: Return code
>>         of 13 for check of
>>         >> host 'localhost' was out of bounds.
>>         >>
>>         >>
>>         ------------------------------------------------------------------------------------------------------------------------------------------
>>         > Could you please post the full ceph-osd log somewhere?
>>         /var/log/ceph/ceph-osd.0.log
>>
>>         I don't have the file /var/log/ceph/ceph-osd.0.log
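
If the default log file is not there, the running daemon can be asked where it is actually logging; a minimal sketch (assuming the admin socket is available):

    ceph daemon osd.0 config get log_file                 # ask the running OSD for its log_file setting
    ceph-conf --name osd.0 --show-config-value log_file   # or read the configured/default value
    journalctl -u ceph-osd@0 --no-pager                   # the systemd journal also captures daemon output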
>>
>>         root@node1:~# systemctl status ceph-osd@0
>>         ● ceph-osd@0.service - Ceph object storage daemon osd.0
>>             Loaded: loaded (/lib/systemd/system/ceph-osd@.service;
>>         disabled; vendor preset: enabled)
>>             Active: active (running) since Mon 2018-11-05 18:02:36
>>         UTC; 6h ago
>>           Main PID: 4487 (ceph-osd)
>>              Tasks: 64
>>             Memory: 27.0M
>>             CGroup:
>>         /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
>>                     └─4487 /usr/bin/ceph-osd -f --cluster ceph --id 0
>>
>>         Nov 05 18:02:36 node1 systemd[1]: Starting Ceph object
>>         storage daemon
>>         osd.0...
>>         Nov 05 18:02:36 node1 systemd[1]: Started Ceph object storage
>>         daemon osd.0.
>>         Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05 18:02:36.915
>>         7f6a27204e80 -1 Public network was set, but cluster network
>>         was not set
>>         Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05 18:02:36.915
>>         7f6a27204e80 -1     Using public network also for cluster network
>>         Nov 05 18:02:36 node1 ceph-osd[4487]: starting osd.0 at -
>>         osd_data
>>         /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
>>         Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.365
>>         7f6a27204e80 -1 journal FileJournal::_open: disabling aio for
>>         non-block
>>         journal.  Use journal_force_aio to force use of a>
>>         Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.414
>>         7f6a27204e80 -1 journal do_read_entry(6930432): bad header magic
>>         Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.729
>>         7f6a27204e80 -1 osd.0 21 log_to_monitors {default=true}
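
The "Public network was set, but cluster network was not set" lines are only warnings; if they should go away, a minimal ceph.conf sketch (the subnet below is a placeholder, not taken from this cluster) would be:

    [global]
    public network  = 10.0.0.0/24   # placeholder: use the real public subnet
    cluster network = 10.0.0.0/24   # set explicitly to reuse the public network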
>>
>>         >
>>         >> but it hangs at the command: "rbd create libvirt-pool/dimage
>>         --size 10240"
>>         > So it hangs forever now instead of returning the error?
>>         It doesn't return any error, it just hangs.
>>         > What is `ceph -s` output?
>>         root@node1:~# ceph -s
>>            cluster:
>>              id:     9c1a42e1-afc2-4170-8172-96f4ebdaac68
>>              health: HEALTH_WARN
>>                      no active mgr
>>
>>            services:
>>              mon: 1 daemons, quorum 0
>>              mgr: no daemons active
>>              osd: 1 osds: 0 up, 0 in
>>
>>            data:
>>              pools:   0 pools, 0 pgs
>>              objects: 0  objects, 0 B
>>              usage:   0 B used, 0 B / 0 B avail
>>              pgs:
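
This status points at the real problem: with no active mgr and osd.0 reported as 0 up / 0 in, client I/O has nowhere to go, so "rbd create" simply blocks waiting for PGs to become active. For reference, a minimal manual-deployment-style sketch for bringing up a mgr (node name node1 assumed from the prompt) would be:

    mkdir -p /var/lib/ceph/mgr/ceph-node1
    ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        -o /var/lib/ceph/mgr/ceph-node1/keyring
    chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node1
    systemctl enable --now ceph-mgr@node1

The OSD still has to come back up and in (see the osd tree output above) before the rbd command can complete.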
>>
>>
>>         >
>>         _______________________________________________
>>         ceph-users mailing list
>>         ceph-users at lists.ceph.com
>>         http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>