[ceph-users] Could ceph-disk run as expected in docker

Dai Xiang xiang.dai at sky-data.cn
Thu Nov 23 00:48:07 PST 2017


I want to do some testing with Ceph inside containers, but after
installing Ceph I find that no OSDs were set up successfully:
[root@f3c3a6580a0f ~]$ ceph -s
  cluster:
    id:     20f00ca7-0de9-449e-a996-3d53aec4ed16
    health: HEALTH_WARN
            Reduced data availability: 128 pgs inactive
            Degraded data redundancy: 128 pgs unclean
 
  services:
    mon: 2 daemons, quorum f3c3a6580a0f,2fca4942f691
    mgr: f3c3a6580a0f(active), standbys: 2fca4942f691
    mds: cephfs-1/1/1 up  {0=2fca4942f691=up:creating}, 1 up:standby
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   2 pools, 128 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:     100.000% pgs unknown
             128 unknown

The only error I found in the install log is the one below:
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5704, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5657, in main
    main_catch(args.func, args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5682, in main_catch
    func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4617, in main_list
    main_list_protected(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4621, in main_list_protected
    devices = list_devices()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4525, in list_devices
    partmap = list_all_partitions()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 778, in list_all_partitions
    dev_part_list[name] = list_partitions(get_dev_path(name))
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 791, in list_partitions
    if is_mpath(dev):
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 626, in is_mpath
    uuid = get_dm_uuid(dev)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 611, in get_dm_uuid
    uuid_path = os.path.join(block_path(dev), 'dm', 'uuid')
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 605, in block_path
    rdev = os.stat(path).st_rdev
OSError: [Errno 2] No such file or directory: '/dev/dm-6'
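
As far as I can tell, the crash happens because a dm-* entry is visible
in /sys/block inside the container while the matching /dev node is
missing. Here is a minimal diagnostic sketch (my own standalone helper,
not part of ceph-disk) that lists such mismatches:

    # list block devices that appear in /sys/block but have no /dev node
    # inside the container (diagnostic helper, not part of ceph-disk)
    import os

    for name in sorted(os.listdir('/sys/block')):
        dev = os.path.join('/dev', name)
        if not os.path.exists(dev):
            print('%s is in /sys/block but %s is missing' % (name, dev))

Inside the failing container I would expect this to print the dm-*
devices that ceph-disk trips over.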

But the following message confused me:
[ceph_deploy.osd][DEBUG ] Host 172.17.0.4 is now ready for osd use.

On 172.17.0.4, `ceph-disk list` shows the error above as well.

Since the containers are started in order, I can run `ceph-disk list`
in the last one without hitting the error above.

I know that ceph-deploy calls ceph-disk, but the error above does not
abort the whole install process, and in Docker the dm-type devices
belong to the containers themselves. Is there any way to skip the dm
devices so that the OSD installation actually succeeds? One workaround
I can think of is sketched below.
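
For lack of an official option, the best I can come up with is a local
guard around get_dm_uuid so that dm nodes missing inside the container
are treated as "no device-mapper uuid" instead of crashing ceph-disk.
A rough sketch (my own wrapper; the empty-string return value is an
assumption, not verified against the ceph-disk source):

    # skip-missing-dm.py -- hypothetical local wrapper around ceph-disk,
    # not an official option; untested, use at your own risk.
    import os

    from ceph_disk import main as ceph_disk_main

    _orig_get_dm_uuid = ceph_disk_main.get_dm_uuid

    def get_dm_uuid_safe(dev):
        # If a dm node listed in /sys/block has no /dev entry inside the
        # container, report an empty uuid (assumed to make is_mpath()
        # return False) instead of letting os.stat() raise ENOENT.
        if not os.path.exists(dev):
            return ''
        return _orig_get_dm_uuid(dev)

    ceph_disk_main.get_dm_uuid = get_dm_uuid_safe

    if __name__ == '__main__':
        # same entry point that the /usr/sbin/ceph-disk script calls
        ceph_disk_main.run()

Then `python skip-missing-dm.py list` could be tried in place of
`ceph-disk list`; I have not checked whether this is safe for the
prepare/activate subcommands.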

-- 
Best Regards
Dai Xiang

