[ceph-users] install ceph-osd failed in docker

David Turner drakonstein at gmail.com
Mon Nov 27 15:53:03 PST 2017


Allocate disk space from where?  If it's a physical device, then migration
still requires moving hardware.  If it isn't a physical device, why are you
setting up OSDs without a physical device?

Who is the user in your scenario?  If I'm the user, then I use a physical
device to create the OSD.  If people needing storage are the user, then
they get to use an RBD, CephFS, object pool, or S3 bucket... but none of
that explains why putting the OSD into a container gives any benefit.

On Mon, Nov 27, 2017 at 1:09 AM Dai Xiang <xiang.dai at sky-data.cn> wrote:

> On Mon, Nov 27, 2017 at 03:10:09AM +0000, David Turner wrote:
> > Disclaimer... This is slightly off topic and a genuine question.  I am a
> > container noobie that has only used them for test environments for nginx
> > configs and ceph client multi-tenancy benchmarking.
> >
> > I understand the benefits of containerizing RGW, MDS, and MGR daemons. I
> > can even come up with a decent argument to containerize MON daemons.
> > However, I cannot fathom a reason to containerize OSD daemons.
> >
> > The entire state of the OSD is a physical disk, possibly with a shared
> > physical component with other OSDs. The rest of the daemons have very
> > little state, excepting MONs. If you need to have 1+ physical devices for
> > an OSD to run AND it needs access to the physical hardware, then why add
> > the complexity of a container to the configuration? It just sounds
> > needless and complex for no benefit other than saying you're doing it.
>
> I think it would be wonderful if a user could just allocate some disks
> and use Docker to start a whole Ceph cluster, no matter which OS they
> run. It would also make things like migration much easier.
>
> >
> > On Sun, Nov 26, 2017, 9:18 PM Dai Xiang <xiang.dai at sky-data.cn> wrote:
> >
> > > Hi!
> > >
> > > I am trying to install Ceph in a container, but the OSD always fails:
> > > [root@d32f3a7b6eb8 ~]$ ceph -s
> > >   cluster:
> > >     id:     a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
> > >     health: HEALTH_WARN
> > >             Reduced data availability: 128 pgs inactive
> > >             Degraded data redundancy: 128 pgs unclean
> > >
> > >   services:
> > >     mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
> > >     mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
> > >     mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
> > >     osd: 0 osds: 0 up, 0 in
> > >
> > >   data:
> > >     pools:   2 pools, 128 pgs
> > >     objects: 0 objects, 0 bytes
> > >     usage:   0 kB used, 0 kB / 0 kB avail
> > >     pgs:     100.000% pgs unknown
> > >              128 unknown
> > >
> > > Since Docker cannot create and access new partitions, I create the
> > > partitions on the host and then use --device to make them available
> > > in the container.
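> > >
> > > For reference, a rough sketch of how I start it (the partition and
> > > image names below are placeholders, not my real setup):
> > >
> > >     # partition created on the host (e.g. /dev/sdb1) is passed through
> > >     # so the container can open it as a block device
> > >     docker run -d --name ceph-osd \
> > >         --device /dev/sdb1:/dev/sdb1 \
> > >         some-ceph-image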
> > >
> > > From the install log, I only found the error below:
> > >
> > > Traceback (most recent call last):
> > >   File "/usr/sbin/ceph-disk", line 9, in <module>
> > >     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5704, in run
> > >     main(sys.argv[1:])
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5657, in main
> > >     main_catch(args.func, args)
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5682, in main_catch
> > >     func(args)
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4617, in main_list
> > >     main_list_protected(args)
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4621, in main_list_protected
> > >     devices = list_devices()
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4525, in list_devices
> > >     partmap = list_all_partitions()
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 778, in list_all_partitions
> > >     dev_part_list[name] = list_partitions(get_dev_path(name))
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 791, in list_partitions
> > >     if is_mpath(dev):
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 626, in is_mpath
> > >     uuid = get_dm_uuid(dev)
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 611, in get_dm_uuid
> > >     uuid_path = os.path.join(block_path(dev), 'dm', 'uuid')
> > >   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 605, in block_path
> > >     rdev = os.stat(path).st_rdev
> > > OSError: [Errno 2] No such file or directory: '/dev/dm-6'
> > >
> > > But it did not make the whole install exit. When I enter the container
> > > and run `ceph-disk list`, the same error occurs.
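> > >
> > > As far as I can tell, the mismatch can be confirmed from inside the
> > > container (a quick check, using the dm-6 device from the traceback):
> > >
> > >     # sysfs comes from the shared host kernel, so dm-6 is listed here
> > >     ls /sys/block | grep dm-6
> > >     # but the matching device node was never created in the container
> > >     ls -l /dev/dm-6    # fails: No such file or directory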
> > >
> > > I know that creating an OSD needs to call `ceph-disk`, but the dm
> > > devices belong to the container storage driver; it would be better if
> > > `ceph-disk` could skip dm devices.
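> > >
> > > Alternatively, as a workaround (an untested sketch on my side), running
> > > the container privileged with the host's /dev bind-mounted should make
> > > every device listed in /sys/block visible under /dev as well:
> > >
> > >     docker run -d --name ceph-osd \
> > >         --privileged \
> > >         -v /dev:/dev \
> > >         some-ceph-image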
> > >
> > > I guess this error is what makes the OSD creation fail, since I do not
> > > get any other error info.
> > >
> > > --
> > > Best Regards
> > > Dai Xiang
>
> --
> Best Regards
> Dai Xiang
>