[ceph-users] Problems with OSDs (cuttlefish)

Sage Weil sage at inktank.com
Thu Jun 6 08:23:50 PDT 2013


On Thu, 6 Jun 2013, Ilja Maslov wrote:
> Hi,
> 
> I do not think that ceph-deploy osd prepare/activate/create actually works 
> when run on a partition.  It was returning successfully for me, but 
> wouldn't actually add any OSDs to the configuration and associate them 
> with a host.  No errors, but also no result, had to revert back to using 
> mkcephfs.

What is supposed to happen:

 - ceph-deploy osd create HOST:DIR will create a symlink in 
/var/lib/ceph/osd/ to DIR.
 - admin is responsible for mounting DIR on boot
 - sysvinit or upstart will see the symlink to DIR and start the daemon.
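That symlink-scan flow can be sketched as follows (a simulation only, under hypothetical paths: it models the layout in a temp directory rather than touching the real /var/lib/ceph, and the OSD id 0 is an example):

```shell
# Sketch: model the symlink that ceph-deploy osd create HOST:DIR is
# expected to leave behind, and the scan the init script performs on boot.
# ROOT, the mount point, and OSD id 0 are all hypothetical.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/ceph/osd.0" "$ROOT/var/lib/ceph/osd"

# osd create leaves a symlink in /var/lib/ceph/osd/ pointing at DIR:
ln -s "$ROOT/ceph/osd.0" "$ROOT/var/lib/ceph/osd/ceph-0"

# sysvinit/upstart enumerates those entries and starts one daemon per id:
for d in "$ROOT"/var/lib/ceph/osd/ceph-*; do
  id=${d##*ceph-}
  echo "would start: ceph-osd id=$id"  # the real scripts run: start ceph-osd id=$id
done
```

The admin's remaining job is step two above: making sure DIR is actually mounted before the init script runs, e.g. via /etc/fstab.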

It is possible that the osd create step also isn't triggering the daemon 
start at the appropriate time.  Sorry, this isn't as well tested a use 
case.

s



> 
> Ilja
> 
> > -----Original Message-----
> > From: ceph-users-bounces at lists.ceph.com [mailto:ceph-users-
> > bounces at lists.ceph.com] On Behalf Of Alvaro Izquierdo Jimeno
> > Sent: Thursday, June 06, 2013 2:26 AM
> > To: John Wilkins
> > Cc: ceph-users at lists.ceph.com
> > Subject: Re: [ceph-users] Problems with OSDs (cuttlefish)
> >
> > Hi John,
> >
> > Thanks for your answer!
> >
> > But maybe I haven't installed Cuttlefish correctly on my hosts.
> >
> > sudo initctl list | grep ceph
> > -> none
> >
> > No ceph-all found anywhere.
> >
> > Steps that I have done to install cuttlefish:
> >
> > sudo rpm --import
> > 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
> > sudo su -c 'rpm -Uvh http://ceph.com/rpm-cuttlefish/el6/x86_64/ceph-
> > release-1-0.el6.noarch.rpm'
> > sudo yum install ceph
> >
> >
> > Thanks a lot and best regards,
> > Álvaro.
> >
> >
> >
> >
> > -----Original Message-----
> > From: John Wilkins [mailto:john.wilkins at inktank.com]
> > Sent: Thursday, June 6, 2013 2:48
> > To: Alvaro Izquierdo Jimeno
> > Cc: ceph-users at lists.ceph.com
> > Subject: Re: [ceph-users] Problems with OSDs (cuttlefish)
> >
> > You can also start/stop an individual daemon this way:
> >
> > sudo stop ceph-osd id=0
> > sudo start ceph-osd id=0
> >
> >
> >
> > On Wed, Jun 5, 2013 at 4:33 PM, John Wilkins <john.wilkins at inktank.com>
> > wrote:
> > > Ok. It's more like this:
> > >
> > > sudo initctl list | grep ceph
> > >
> > > This lists all your ceph scripts and their state.
> > >
> > > To start the cluster:
> > >
> > > sudo start ceph-all
> > >
> > > To stop the cluster:
> > >
> > > sudo stop ceph-all
> > >
> > > You can also do the same with all OSDs, MDSs, etc. I'll write it up
> > > and check it in.
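> > > For example, with the per-daemon-type targets (a sketch; the job
> > > names assume the standard cuttlefish upstart scripts):
> > >
> > > ```shell
> > > # Start or stop all daemons of one type on this host:
> > > sudo start ceph-osd-all
> > > sudo stop ceph-mon-all
> > > # Or restart a single daemon by id:
> > > sudo restart ceph-osd id=1
> > > ```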
> > >
> > >
> > > On Wed, Jun 5, 2013 at 3:16 PM, John Wilkins
> > <john.wilkins at inktank.com> wrote:
> > >> Alvaro,
> > >>
> > >> I ran into this too. Clusters deployed with ceph-deploy now use upstart.
> > >>
> > >>> start ceph
> > >>> stop ceph
> > >>
> > >> Should work. I'm testing and will update the docs shortly.
> > >>
> > >> On Wed, Jun 5, 2013 at 7:41 AM, Alvaro Izquierdo Jimeno
> > >> <aizquierdo at aubay.es> wrote:
> > >>> Hi all,
> > >>>
> > >>>
> > >>>
> > >>> I already installed Ceph Bobtail on CentOS machines and it runs perfectly.
> > >>>
> > >>>
> > >>>
> > >>> But now I have to install Ceph Cuttlefish on Red Hat 6.4. I have
> > >>> two machines (for the moment). We can assume the hostnames IP1 and
> > >>> IP2 ;). I want (just to test) two monitors (one per host) and two
> > >>> OSDs (one per host).
> > >>>
> > >>> In both machines, I have a XFS logical volume:
> > >>>
> > >>> Disk /dev/mapper/lvceph:  X GB, Y bytes
> > >>>
> > >>> The logical volume is formatted with XFS (sudo mkfs.xfs -f -i
> > >>> size=2048 /dev/mapper/lvceph) and mounted.
> > >>>
> > >>> In /etc/fstab I have:
> > >>>
> > >>> /dev/mapper/lvceph   /ceph                   xfs
> > >>> defaults,inode64,noatime        0 2
> > >>>
> > >>>
> > >>>
> > >>> After using ceph-deploy to install (ceph-deploy install --stable
> > >>> cuttlefish IP1 IP2), create the cluster (ceph-deploy new IP1 IP2),
> > >>> and add two monitors (ceph-deploy --overwrite-conf mon create IP1
> > >>> and ceph-deploy --overwrite-conf mon create IP2), I want to add the
> > >>> two OSDs:
> > >>>
> > >>>
> > >>>
> > >>> ceph-deploy osd prepare IP1:/ceph
> > >>>
> > >>> ceph-deploy osd activate IP1:/ceph
> > >>>
> > >>> ceph-deploy osd prepare IP2:/ceph
> > >>>
> > >>> ceph-deploy osd activate IP2:/ceph
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> But neither OSD is up (nor in):
> > >>>
> > >>> #sudo ceph -d osd stat
> > >>>
> > >>> e3: 2 osds: 0 up, 0 in
> > >>>
> > >>> #sudo ceph osd tree
> > >>>
> > >>> # id    weight  type name       up/down reweight
> > >>>
> > >>> -1      0       root default
> > >>>
> > >>> 0       0       osd.0   down    0
> > >>>
> > >>> 1       0       osd.1   down    0
> > >>>
> > >>>
> > >>>
> > >>> I tried to start both osds:
> > >>>
> > >>> #sudo /etc/init.d/ceph  -a start osd.0
> > >>>
> > >>> /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
> > >>> /var/lib/ceph defines )
> > >>>
> > >>>
> > >>>
> > >>> I suppose something is wrong in my ceph-deploy osd prepare or
> > >>> activate steps; can anybody help me find it?
> > >>>
> > >>> Is it necessary to add anything more to /etc/ceph/ceph.conf?
> > >>>
> > >>> Now it looks like:
> > >>>
> > >>> [global]
> > >>>
> > >>> filestore_xattr_use_omap = true
> > >>>
> > >>> mon_host = the_ip_of_IP1, the_ip_of_IP2
> > >>>
> > >>> osd_journal_size = 1024
> > >>>
> > >>> mon_initial_members = IP1,IP2
> > >>>
> > >>> auth_supported = cephx
> > >>>
> > >>> fsid = 43501eb5-e8cf-4f89-a4e2-3c93ab1d9cc5
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> Thanks in advance and best regards,
> > >>>
> > >>> Álvaro
> > >>>
> > >>> ____________
> > >>> Verified virus-free by G Data AntiVirus Version: AVA
> > >>> 22.10143 of 05.06.2013 Virus news: www.antiviruslab.com
> > >>>
> > >>> _______________________________________________
> > >>> ceph-users mailing list
> > >>> ceph-users at lists.ceph.com
> > >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> John Wilkins
> > >> Senior Technical Writer
> > >> Inktank
> > >> john.wilkins at inktank.com
> > >> (415) 425-9599
> > >> http://inktank.com
> > >
> > >
> > >
> > > --
> > > John Wilkins
> > > Senior Technical Writer
> > > Inktank
> > > john.wilkins at inktank.com
> > > (415) 425-9599
> > > http://inktank.com
> >
> >
> >
> > --
> > John Wilkins
> > Senior Technical Writer
> > Inktank
> > john.wilkins at inktank.com
> > (415) 425-9599
> > http://inktank.com
> >
> 
> 
> 

