[ceph-users] mount cephfs from a public network ip of mds

Joshua Chen cschen at asiaa.sinica.edu.tw
Mon Oct 1 14:09:00 PDT 2018


Thank you all for your replies.
I will consider changing the design, or negotiate the topology issue with my
colleagues. If neither works out, I will come back to this
solution.

Cheers
Joshua

On Mon, Oct 1, 2018 at 9:05 PM Paul Emmerich <paul.emmerich at croit.io> wrote:

> No, mons can only have exactly one IP address and they'll only listen
> on that IP.
>
> As David suggested: check if you really need separate networks. This
> setup usually creates more problems than it solves, especially if you
> have one 1G and one 10G network.
>
> Paul
> On Mon, Oct 1, 2018 at 04:11, Joshua Chen
> <cschen at asiaa.sinica.edu.tw> wrote:
> >
> > Hello Paul,
> >   Thanks for your reply.
> >   Now my clients will come from 140.109 (the LAN, a routable IP network,
> 1 Gb/s) and from 10.32 (the SAN, a closed 10 Gb network). Could I set
> public_network to 0.0.0.0 so that the mon daemon listens on both the 1 Gb
> and the 10 Gb networks?
> >   Or could I have
> > public_network = 140.109.169.0/24, 10.32.67.0/24
> > cluster_network = 10.32.67.0/24
> >
> > Does Ceph allow two (multiple) public_network subnets?
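[For reference: recent Ceph documentation does describe public_network as accepting a comma-delimited list of subnets, so a sketch along these lines is syntactically plausible. The subnets are the ones from this mail; whether a mon can actually serve clients on more than one of them is exactly the open question, since each mon binds to a single mon addr.]

```ini
# ceph.conf sketch (illustrative, not a tested configuration):
# two public subnets plus a cluster subnet. Note that each mon still
# binds to a single address, so listing two subnets does not by itself
# make a mon reachable on both networks.
[global]
public_network = 140.109.169.0/24, 10.32.67.0/24
cluster_network = 10.32.67.0/24
```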
> >
> >   And I don't want to limit clients' read/write speed to the 1 Gb/s NICs
> unless they have no 10 Gb NIC installed. To guarantee fast reads and writes,
> clients that know the data locations should use the fastest NIC (10 Gb) when
> available, while clients with only a 1 Gb NIC will go through 140.109.0.0
> (the 1 Gb LAN) to ask the mons and to read/write to the OSDs. This is why my
> OSDs also have both 1 Gb and 10 Gb NICs, on the 140.109.0.0 and 10.32.0.0
> networks respectively.
> >
> > Cheers
> > Joshua
> >
> > On Sun, Sep 30, 2018 at 12:09 PM David Turner <drakonstein at gmail.com>
> wrote:
> >>
> >> The cluster/private network is used only for OSD-to-OSD traffic;
> nothing else in Ceph, clients included, communicates over it. Everything
> else uses the public network: the MONs, the MDSs, clients, and any daemon
> other than an OSD talking to an OSD.
> >>
> >> On Sat, Sep 29, 2018, 6:43 AM Paul Emmerich <paul.emmerich at croit.io>
> wrote:
> >>>
> >>> All Ceph clients will always first connect to the mons. Mons provide
> >>> further information on the cluster such as the IPs of MDS and OSDs.
> >>>
> >>> This means you need to provide the mon IPs to the mount command, not
> >>> the MDS IPs. Your first command works by coincidence since
> >>> you seem to run the mons and MDSs on the same servers.
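[To illustrate Paul's point: the mount syntax should list mon addresses; the MDS appears nowhere in it, because the client learns MDS and OSD locations from the mons. The hostnames below are placeholders for the three mons in this thread, and the credentials are elided.]

```shell
# Mount CephFS via the mons, not the MDS daemons.
# mon1/mon2/mon3 are placeholder names, not hosts from the original mail.
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
```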
> >>>
> >>>
> >>> Paul
> >>> On Sat, Sep 29, 2018 at 12:07, Joshua Chen
> >>> <cschen at asiaa.sinica.edu.tw> wrote:
> >>> >
> >>> > Hello all,
> >>> >   I am testing a CephFS cluster so that clients can mount it with
> mount -t ceph.
> >>> >
> >>> >   The cluster has 6 nodes: 3 mons (which also run MDS) and 3 OSDs.
> >>> >   All 6 nodes have 2 NICs: one 1 Gb NIC with a routable IP
> (140.109.0.0) and one 10 Gb NIC with an internal IP (10.32.0.0)
> >>> >
> >>> > 140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.
> >>> >
> >>> >
> >>> >
> >>> > and I have the following questions:
> >>> >
> >>> > 1. Can clients on both the public (140.109.0.0) and cluster
> (10.32.0.0) networks mount this CephFS resource?
> >>> >
> >>> > I want to do
> >>> >
> >>> > (in a 140.109 network client)
> >>> > mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=,,,,
> >>> >
> >>> > (and also, in a 10.32.0.0 network client)
> >>> > mount -t ceph mds1(10.32.67.48):/
> >>> > /mnt/cephfs -o user=,secret=,,,,
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > Currently, only the 10.32.0.0 clients can mount it; clients on the
> public network (140.109) cannot. How can I enable this?
> >>> >
> >>> > here attached is my ceph.conf
> >>> >
> >>> > Thanks in advance
> >>> >
> >>> > Cheers
> >>> > Joshua
> >>> > _______________________________________________
> >>> > ceph-users mailing list
> >>> > ceph-users at lists.ceph.com
> >>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>>
> >>>
> >>>
> >>> --
> >>> Paul Emmerich
> >>>
> >>> Looking for help with your Ceph cluster? Contact us at
> https://croit.io
> >>>
> >>> croit GmbH
> >>> Freseniusstr. 31h
> >>> 81247 München
> >>> www.croit.io
> >>> Tel: +49 89 1896585 90
>
>
>
>

