[ceph-users] rbd ls operation not permitted

Jason Dillaman jdillama at redhat.com
Mon Oct 8 08:04:03 PDT 2018


On Mon, Oct 8, 2018 at 10:20 AM <sinan at turka.nl> wrote:
>
> On a Ceph Monitor:
> # ceph auth get client.openstack | grep caps
> exported keyring for client.openstack
>         caps mon = "allow r"
>         caps osd = "allow class-read object_prefix rbd_children, allow rwx
> pool=ssdvolumes, allow rxw pool=ssdvolumes-13, allow rwx
> pool=sasvolumes-13, allow rwx pool=sasvolumes, allow rwx pool=vms, allow
> rwx pool=images"
> #

By chance, is your issue really that your OpenStack 13 (Pike) cluster
cannot access the pool named "ssdvolumes-13"? I ask because there is a
typo in the "rwx" cap for that pool (you have "rxw" instead).
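
If that is the problem, re-setting the caps on a monitor with the
corrected grant should fix it. Something along these lines (pool list
copied from your output above, so please double-check it before
running):

```shell
# Re-set client.openstack's caps with "rwx" corrected for ssdvolumes-13.
# Note: "ceph auth caps" REPLACES all existing caps for the entity, so
# the full mon and osd cap strings must be repeated, not just the fix.
ceph auth caps client.openstack \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=ssdvolumes, allow rwx pool=ssdvolumes-13, allow rwx pool=sasvolumes-13, allow rwx pool=sasvolumes, allow rwx pool=vms, allow rwx pool=images'
```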

>
> On the problematic OpenStack cluster:
> $ ceph auth get client.openstack --id openstack | grep caps
> Error EACCES: access denied
> $
>
>
> When I change "caps: [mon] allow r" to "caps: [mon] allow *" the problem
> disappears.
>
>
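Also note that "mon allow *" grants full admin rights on the monitors,
so I'd treat that as a diagnostic workaround rather than a fix; the
least-privilege fix is to repair the osd cap string. Since a bad token
like this is easy to miss by eye, here is a small, purely illustrative
Python check (not part of Ceph, and using a deliberately simplified
grammar for permission specs) that flags grant tokens which aren't
valid r/w/x combinations:

```python
# Hypothetical helper, NOT part of Ceph: flag malformed permission
# tokens in an OSD cap string like the one shown above. The set of
# valid specs below is simplified (it ignores cap profiles, etc.).
VALID = {"r", "w", "x", "rw", "rx", "wx", "rwx",
         "class-read", "class-write", "*"}

def bad_tokens(osd_caps: str):
    """Return permission tokens that are not valid under the
    simplified grammar (e.g. "rxw" instead of "rwx")."""
    bad = []
    for grant in osd_caps.split(","):
        words = grant.strip().split()
        if not words or words[0] != "allow":
            continue
        for tok in words[1:]:
            # Stop at pool/object_prefix qualifiers; everything before
            # them is treated as a permission spec.
            if "=" in tok or tok in ("pool", "object_prefix"):
                break
            if tok not in VALID:
                bad.append(tok)
    return bad

caps = ("allow class-read object_prefix rbd_children, "
        "allow rwx pool=ssdvolumes, allow rxw pool=ssdvolumes-13, "
        "allow rwx pool=sasvolumes-13, allow rwx pool=sasvolumes, "
        "allow rwx pool=vms, allow rwx pool=images")
print(bad_tokens(caps))  # -> ['rxw']
```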
> On 08-10-2018 16:06, Jason Dillaman wrote:
> > Can you run "ceph auth get client.openstack | grep caps"?
> >
> > On Mon, Oct 8, 2018 at 10:03 AM <sinan at turka.nl> wrote:
> >>
> >> The result of your command:
> >>
> >> $ rbd ls --debug-rbd=20 -p ssdvolumes --id openstack
> >> 2018-10-08 13:42:17.386505 7f604933fd40 20 librbd: list 0x7fff5b25cc30
> >> rbd: list: (1) Operation not permitted
> >> $
> >>
> >> Thanks!
> >> Sinan
> >>
> >> On 08-10-2018 15:37, Jason Dillaman wrote:
> >> > On Mon, Oct 8, 2018 at 9:24 AM <sinan at turka.nl> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> I am running a Ceph cluster (Jewel, ceph version 10.2.10-17.el7cp).
> >> >>
> >> >>
> >> >> I also have 2 OpenStack clusters (Ocata (v12) and Pike (v13)).
> >> >>
> >> >> When I perform a "rbd ls -p <pool> --id openstack" on the OpenStack
> >> >> Ocata cluster it works fine, when I perform the same command on the
> >> >> OpenStack Pike cluster I am getting an "operation not permitted".
> >> >>
> >> >>
> >> >> OpenStack Ocata (where it does work fine):
> >> >> $ rbd -v
> >> >> ceph version 10.2.7-48.el7cp
> >> >> (cf7751bcd460c757e596d3ee2991884e13c37b96)
> >> >> $ rpm -qa | grep rbd
> >> >> python-rbd-10.2.7-48.el7cp.x86_64
> >> >> libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.6.x86_64
> >> >> librbd1-10.2.7-48.el7cp.x86_64
> >> >> rbd-mirror-10.2.7-48.el7cp.x86_64
> >> >> $
> >> >>
> >> >> OpenStack Pike (where it doesn't work, operation not permitted):
> >> >> $ rbd -v
> >> >> ceph version 12.2.4-10.el7cp
> >> >> (03fd19535b3701f3322c68b5f424335d6fc8dd66)
> >> >> luminous (stable)
> >> >> $ rpm -qa | grep rbd
> >> >> rbd-mirror-12.2.4-10.el7cp.x86_64
> >> >> libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.5.x86_64
> >> >> librbd1-12.2.4-10.el7cp.x86_64
> >> >> python-rbd-12.2.4-10.el7cp.x86_64
> >> >> $
> >> >
> >> > Can you run "rbd --debug-rbd=20 ls -p <pool> --id openstack" and
> >> > pastebin the resulting logs?
> >> >
> >> >>
> >> >> Both clusters are using the same Ceph client key, same Ceph
> >> >> configuration file.
> >> >>
> >> >> The only difference is the version of rbd.
> >> >>
> >> >> Is this expected behavior?
> >> >>
> >> >>
> >> >> Thanks!
> >> >> Sinan
> >> >> _______________________________________________
> >> >> ceph-users mailing list
> >> >> ceph-users at lists.ceph.com
> >> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason

