[ceph-users] openstack swift multitenancy problems with ceph RGW

Dilip Renkila dilip.renkila at linserv.se
Sun Nov 18 13:08:52 PST 2018


Hi all,

We are provisioning the OpenStack Swift API through Ceph RGW (Mimic). We run
into problems when trying to create two containers with the same name in two
different projects. After searching the web, I learned that I have to enable
rgw_keystone_implicit_tenants in the Ceph conf file, but it made no
difference. Is multitenancy really supported for Swift? Can anyone share a
working ceph.conf file?
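For reference, this is roughly how we trigger the problem (a minimal sketch
using python-swiftclient; the user names, passwords and project names below
are placeholders for our real ones):

from swiftclient.client import Connection

AUTH_URL = 'http://10.20.0.199:5000/v3'

def project_conn(user, password, project):
    # One Keystone-v3, project-scoped connection per project.
    return Connection(
        authurl=AUTH_URL,
        user=user,
        key=password,
        auth_version='3',
        os_options={'project_name': project,
                    'project_domain_name': 'Default',
                    'user_domain_name': 'Default'})

# Same container name from two different projects; with implicit tenants
# working, both PUTs should succeed independently of each other.
for user, password, project in [('alice', 'secret', 'project-a'),
                                ('bob', 'secret', 'project-b')]:
    conn = project_conn(user, password, project)
    conn.put_container('test')  # creating it from the second project fails for us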

Our ceph.conf:
########

[client.rgw.ctrl1]
host = ctrl1
keyring = /var/lib/ceph/radosgw/ceph-rgw.ctrl1/keyring
log file = /var/log/ceph/ceph-rgw-ctrl1.log
rgw frontends = civetweb port=10.70.1.1:8080 num_threads=100


[client.rgw.ctrl2]
host = ctrl2
keyring = /var/lib/ceph/radosgw/ceph-rgw.ctrl2/keyring
log file = /var/log/ceph/ceph-rgw-ctrl2.log
rgw frontends = civetweb port=10.70.1.2:8080 num_threads=100

[client.rgw.ctrl3]
host = ctrl3
keyring = /var/lib/ceph/radosgw/ceph-rgw.ctrl3/keyring
log file = /var/log/ceph/ceph-rgw-ctrl3.log
rgw frontends = civetweb port=10.70.1.3:8080 num_threads=100


# Please do not change this file directly since it is managed by Ansible
# and will be overwritten
[global]
cluster network = 10.60.0.0/22
fsid = 6cb9b9ca-7cdd-4200-a311-5b132ddc89f7
mon host = 10.70.1.1,10.70.1.2,10.70.1.3
mon initial members = ctrl1,ctrl2,ctrl3
public network = 10.70.0.0/22
rgw bucket default quota max objects = 1638400
rgw override bucket index max shards = 16
mon_max_pg_per_osd = 500
# Keystone information
rgw_keystone_url = http://10.20.0.199:5000
rgw keystone api version = 3
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = password
rgw_keystone_admin_domain = default
rgw_keystone_admin_project = service

rgw_keystone_accepted_roles = admin,user,project-admin,cloud-admin
rgw_keystone_token_cache_size = 0
rgw_keystone_revocation_interval = 0
rgw_keystone_make_new_tenants = true
rgw_keystone_implicit_tenants = true
rgw_s3_auth_use_keystone = true
#nss_db_path = {path to nss db}
rgw_keystone_verify_ssl = false
mon_pg_warn_max_object_skew = 1000

#mon_allow_pool_delete = true
[mon]
mgr initial modules = dashboard, prometheus

[osd]
bluestore_block_db_size = 20000000000
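
After restarting the RGWs we check what each project actually sees (again a
sketch with the same placeholder credentials; if implicit tenants are
working, each Keystone project gets its own namespace, so both projects
should be able to own a container named 'test'):

from swiftclient.client import Connection

for user, project in [('alice', 'project-a'), ('bob', 'project-b')]:
    conn = Connection(
        authurl='http://10.20.0.199:5000/v3',
        user=user,
        key='secret',
        auth_version='3',
        os_options={'project_name': project,
                    'project_domain_name': 'Default',
                    'user_domain_name': 'Default'})
    _headers, containers = conn.get_account()
    # Each project should list only its own containers.
    print(project, [c['name'] for c in containers])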




Best Regards / Kind Regards

Dilip Renkila
Linux / Unix SysAdmin



Linserv AB
Direct: +46 8 473 60 64
Mobile: +46 705080243
dilip.renkila at linserv.se

www.linserv.se