[ceph-users] Significance of the us-east-1 region when using S3 clients to talk to RGW

David Turner drakonstein at gmail.com
Mon Feb 26 13:10:16 PST 2018


I'm also not certain how to do the tcpdump for this.  Do you have any
pointers to how to capture that for you?
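A minimal capture sketch for this: dump the S3 traffic on the RGW port so the PUT request body (and its LocationConstraint XML) can be read back. Port 8080 is an assumption taken from this thread; adjust the interface and port to your deployment.

```shell
# Capture traffic to the RGW HTTP port into a pcap file (needs root).
tcpdump -i any -s 0 -w rgw-createbucket.pcap 'tcp port 8080'

# Reproduce the failing bucket creation, stop the capture, then read
# the PUT request and its CreateBucketConfiguration body as ASCII:
tcpdump -A -r rgw-createbucket.pcap | less
```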

On Mon, Feb 26, 2018 at 4:09 PM David Turner <drakonstein at gmail.com> wrote:

> That's what I set it to in the config file.  I probably should have
> mentioned that.
>
> On Mon, Feb 26, 2018 at 4:07 PM Yehuda Sadeh-Weinraub <yehuda at redhat.com>
> wrote:
>
>> According to the log here, it says that the location constraint it got
>> is "cn". Can you take a look at a tcpdump and see if that's actually
>> what's being passed in?
>>
>> On Mon, Feb 26, 2018 at 12:02 PM, David Turner <drakonstein at gmail.com>
>> wrote:
>> > I run with `debug rgw = 10` and was able to find these lines at the
>> > end of a request to create the bucket.
>> >
>> > Successfully creating a bucket with `bucket_location = US` looks like
>> > [1]this.  Failing to create a bucket has "ERROR: S3 error: 400
>> > (InvalidLocationConstraint): The specified location-constraint is not
>> > valid" on the CLI and [2]this (excerpt from the end of the request) in
>> > the rgw log (debug level 10).  "create bucket location constraint" was
>> > not found in the log for successfully creating the bucket.
>> >
>> >
>> > [1]
>> > 2018-02-26 19:52:36.419251 7f4bc9bc8700 10 cache put:
>> > name=local-atl.rgw.data.root++.bucket.meta.testerton:bef43c26-daf3-47ef-a3a5-e1167e3f88ac.39099765.1
>> > info.flags=0x17
>> > 2018-02-26 19:52:36.419262 7f4bc9bc8700 10 adding
>> > local-atl.rgw.data.root++.bucket.meta.testerton:bef43c26-daf3-47ef-a3a5-e1167e3f88ac.39099765.1
>> > to cache LRU end
>> > 2018-02-26 19:52:36.419266 7f4bc9bc8700 10 updating xattr: name=user.rgw.acl
>> > bl.length()=141
>> > 2018-02-26 19:52:36.423863 7f4bc9bc8700 10 RGWWatcher::handle_notify()
>> > notify_id 344855809097728 cookie 139963970426880 notifier 39099765
>> > bl.length()=361
>> > 2018-02-26 19:52:36.423875 7f4bc9bc8700 10 cache put:
>> > name=local-atl.rgw.data.root++testerton info.flags=0x17
>> > 2018-02-26 19:52:36.423882 7f4bc9bc8700 10 adding
>> > local-atl.rgw.data.root++testerton to cache LRU end
>> >
>> > [2]
>> > 2018-02-26 19:43:37.340289 7f466bbca700  2 req 428078:0.004204:s3:PUT
>> > /testraint/:create_bucket:executing
>> > 2018-02-26 19:43:37.340366 7f466bbca700  5 NOTICE: call to
>> > do_aws4_auth_completion
>> > 2018-02-26 19:43:37.340472 7f466bbca700 10 v4 auth ok --
>> > do_aws4_auth_completion
>> > 2018-02-26 19:43:37.340715 7f466bbca700 10 create bucket location
>> > constraint: cn
>> > 2018-02-26 19:43:37.340766 7f466bbca700  0 location constraint (cn)
>> > can't be found.
>> > 2018-02-26 19:43:37.340794 7f466bbca700  2 req 428078:0.004701:s3:PUT
>> > /testraint/:create_bucket:completing
>> > 2018-02-26 19:43:37.341782 7f466bbca700  2 req 428078:0.005689:s3:PUT
>> > /testraint/:create_bucket:op status=-2208
>> > 2018-02-26 19:43:37.341792 7f466bbca700  2 req 428078:0.005707:s3:PUT
>> > /testraint/:create_bucket:http status=400
>> >
>> > On Mon, Feb 26, 2018 at 2:36 PM Yehuda Sadeh-Weinraub <yehuda at redhat.com>
>> > wrote:
>> >>
>> >> I'm not sure if the rgw logs (debug rgw = 20) specify explicitly why a
>> >> bucket creation is rejected in these cases, but it might be worth
>> >> trying to look at these. If not, then a tcpdump of the specific failed
>> >> request might shed some light (would be interesting to look at the
>> >> generated LocationConstraint).
>> >>
>> >> Yehuda
>> >>
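For reference, the `debug rgw = 20` level suggested here can be applied to a running daemon without a restart; a sketch via the admin socket, assuming a default socket path and an example instance name (check /var/run/ceph/ for yours):

```shell
# Raise RGW debug logging at runtime on the live daemon.
# The socket file name varies by deployment/instance name.
ceph daemon /var/run/ceph/ceph-client.rgw.myhost.asok config set debug_rgw 20

# Or persistently, in ceph.conf under the [client.rgw.<name>] section:
#   debug rgw = 20
```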
>> >> On Mon, Feb 26, 2018 at 11:29 AM, David Turner <drakonstein at gmail.com>
>> >> wrote:
>> >> > Our problem only appeared to be present in bucket creation.  Listing,
>> >> > putting, etc. objects in a bucket work just fine regardless of the
>> >> > bucket_location setting.  I ran this test on a few different realms to
>> >> > see what would happen and only 1 of them had a problem.  There isn't an
>> >> > obvious thing that stands out about it.  The 2 local realms do not have
>> >> > multi-site; the internal realm has multi-site and the operations were
>> >> > performed on the primary zone for the zonegroup.
>> >> >
>> >> > Worked with non 'US' bucket_location for s3cmd to create bucket:
>> >> > realm=internal
>> >> > zonegroup=internal-ga
>> >> > zone=internal-atl
>> >> >
>> >> > Failed with non 'US' bucket_location for s3cmd to create bucket:
>> >> > realm=local-atl
>> >> > zonegroup=local-atl
>> >> > zone=local-atl
>> >> >
>> >> > Worked with non 'US' bucket_location for s3cmd to create bucket:
>> >> > realm=local
>> >> > zonegroup=local
>> >> > zone=local
>> >> >
>> >> > I was thinking it might have to do with all of the parts being named
>> >> > the same, but I made sure to do the last test to confirm.  Interestingly
>> >> > it's only bucket creation that has a problem and it's fine as long as I
>> >> > put 'US' as the bucket_location.
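For anyone reproducing these tests: the knob being toggled is s3cmd's `bucket_location`, which can be set per-invocation as well as in ~/.s3cfg. The bucket name is an example; the two location values are the ones from this thread (`US` succeeded, `cn` drew the 400 in the failing realm):

```shell
# Per-request: pass the location constraint explicitly on bucket creation.
s3cmd mb s3://testraint --bucket-location=US   # succeeded in all three realms
s3cmd mb s3://testraint --bucket-location=cn   # 400 InvalidLocationConstraint in local-atl

# Or persistently, in ~/.s3cfg:
#   bucket_location = US
```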
>> >> >
>> >> > On Mon, Feb 19, 2018 at 6:48 PM F21 <f21.groups at gmail.com> wrote:
>> >> >>
>> >> >> I am using the official ceph/daemon docker image. It starts RGW and
>> >> >> creates a zonegroup and zone with their names set to an empty string:
>> >> >>
>> >> >> https://github.com/ceph/ceph-container/blob/master/ceph-releases/luminous/ubuntu/16.04/daemon/start_rgw.sh#L36:54
>> >> >>
>> >> >> $RGW_ZONEGROUP and $RGW_ZONE are both empty strings by default:
>> >> >>
>> >> >> https://github.com/ceph/ceph-container/blob/master/ceph-releases/luminous/ubuntu/16.04/daemon/variables_entrypoint.sh#L46
>> >> >>
>> >> >> Here's what I get when I query RGW:
>> >> >>
>> >> >> $ radosgw-admin zonegroup list
>> >> >> {
>> >> >>      "default_info": "",
>> >> >>      "zonegroups": [
>> >> >>          "default"
>> >> >>      ]
>> >> >> }
>> >> >>
>> >> >> $ radosgw-admin zone list
>> >> >> {
>> >> >>      "default_info": "",
>> >> >>      "zones": [
>> >> >>          "default"
>> >> >>      ]
>> >> >> }
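If those empty names are the culprit, one possible fix is to give the zonegroup a real name. This is a sketch under the assumption that RGW matches the request's LocationConstraint against the zonegroup's api_name; the zonegroup and region names here are examples:

```shell
# Export the zonegroup, set a non-empty "name"/"api_name", and load it back.
radosgw-admin zonegroup get --rgw-zonegroup=default > zg.json

# ...edit zg.json by hand, e.g.:  "api_name": "us-west-1"  ...

radosgw-admin zonegroup set --rgw-zonegroup=default < zg.json
radosgw-admin period update --commit   # multisite; otherwise just restart RGW
```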
>> >> >>
>> >> >> On 20/02/2018 10:33 AM, Yehuda Sadeh-Weinraub wrote:
>> >> >> > What is the name of your zonegroup?
>> >> >> >
>> >> >> > On Mon, Feb 19, 2018 at 3:29 PM, F21 <f21.groups at gmail.com> wrote:
>> >> >> >> I've done some debugging and the LocationConstraint is not being set
>> >> >> >> by the SDK by default.
>> >> >> >>
>> >> >> >> I do, however, need to set the region on the client to us-east-1 for
>> >> >> >> it to work. Anything else will return an InvalidLocationConstraint
>> >> >> >> error.
>> >> >> >>
>> >> >> >> Francis
>> >> >> >>
>> >> >> >>
>> >> >> >> On 20/02/2018 8:40 AM, Yehuda Sadeh-Weinraub wrote:
>> >> >> >>> Sounds like the go sdk adds a location constraint to requests that
>> >> >> >>> don't go to us-east-1. RGW itself definitely isn't tied to
>> >> >> >>> us-east-1, and does not know anything about it (unless you happen
>> >> >> >>> to have a zonegroup named us-east-1). Maybe there's a way to
>> >> >> >>> configure the sdk to avoid doing that?
>> >> >> >>>
>> >> >> >>> Yehuda
>> >> >> >>>
>> >> >> >>> On Sun, Feb 18, 2018 at 1:54 PM, F21 <f21.groups at gmail.com> wrote:
>> >> >> >>>> I am using the AWS Go SDK v2 (https://github.com/aws/aws-sdk-go-v2)
>> >> >> >>>> to talk to my RGW instance using the s3 interface. I am running
>> >> >> >>>> ceph in docker using the ceph/daemon docker images in demo mode.
>> >> >> >>>> The RGW is started with a zonegroup and zone with their names set
>> >> >> >>>> to an empty string by the scripts in the image.
>> >> >> >>>>
>> >> >> >>>> I have ForcePathStyle for the client set to true, because I want
>> >> >> >>>> to access all my buckets using the path:
>> >> >> >>>> myrgw.instance:8080/somebucket.
>> >> >> >>>>
>> >> >> >>>> I noticed that if I set the region for the client to anything
>> >> >> >>>> other than us-east-1, I get this error when creating a bucket:
>> >> >> >>>> InvalidLocationConstraint: The specified location-constraint is
>> >> >> >>>> not valid.
>> >> >> >>>>
>> >> >> >>>> If I set the region in the client to something made up, such as
>> >> >> >>>> "ceph", and the LocationConstraint to "ceph", I still get the
>> >> >> >>>> same error.
>> >> >> >>>>
>> >> >> >>>> The only way to get my buckets to create successfully is to set
>> >> >> >>>> the client's region to us-east-1. I have grepped the ceph code
>> >> >> >>>> base and cannot find any references to us-east-1. In addition, I
>> >> >> >>>> looked at the AWS docs for calculating v4 signatures and us-east-1
>> >> >> >>>> is the default region, but I can see that the region string is
>> >> >> >>>> used in the calculation (i.e. the region is not ignored when
>> >> >> >>>> calculating the signature if it is set to us-east-1).
>> >> >> >>>>
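The point about the region entering the v4 signature can be checked directly: the standard SigV4 signing-key derivation (nothing RGW-specific) folds the region into the HMAC chain, so two regions yield two different signing keys. A sketch with a dummy secret key:

```shell
# SigV4 signing-key chain: HMAC("AWS4"+secret, date) -> region -> service
# -> "aws4_request".  The secret below is a dummy example value.
derive_key() {  # args: secret date region service
  key=$(printf '%s' "$2" | openssl dgst -sha256 -hmac "AWS4$1" | awk '{print $NF}')
  for part in "$3" "$4" aws4_request; do
    key=$(printf '%s' "$part" | openssl dgst -sha256 -mac HMAC \
          -macopt "hexkey:$key" | awk '{print $NF}')
  done
  printf '%s\n' "$key"
}

k_east=$(derive_key "wJalrXUtnFEMI/K7MDENG/bPxRCiCYEXAMPLEKEY" 20180226 us-east-1 s3)
k_west=$(derive_key "wJalrXUtnFEMI/K7MDENG/bPxRCiCYEXAMPLEKEY" 20180226 us-west-1 s3)
echo "us-east-1 key: $k_east"
echo "us-west-1 key: $k_west"
```

The two keys differ, so the region is indeed signed over even when it is us-east-1; a region mismatch would surface as a signature error, not as the InvalidLocationConstraint seen here.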
>> >> >> >>>> Why do my buckets create successfully if I set the region in my
>> >> >> >>>> s3 client to us-east-1, but not otherwise? If I do not want to
>> >> >> >>>> use us-east-1 as my default region, for example, if I want
>> >> >> >>>> us-west-1 as my default region, what should I be configuring in
>> >> >> >>>> ceph?
>> >> >> >>>>
>> >> >> >>>> Thanks,
>> >> >> >>>>
>> >> >> >>>> Francis
>> >> >> >>>>
>> >> >> >>>> _______________________________________________
>> >> >> >>>> ceph-users mailing list
>> >> >> >>>> ceph-users at lists.ceph.com
>> >> >> >>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >> >> >>
>> >> >> >>
>> >> >>
>>
>