OpenStack API rate limit #21

Open
tpatzig opened this issue Feb 28, 2017 · 6 comments

Comments

@tpatzig

tpatzig commented Feb 28, 2017

  • we need a way to define request thresholds (limits) for the OpenStack APIs
  • we have had a lot of service outages because CF hammered the OpenStack APIs with automated, unrestricted requests
  • without such limits, a single user can bring the whole service down
  • e.g. client X is allowed to make Y requests/sec to the Nova API
@aspiers aspiers self-assigned this Mar 28, 2017
@aspiers

aspiers commented Apr 5, 2017

I am experimenting with an approach based on https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting/.

Future work

  • If we want to track limits based on a combination of (say) the Authentication token and source IP, we can use this technique (a rough sketch follows below).
  • If we want to slow down requests rather than returning HTTP 429 (Too Many Requests), then we could potentially upgrade to HAProxy 1.6.0+ and use this Lua-based technique for delaying requests.
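
A minimal, untested sketch of what header-based tracking in the style of that blog post could look like, using the Keystone X-Auth-Token header as the stick-table key instead of the source IP; the frontend/backend names and the 5-requests-per-20s threshold are only placeholders:

frontend nova-api
        bind 0.0.0.0:8774
        mode http
        option httplog
        default_backend nova-api-backend

        # one counter per X-Auth-Token value, with its rate measured over 20s
        # (len may need to be larger for long token formats)
        stick-table type string len 64 size 100k store gpc0_rate(20s)
        tcp-request inspect-delay 5s
        tcp-request content track-sc0 req.fhdr(X-Auth-Token) if HTTP
        acl too_many_reqs_by_user sc0_gpc0_rate() gt 5
        acl mark_seen sc0_inc_gpc0 gt 0
        use_backend be_429_slow_down if mark_seen too_many_reqs_by_user

Combining the token and the source IP into a single key would need the two values concatenated into one sample, e.g. via the concat() converter available in newer HAProxy releases.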

@aspiers

aspiers commented Apr 11, 2017

Unfortunately we can't limit based on the authentication token (which in Keystone is X-Auth-Token, not Authentication as previously suggested), because it can change with every API request if the requests are triggered from the CLI :-( And there is no way to determine the user or tenant from the HTTP headers. That must be why Rackspace bothered to write Repose.

However this kind of approach based on source IP works:

frontend nova-api
        bind 0.0.0.0:8774
        mode http

        option tcpka
        option httplog
        option forwardfor
        log 127.0.0.1:514 local0
        default_backend nova-api-backend

        tcp-request inspect-delay 5s

        # track each source IP in the stick-table; mark_seen increments the
        # per-entry counter (gpc0) on every request, and too_many_reqs_by_user
        # trips once that counter's rate exceeds 5 per 20s window
        acl too_many_reqs_by_user sc0_gpc0_rate() gt 5
        acl mark_seen sc0_inc_gpc0 gt 0
        stick-table type string size 100k store gpc0_rate(20s)
        tcp-request content track-sc0 src

        use_backend be_429_slow_down if mark_seen too_many_reqs_by_user

backend be_429_slow_down
        mode http
        # hold matching requests for 2s (tarpit), then answer with the
        # contents of 429.http instead of HAProxy's default 500 error page
        timeout tarpit 2s
        errorfile 500 /etc/haproxy/error-files/429.http
        http-request tarpit

@aspiers

aspiers commented May 30, 2017

@mkoderer The exact HTTP response in this example is defined by the contents of /etc/haproxy/error-files/429.http (the 500 in the errorfile directive just tells HAProxy which internal error response to replace, namely the tarpit's default, so clients never actually see a 500), for example:

HTTP/1.1 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/plain
Retry-After: 60
 
Too Many Requests (HAP429).

@matelakat

Internal discussions are needed, and a pull request that can be used as a basis for testing.

@berendt

berendt commented Jul 5, 2017

Have you checked Repose in front of HAProxy, as mentioned by @aspiers? I think this could work for this use case; it includes an OpenStack Identity v3 filter.

https://repose.atlassian.net/wiki/display/REPOSE/Rate+Limiting+filter
https://repose.atlassian.net/wiki/display/REPOSE/OpenStack+Identity+v3+filter

@aspiers

aspiers commented Jul 17, 2017

Last I heard from SAP, it was good enough to limit based on source IP, so the above technique should work OK. If they need to limit per project, then yes, Repose is the way to go, but that would probably take quite a bit more effort to implement.
