Separate keep-alive and inactivity timeouts for all built-in web servers #1493

Merged: 2 commits from faster_timeout into master on Apr 19, 2020

Conversation

kraih (Member) commented Apr 18, 2020

This patch is preparation for #1492. It adds a low-level way to reset timers, which is much more efficient than completely replacing a timer every time you want to change a setting like inactivity_timeout.
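To illustrate the difference, here is a rough sketch of the two patterns (illustrative only, not the exact server internals; delays and callbacks are placeholders):

use Mojo::Base -strict;
use Mojo::IOLoop;

my $reactor = Mojo::IOLoop->singleton->reactor;

# Before: changing a timeout meant removing the old timer and creating a new one
my $id = $reactor->timer(15 => sub { warn "Timed out\n" });
$reactor->remove($id);
$id = $reactor->timer(30 => sub { warn "Timed out\n" });

# After: the existing timer can be restarted in place with a new delay
$reactor->again($id, 30);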

Baseline: the results are pretty much equal with and without the patch (EV):

$ perl -Ilib -Mojo -E 'a(sub ($c) { $c->render(text => "Hello Mojo!") })->start' daemon -m production
Server available at http://127.0.0.1:3000

$ wrk -c 20 -d 10 http://127.0.0.1:3000
Running 10s test @ http://127.0.0.1:3000
  2 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.46ms  735.87us  18.77ms   92.09%
    Req/Sec     1.06k    44.73     1.15k    69.50%
  21183 requests in 10.03s, 3.11MB read
Requests/sec:   2111.02
Transfer/sec:    317.85KB

With patch applied (EV):

$ perl -Ilib -Mojo -E 'a(sub ($c) { $c->inactivity_timeout(30); $c->render(text => "Hello Mojo!") })->start' daemon -m production
Server available at http://127.0.0.1:3000

$ wrk -c 20 -d 10 http://127.0.0.1:3000
Running 10s test @ http://127.0.0.1:3000
  2 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.66ms    0.96ms  23.77ms   93.89%
    Req/Sec     1.04k    43.88     1.11k    64.50%
  20761 requests in 10.05s, 3.05MB read
Requests/sec:   2066.38
Transfer/sec:    311.13KB

Without patch applied (EV):

$ perl -Mojo -E 'a(sub ($c) { $c->inactivity_timeout(30); $c->render(text => "Hello Mojo!") })->start' daemon -m production
Server available at http://127.0.0.1:3000

$ wrk -c 20 -d 10 http://127.0.0.1:3000
Running 10s test @ http://127.0.0.1:3000
  2 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.16ms    0.91ms  20.89ms   93.22%
    Req/Sec     0.99k    41.46     1.10k    73.00%
  19736 requests in 10.04s, 2.90MB read
Requests/sec:   1965.20
Transfer/sec:    295.88KB

With patch applied (Poll):

$ MOJO_REACTOR=Mojo::Reactor::Poll perl -Ilib -Mojo -E 'a(sub ($c) { $c->inactivity_timeout(30); $c->render(text => "Hello Mojo!") })->start' daemon -m production
Server available at http://127.0.0.1:3000

$ wrk -c 20 -d 10 http://127.0.0.1:3000
Running 10s test @ http://127.0.0.1:3000
  2 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.92ms    0.92ms  25.57ms   93.01%
    Req/Sec     1.01k    47.22     1.11k    74.50%
  20207 requests in 10.04s, 2.97MB read
Requests/sec:   2012.91
Transfer/sec:    303.09KB

Without patch applied (Poll):

$ MOJO_REACTOR=Mojo::Reactor::Poll perl -Mojo -E 'a(sub ($c) { $c->inactivity_timeout(30); $c->render(text => "Hello Mojo!") })->start' daemon -m production
Server available at http://127.0.0.1:3000

$ wrk -c 20 -d 10 http://127.0.0.1:3000
Running 10s test @ http://127.0.0.1:3000
  2 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.15ms    0.87ms  20.10ms   92.19%
    Req/Sec     0.99k    38.45     1.10k    78.00%
  19723 requests in 10.03s, 2.90MB read
Requests/sec:   1967.33
Transfer/sec:    296.20KB

kraih (Member, Author) commented Apr 18, 2020

Calling for a vote @mojolicious/core.

@kraih added the vote label on Apr 18, 2020
kraih (Member, Author) commented Apr 18, 2020

Instead of adding a new API, I've just reused $reactor->again($id) as $reactor->again($id, $seconds). It does raise the bar a little for third-party reactors, but being able to split inactivity_timeout into inactivity_timeout and keep_alive_timeout is probably worth it.
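For a third-party reactor, supporting this is roughly a one-line change in again. A minimal sketch of what it could look like in a Poll-style backend (the {timers} structure here is just for illustration, the real data structure may differ):

use Mojo::Util qw(steady_time);

sub again {
  my ($self, $id, $after) = @_;
  my $timer = $self->{timers}{$id} // return;
  $timer->{after} = $after if defined $after;    # new optional argument: replace the delay
  $timer->{time}  = steady_time + $timer->{after};
}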

Grinnz (Contributor) commented Apr 18, 2020

I don't mind updating my reactor backends to support this, but what's the practical impact if someone updates Mojolicious without updating the reactor backend they're using?

kraih (Member, Author) commented Apr 18, 2020

Actually, I've amended the PR with the keep_alive_timeout feature, since it was really easy now. Thanks to the optimisation it costs pretty much nothing, and I can't measure a difference with the micro-benchmark above.
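With the daemon that would look something like this (just a sketch, attribute names as used in this PR):

use Mojo::Server::Daemon;

my $daemon = Mojo::Server::Daemon->new(
  listen             => ['http://*:3000'],
  inactivity_timeout => 30,  # connections with a request in flight
  keep_alive_timeout => 5    # idle keep-alive connections between requests
);
$daemon->run;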

kraih (Member, Author) commented Apr 18, 2020

@Grinnz Since I've never used an alternative reactor backend, I have no idea. Timeout reassignment will probably just not do anything.

@kraih changed the title from "Allow timers to be reset for better performance" to "Separate keep-alive and inactivity timeouts for all built-in web servers" on Apr 18, 2020
marcusramberg (Member) commented
I'm voting 👍

kraih (Member, Author) commented Apr 19, 2020

To be clear, this is actually an important feature that I strongly believe should be added. It addresses fairly common problems with the current inactivity timeout.

To give one example I ran into at work: we had a micro service running in prefork mode with 12 workers. Because the service could run fairly long SQL queries, we had to limit concurrency to one request per worker and increase the inactivity timeout a bit, so the service could handle 12 requests at a time. Then we unexpectedly had more than 12 clients, and those clients were configured to use keep-alive. And boom, everything broke, because idle keep-alive connections blocked all the workers.

There are many more such scenarios. With this feature I could simply increase the inactivity timeout to, say, 300 seconds (for very, very slow SQL queries) and set the keep-alive timeout to 5 seconds, letting idle connections get closed very quickly unless multiple requests arrive in quick succession.
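For the service above that would translate to a Hypnotoad config along these lines (a sketch, assuming the new setting is exposed to Hypnotoad under the same keep_alive_timeout name):

# myapp.conf
{
  hypnotoad => {
    workers            => 12,
    clients            => 1,    # one request per worker because of the long-running SQL
    inactivity_timeout => 300,  # allow very slow queries to finish
    keep_alive_timeout => 5     # but drop idle keep-alive connections quickly
  }
};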

jhthorsen (Member) left a comment

Thank you for the example.

I'm 👍 on this change. I don't think most users have custom reactors, so I'm not concerned about Grinnz' comment.

Grinnz (Contributor) left a comment

This will change timeouts for keep-alive connections for those who have increased inactivity timeouts, but that shouldn't actually affect anything other than possibly making more reconnections necessary. As best I can tell, with a third-party reactor that does not yet support it, it will simply fail to adjust the timeout to the keep-alive timeout when the connection becomes inactive. Since having these timeouts separate is a very useful feature, I am voting in favor. 👍

@kraih merged commit 4452004 into master on Apr 19, 2020
jberger (Member) commented Apr 20, 2020

I missed the vote with a busy weekend, but I'll still hop in here with a 👍

@kraih deleted the faster_timeout branch on May 2, 2020 at 16:30