
Using interruptible bidirectional copying to keep idle connections from piling up #4707

Merged
16 commits merged from issue4686 into devel on Jun 21, 2016

Conversation

oxtoacart
Contributor

Using interruptible bidirectional copying to keep idle connections from piling up, closes getlantern/lantern#4686

bufin := pool.Get()
bufout := pool.Get()
defer pool.Put(bufin)
defer pool.Put(bufout)
Contributor Author


Note - using a buffer pool is not necessary to fix this bug, but it's a useful optimization anyway.

Using interruptible bidirectional copying to keep idle connections from piling up, closes getlantern/lantern#4686

// Then start copying from proxy to client.
_, readErr := io.Copy(clientConn, connOut)
writeErr := <-writeErrCh
Contributor


Where in the previous code are connections leaking @oxtoacart?

Contributor Author


There is no leak. It's just that we don't close the connections until both the reading and the writing are done. That meant we were relying on the idle timeout to close the connections, since that's what would eventually stop the reader and writer.

Contributor


Wait sorry but what's bad about not closing the connection until both reading and writing are done? Why would we want to close them before they're done or before the idle timeout @oxtoacart?

Contributor Author


Here's the scenario:

  1. Client connects
  2. Proxy connects to server
  3. Proxy starts piping both ways
  4. Client disconnects
  5. Reading from the client will stop at this point due to an EOF, but writing to the client doesn't stop, so we keep both connections open

The simplistic way to do this is to just close both connections as soon as either io.Copy is finished, which is what we used to do, but that sometimes results in us closing a connection while we're still doing an io op on it. The new code in netx sets timeouts on the io ops and interrupts in between ops.

Contributor


Gotcha @oxtoacart. OK the code looks fine to me. Merging!

@myleshorton
Contributor

Although it looks like one of the netx tests is failing @oxtoacart...

@coveralls

Coverage Status

Coverage increased (+0.6%) to 73.77% when pulling 151ff4e on issue4686 into bf3bb4b on devel.

@coveralls

Coverage Status

Coverage increased (+0.2%) to 73.392% when pulling 532a016 on issue4686 into bf3bb4b on devel.

@oxtoacart
Contributor Author

@myleshorton I fixed the test for Linux.

nr, er := src.Read(buf)
if nr > 0 {
nw, ew := dst.Write(buf[0:nr])
if ew != nil && !isTimeout(ew) {
Contributor Author


No. Timeouts are expected because we're intentionally setting a short deadline to give us a chance to unblock and check the stop flag.

Contributor Author


Ah, yes, great catch!

@coveralls

Coverage Status

Coverage increased (+0.2%) to 73.461% when pulling 5ca3b9d on issue4686 into bf3bb4b on devel.

@oxtoacart
Contributor Author

Oh, my alternative has a flaw. It will close connections prematurely in the following rare case:

  1. The client sends a request and closes.
  2. The server continuously sends large chunks of data back, then closes.

The same flaw exists in the current code.

Great catch!

@coveralls

Coverage Status

Coverage decreased (-0.03%) to 73.188% when pulling f9de69c on issue4686 into bf3bb4b on devel.

@coveralls

Coverage Status

Coverage decreased (-0.1%) to 73.112% when pulling 5964736 on issue4686 into bf3bb4b on devel.

@fffw fffw merged commit f6f203b into devel Jun 21, 2016
@fffw fffw deleted the issue4686 branch June 21, 2016 08:34
oxtoacart pushed a commit that referenced this pull request Apr 28, 2020
Using interruptible bidirectional copying to keep idle connections from piling up