This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Generic request/response infrastructure for Polkadot #2352

Merged
40 commits merged into master from rk-req-resp-2306
Feb 3, 2021
Merged

Conversation

eskimor
Member

@eskimor eskimor commented Jan 29, 2021

Generic infrastructure for sending and receiving of requests and responses from subsystems.

This PR depends on a Substrate PR, which just got merged, so this should be good to go.

@github-actions github-actions bot added the A0-please_review Pull request needs code review. label Jan 29, 2021
@eskimor eskimor added B0-silent Changes should not be mentioned in any release notes C1-low PR touches the given topic and has a low impact on builders. labels Jan 29, 2021
@eskimor eskimor requested review from drahnr and ordian January 29, 2021 18:57
node/network/protocol/src/request_response.rs Outdated
node/network/bridge/src/multiplexer.rs Outdated
node/network/availability-distribution/src/lib.rs Outdated
@eskimor eskimor requested a review from tomaka January 29, 2021 20:56
eskimor and others added 15 commits February 3, 2021 09:40
It is not protocol related at all; in fact it is only part of the
subsystem communication, as it gets wrapped into the messages of each
subsystem.
The request multiplexer is moved to the bridge, where the implementation is
more straightforward, because we can specialize on `AllMessages` as the
multiplexing target.

Sending of requests is mostly complete, apart from a few `From`
instances. Receiving is also almost done; initialization needs to be
fixed and the multiplexer needs to be invoked.
Subsystems are now able to receive and send requests and responses via
the overseer.
- start encoding at 0
- don't crash on zero protocols
- don't panic on not yet implemented request handling
Use index 0 instead of 1.

Co-authored-by: Andronik Ordian <write@reusable.software>
node/network/bridge/src/lib.rs Outdated
node/network/bridge/src/multiplexer.rs Outdated
Comment on lines +92 to +94
// Receiver is a fused stream, which allows for this simple handling of
// exhausted ones.
Poll::Ready(None) => {}
Member
Is it expected that one of the receiver streams can end?
If not, maybe we should abort in this case by returning `Poll::Ready(None)` instead of silently ignoring it?
Also, is it safe to poll it if `is_terminated` is true? Not sure.

Member Author

Receiver is a fused `Stream`, so it should be safe. Not sure about the first question; it seemed logical that the stream ends once all its sources are exhausted.

The only reason those streams should get exhausted is that we are shutting down, in which case it might or might not matter to deliver any messages still in the queues. It felt a bit safer to try to deliver them until everything is exhausted. On the other hand, receivers will hardly be able to respond anyway ...

I guess that's an edge case, which should not make much of a difference, either way.

Member

> Receiver is a fused Stream, so it should be safe.

It can still panic:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f1ac00093b376fc6cefb41033adff814

Member Author

oh boy, thanks a lot! I could have sworn that it is safe to poll a fused stream after exhaustion.

Member

I think the safest thing would be to early return on the first closed stream, i.e. return Ready(None).

Contributor

@drahnr drahnr Feb 4, 2021

I think the issue is slightly skewed: if one fuses the inner streams so they can be polled, it will work.

So what you want is: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=691bfc7175278e2887340f5289df63d3

Member Author

Ok, so I confused a `Stream` implementing `FusedStream` with a stream that got fused by means of `fuse()`. This definitely deserves a test. Thank you guys. Will provide a fix in a minute.

node/network/bridge/src/multiplexer.rs Outdated
node/network/protocol/src/request_response.rs Outdated
node/network/bridge/src/lib.rs Outdated
Member

@ordian ordian left a comment

Thanks!

node/network/bridge/src/multiplexer.rs Outdated
node/network/bridge/src/multiplexer.rs Outdated
let (p, rx): &mut (_, _) = &mut self.receivers[i % len];
i += 1;
count -= 1;
match Pin::new(rx).poll_next(cx) {
// If at least one stream is pending, then we are not done yet (no
// Ready(None)).
Member Author

At some point, I will get my editor to do this right. It works like 99% of the time, but then suddenly expandtab is activated again for some reason. arrgh

}
};

match action {
Action::Nop => {}
Action::Abort => return Ok(()),
Action::Abort(reason) => match reason {
AbortReason::SubsystemError(err) => {
Contributor

@drahnr drahnr Feb 3, 2021

If AbortReason becomes an Error type, this could become trivial as well.

Member Author

Ok, you have a point. I am already bending my definition of error.

Member Author

Na, I think it is good enough for now :-) I finally want to carry on with availability distribution.

Contributor

@drahnr drahnr left a comment

A few very small nits, other than that looks good to me 👍

eskimor and others added 4 commits February 3, 2021 16:06
Co-authored-by: Bernhard Schuster <bernhard@ahoi.io>
Co-authored-by: Bernhard Schuster <bernhard@ahoi.io>
- Channel size is now determined by a function.
- Explicitly scope NetworkService::start_request.
@eskimor
Member Author

eskimor commented Feb 3, 2021

bot merge

@ghost

ghost commented Feb 3, 2021

Trying merge.

@ghost ghost merged commit ecc3772 into master Feb 3, 2021
@ghost ghost deleted the rk-req-resp-2306 branch February 3, 2021 20:21
ordian added a commit that referenced this pull request Feb 4, 2021
* master:
  Diagnostics quality of life improvements (#2375)
  Generic request/response infrastructure for Polkadot (#2352)
  Fix bug in statement table (#2369)
This pull request was closed.
Labels
A0-please_review Pull request needs code review. B0-silent Changes should not be mentioned in any release notes C1-low PR touches the given topic and has a low impact on builders.

4 participants