
Add a fn listen_addresses() to NetworkService #6409

Closed
cecton wants to merge 2 commits from the cecton-network-listen-addresses branch

Conversation


@cecton cecton commented Jun 18, 2020

Use case

I'm writing a test crate for Polkadot and a simple test that ensures blocks are being built. For that I need to start a few nodes on random ports. To do so I pass port 0 so that a free port gets assigned, but then I need to know which port has been assigned to each node.

Proposed solution

The only way I found is to call listen_addresses on the NetworkWorker, but since I only have a NetworkService, I added a listen_addresses function to NetworkService that forwards the request to the NetworkWorker.

Related PR and alternative solution chosen: paritytech/polkadot#1258
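
For illustration only, here is a rough sketch of the shape such a forwarding method could take. The ServiceToWorkerMsg pattern comes from the diff further down; the ListenAddresses variant and the oneshot reply channel are hypothetical names, not this PR's actual code:

// Hypothetical sketch, not the PR's actual implementation (assumes
// `use futures::channel::oneshot;` in the module): the service forwards a
// request to the background NetworkWorker, which owns the libp2p swarm and
// knows the real listeners, and awaits the reply.
pub async fn listen_addresses(&self) -> Result<Vec<Multiaddr>, oneshot::Canceled> {
    let (tx, rx) = oneshot::channel();
    let _ = self
        .to_worker
        .unbounded_send(ServiceToWorkerMsg::ListenAddresses(tx));
    rx.await
}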

@cecton cecton added the A0-please_review (Pull request needs code review.), B0-silent (Changes should not be mentioned in any release notes.), and C1-low (PR touches the given topic and has a low impact on builders.) labels Jun 18, 2020
@cecton cecton self-assigned this Jun 18, 2020

tomaka commented Jun 18, 2020

For testing, note that the network can be put into a "memory-only" mode:

The idea is to generate u64 port numbers ahead of time (using rand::random::<u64>()), then you pass addresses like /memory/1234 as listening addresses.

Example usage:

fn build_nodes() -> (Swarm<CustomProtoWithAddr>, Swarm<CustomProtoWithAddr>) {
    let mut out = Vec::with_capacity(2);

    let keypairs: Vec<_> = (0..2).map(|_| libp2p::identity::Keypair::generate_ed25519()).collect();
    let addrs: Vec<Multiaddr> = (0..2)
        .map(|_| format!("/memory/{}", rand::random::<u64>()).parse().unwrap())
        .collect();

    for index in 0 .. 2 {
        let keypair = keypairs[index].clone();
        let local_peer_id = keypair.public().into_peer_id();
        let transport = libp2p::core::transport::MemoryTransport
            .and_then(move |out, endpoint| {
                let secio = libp2p::secio::SecioConfig::new(keypair);
                libp2p::core::upgrade::apply(
                    out,
                    secio,
                    endpoint,
                    libp2p::core::upgrade::Version::V1
                )
            })
            .and_then(move |(peer_id, stream), endpoint| {
                libp2p::core::upgrade::apply(
                    stream,
                    libp2p::yamux::Config::default(),
                    endpoint,
                    libp2p::core::upgrade::Version::V1
                )
                .map_ok(|muxer| (peer_id, libp2p::core::muxing::StreamMuxerBox::new(muxer)))
            })
            .timeout(Duration::from_secs(20))
            .map_err(|err| io::Error::new(io::ErrorKind::Other, err))
            .boxed();

        let (peerset, _) = sc_peerset::Peerset::from_config(sc_peerset::PeersetConfig {
            in_peers: 25,
            out_peers: 25,
            bootnodes: if index == 0 {
                keypairs
                    .iter()
                    .skip(1)
                    .map(|keypair| keypair.public().into_peer_id())
                    .collect()
            } else {
                vec![]
            },
            reserved_only: false,
            priority_groups: Vec::new(),
        });

        let behaviour = CustomProtoWithAddr {
            inner: GenericProto::new(local_peer_id, &b"test"[..], &[1], peerset, None),
            addrs: addrs
                .iter()
                .enumerate()
                .filter_map(|(n, a)| if n != index {
                    Some((keypairs[n].public().into_peer_id(), a.clone()))
                } else {
                    None
                })
                .collect(),
        };

        let mut swarm = Swarm::new(
            transport,
            behaviour,
            keypairs[index].public().into_peer_id()
        );
        Swarm::listen_on(&mut swarm, addrs[index].clone()).unwrap();
        out.push(swarm);
    }

    // Final output
    let mut out_iter = out.into_iter();
    let first = out_iter.next().unwrap();
    let second = out_iter.next().unwrap();
    (first, second)
}

Beyond solving the problem you're having, it also avoids the risk of running out of TCP ports when several heavy tests are running at the same time.


@tomaka tomaka left a comment

The PR looks good, but as explained I suggest you use the "memory-only" mode.

@@ -747,6 +747,15 @@ impl<B: BlockT + 'static, H: ExHashT> NetworkService<B, H> {
            .unbounded_send(ServiceToWorkerMsg::UpdateChain);
    }

    /// Returns a stream containing the listen addresses

Suggested change
/// Returns a stream containing the listen addresses
/// Returns the list of addresses we are currently listening on.
///
/// Please be aware that this list might change over time.

@cecton cecton marked this pull request as draft June 18, 2020 14:55

cecton commented Jun 18, 2020

This is AWESOME!! Thanks! ... 🤔 this is strange. Now my listen_addresses function returns an empty Vec. Shouldn't it contain the memory address?

(I understand I might not need this function anymore but I'm curious)


tomaka commented Jun 18, 2020

Shouldn't it have the memory address?

Oh, this might be a bug in rust-libp2p. The list of listened addresses is reported by the low-level transport code (such as the TCP code), and I think we don't do that for the memory transport.


tomaka commented Jun 18, 2020

Nevermind, we do it: https://github.com/libp2p/rust-libp2p/blob/168b00ed3e8999904e075469eb867d692e04487f/core/src/transport/memory.rs#L179-L182

Another possibility is that because opening listeners is actually asynchronous in libp2p, you're processing the nodes in such a way that you get the list of listened addresses before the first listener has opened.


cecton commented Jun 18, 2020

I did some experimenting, but I have a new issue: it seems my nodes are not connected for some reason (it says "0 peers")... I haven't investigated yet.

2020-06-18 18:08:46 Can't listen on /memory/18035446117391323389 because: MultiaddrNotSupported("/memory/18035446117391323389")
2020-06-18 18:08:51 💤 Idle (0 peers), best: #0 (0x854f…eb5f), finalized #0 (0x854f…eb5f), ⬇ 0 ⬆ 0
2020-06-18 18:08:54 💤 Idle (0 peers), best: #0 (0x854f…eb5f), finalized #0 (0x854f…eb5f), ⬇ 0 ⬆ 0
2020-06-18 18:08:56 💤 Idle (0 peers), best: #0 (0x854f…eb5f), finalized #0 (0x854f…eb5f), ⬇ 0 ⬆ 0

Also: after tinkering with all of this, I have the feeling that listen_addresses would still be good to have because it reduces the code on my side (I won't have to create a MultiaddrWithPeerId struct to determine the boot_node; see the sketch below).
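
For reference, this is roughly the boilerplate being referred to, as a sketch: it assumes the test already knows each node's identity keypair and the u64 memory port it picked, and it uses sc_network::config::MultiaddrWithPeerId to hand the peer to the other nodes as a boot node.

// Sketch only: `node_keypair`, `port` and `network_config` are placeholders
// for values the test harness already has.
use sc_network::config::MultiaddrWithPeerId;

let boot_node = MultiaddrWithPeerId {
    multiaddr: format!("/memory/{}", port).parse().expect("valid multiaddr"),
    peer_id: node_keypair.public().into_peer_id(),
};
// Every other node lists this entry in its network configuration.
network_config.boot_nodes.push(boot_node);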


cecton commented Jun 19, 2020

Thanks for the help @tomaka! I had to set the TransportConfig of the sc_service::Configuration to MemoryOnly to make it work. I might open up a PR to improve the error message on that.
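
A minimal sketch of that configuration, assuming a config: sc_service::Configuration being built for a test node (field names as in sc_network::config at the time):

use sc_network::config::TransportConfig;

// /memory/... listen addresses are only accepted when the transport itself is
// memory-only; otherwise they get rejected as MultiaddrNotSupported, as in the
// log above.
config.network.transport = TransportConfig::MemoryOnly;
config.network.listen_addresses =
    vec![format!("/memory/{}", rand::random::<u64>()).parse().expect("valid multiaddr")];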

I'm closing this PR for now, as it would be unnecessary code to maintain, and I will post my solution once it is merged.

@cecton cecton closed this Jun 19, 2020
@cecton cecton deleted the cecton-network-listen-addresses branch June 19, 2020 08:41