feat(discovery): discover triggered too frequently #3550
Comments
@guillaumemichel I agree that the interval for retrying is a bit too aggressive and could be increased, but the actual deadline is set here: `celestia-node/share/p2p/discovery/discovery.go`, lines 286 to 293 in `accb058`.

So basically there will always be a lookup running until enough peers are discovered.

IIRC, that was the intention, but as you mentioned, it might be too aggressive.
Sometimes a node fails to discover any peers on startup. Without discovered peers, the node is unable to perform most of its P2P logic. The downside of a longer cooldown is that, in such cases, the node application halts for the cooldown duration. If some peers are discovered, the node should still aim to find at least 5 (our default) so it relies less on the performance and availability of a single peer. The idea behind aggressive retries is to bootstrap the node into a stable network condition as soon as possible, perhaps at the cost of more resources spent on aggressive discovery, so I think the defaults should stay low. That said, it might be valuable for some users to be able to increase the retry/timeout values if they are less concerned about the node being connected to the FN network.
In the case a node doesn't have its quota of peers, it will send a new `discover` request every second. There is no guarantee that the `discover` request can complete within 1 second (`celestia-node/share/p2p/discovery/discovery.go`, lines 225 to 229 in `accb058`).

This interval seems too aggressive and should probably be increased (e.g. to 1 or even 5 minutes?).