
feat(discovery): Implement capped linear backoff and asynchronous triggering #2766

Closed
wants to merge 8 commits into from

Conversation

@nodersteam (Contributor)

Implemented a capped linear backoff mechanism to reduce the frequency of retries during the discovery process. The backoff starts at 5 seconds and increases by a factor of 5, capped at a maximum duration of 10 minutes.

Added asynchronous triggering of the discoveryLoop using the triggerDisc channel.

#2703

feat(discovery): Implement capped linear backoff and asynchronous triggering

This commit introduces two significant changes to the discoveryLoop function in the discovery package. First, we have implemented a capped linear backoff mechanism to reduce the frequency of retries during the discovery process. The backoff starts at 5 seconds and increases by a factor of 5, capped at a maximum duration of 10 minutes. This helps reduce CPU and network load and improves the efficiency of peer discovery.

Second, we have added asynchronous triggering of the discoveryLoop using the triggerDisc channel. This change reduces wasted cycles by triggering the discovery process only when necessary, rather than at a fixed interval.
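The triggering pattern described above can be sketched with a buffered channel: a non-blocking send coalesces pending requests, and the loop wakes on either the periodic ticker or the trigger. Only the name triggerDisc comes from the PR; everything else in this sketch is illustrative, not the actual implementation.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// triggerDisc holds at most one pending trigger; extra requests made
	// while one is pending are simply dropped (coalesced).
	triggerDisc := make(chan struct{}, 1)

	// requestDiscovery asks the loop to run a discovery round soon,
	// without ever blocking the caller.
	requestDiscovery := func() {
		select {
		case triggerDisc <- struct{}{}:
		default: // a trigger is already pending; nothing to do
		}
	}

	done := make(chan struct{})
	go func() {
		defer close(done)
		ticker := time.NewTicker(50 * time.Millisecond)
		defer ticker.Stop()
		for i := 0; i < 2; i++ {
			select {
			case <-ticker.C:
				fmt.Println("periodic discovery round")
			case <-triggerDisc:
				fmt.Println("on-demand discovery round")
			}
		}
	}()

	requestDiscovery() // wakes the loop immediately, before the first tick
	<-done
}
```

The buffered channel of size one is what makes the trigger both asynchronous and idempotent: callers never block, and a burst of requests results in a single extra discovery round.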

Additionally, we have introduced two new constants, backoffInitialDuration and maxBackoffDuration, to control the backoff behavior.
@github-actions github-actions bot added the external Issues created by non node team members label Sep 26, 2023
@ramin ramin added area:p2p kind:feat Attached to feature PRs labels Sep 26, 2023
share/p2p/discovery/discovery.go — 7 resolved review threads (5 outdated)
@codecov-commenter commented Sep 28, 2023

Codecov Report

Merging #2766 (ea4a70c) into main (849cb67) will increase coverage by 0.06%.
The diff coverage is 88.88%.

@@            Coverage Diff             @@
##             main    #2766      +/-   ##
==========================================
+ Coverage   51.92%   51.98%   +0.06%     
==========================================
  Files         161      161              
  Lines       10827    10831       +4     
==========================================
+ Hits         5622     5631       +9     
+ Misses       4710     4705       -5     
  Partials      495      495              
Files Coverage Δ
share/p2p/discovery/discovery.go 76.25% <88.88%> (+0.40%) ⬆️

... and 7 files with indirect coverage changes

@ramin ramin self-requested a review September 28, 2023 08:22
@ramin (Contributor) left a comment


@nodersteam I think baking the backoff into the ticker is the wrong approach. I'd also request splitting the two tasks (backoff, async trigger) into two PRs for ease of review, if you can.

For the backoff, I'd suggest something like adding a backoffDuration time.Duration to the Discovery struct (or even something that wraps it to apply a backoff), then leaving the tick interval the same but adding/updating an incrementing backoff duration on d.discover failures that is checked before executing. That will make things simpler. I'd also suggest adding a little jitter to the backoff interval, to very slightly randomize the backoff times. Want to give it a shot?

@nodersteam (Contributor, Author)

@nodersteam I think baking the backoff into the ticker is the wrong approach. I'd also request splitting the two tasks (backoff, async trigger) into two PRs for ease of review, if you can.

For the backoff, I'd suggest something like adding a backoffDuration time.Duration to the Discovery struct (or even something that wraps it to apply a backoff), then leaving the tick interval the same but adding/updating an incrementing backoff duration on d.discover failures that is checked before executing. That will make things simpler. I'd also suggest adding a little jitter to the backoff interval, to very slightly randomize the backoff times. Want to give it a shot?

Yes, I will try to implement this. The comments left above have already largely been addressed, but I will take your comment into account as well.

@nodersteam (Contributor, Author)

@ramin @walldiss

Hello! What do you think about improving test logging?

I'm currently writing tests to verify that everything is implemented correctly, and I've found that the test logging is not very informative. What do you think of this format? Or is it unnecessary?
[screenshot: proposed test log format]

@ramin (Contributor) commented Oct 9, 2023

@nodersteam I quite like the idea of being able to see more detailed logging when the -v flag is provided while running tests. I'd say propose it with the updates to the PR and explain why it was necessary and helpful, so we can understand while reviewing.

@nodersteam (Contributor, Author) commented Nov 13, 2023

@walldiss Is the question of continuing to search for peers after finding a sufficient number still relevant?

I mean this: #2703

@nodersteam nodersteam closed this Nov 22, 2023
Labels
area:p2p external Issues created by non node team members kind:feat Attached to feature PRs

4 participants