light/availability: reevaluate light availability success condition #2780
Light nodes should clearly be doing (2), as it allows for the lowest … What is confusing to me is that I have roughly seen the numbers from the simulation match up in the BarelyRecoverable case, which is before their sampling routines time out. If they don't store anything until after timing out, how is this test case consistently passing with the given parameters?
I think the missing detail here is that the paper is measuring something different in that section, namely "the probability that the players collectively sample at least …"
…ility call (#3239) This PR introduces persistence for sample selection in random sampling. It addresses the issue by storing all failed samples in the datastore, allowing them to be reloaded on the next sampling attempt. This ensures that if the availability call fails fully or partially during the last sampling attempt, the retry will use the same preselected random share coordinates. The provided solution is backward compatible with the previously stored empty byte slice on sampling success, making the change non-breaking for existing storage. Additionally, this PR includes basic refactoring to simplify the concurrency logic in availability. It also ensures that errors returned by the call align with the interface declaration in [availability.go](https://github.com/celestiaorg/celestia-node/blob/main/share/availability.go), enhancing code consistency and maintainability. Resolves #2780
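The persistence described in the PR can be sketched roughly as follows. This is an illustrative model, not the actual celestia-node code: the `Sample` type, the in-memory map standing in for the datastore, and the JSON encoding are all assumptions made for the sketch.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sample is a hypothetical (row, col) coordinate of a share to sample.
type Sample struct {
	Row, Col int
}

// datastore stands in for the real on-disk datastore, keyed by height.
var datastore = map[uint64][]byte{}

// storeFailed persists the coordinates that could not be sampled, so a
// retry for the same height reuses the same selection.
func storeFailed(height uint64, failed []Sample) error {
	b, err := json.Marshal(failed)
	if err != nil {
		return err
	}
	datastore[height] = b
	return nil
}

// loadSelection returns the previously failed samples for a height.
// An empty value stays compatible with the old "success" marker (an
// empty byte slice), in which case a fresh random selection is made.
func loadSelection(height uint64) ([]Sample, error) {
	b, ok := datastore[height]
	if !ok || len(b) == 0 {
		return nil, nil // nothing persisted: pick a fresh random selection
	}
	var failed []Sample
	if err := json.Unmarshal(b, &failed); err != nil {
		return nil, err
	}
	return failed, nil
}

func main() {
	_ = storeFailed(42, []Sample{{Row: 1, Col: 3}, {Row: 0, Col: 7}})
	sel, _ := loadSelection(42)
	fmt.Println("reloaded failed samples:", len(sel))
}
```

The key design point carried over from the PR: an absent or empty value means the previous attempt succeeded (or never ran), so only failed selections are ever reloaded.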
Implementation ideas
Let's evaluate the light sampling algorithm for the withholding attack case, since there seems to be confusion about how this attack should be simulated. #2697 (comment)

The initial simulation script by @distractedm1nd assumes that `k` shares are selected for sampling. If any of the selected shares are unavailable, the light node will store only those it was able to sample from the initial selection.

Another simulation, made by me, assumes that the light node stores `k` shares regardless of the initial selection. In other words, it tries to sample new shares until `k` are stored.

The current implementation of light availability is different and satisfies neither of these assumptions. It selects 16 random shares from the EDS and tries to sample them. It stores all successful samples to the blockstore, but if any sample fails, the whole sampling operation fails. In the withholding attack case there is a high probability that some samples are unavailable. A failed sampling operation causes the DASer to retry sampling for the same height. A new attempt selects a new set of 16 random shares regardless of previous results, and all successfully sampled shares from subsequent calls are stored in addition to those from previous attempts. Subsequent attempts still have the same probabilistic nature of success/failure, which can result in multiple retries, and with every attempt the number of successfully sampled shares will likely grow beyond the initially preset sampling limit `k`.

Let's check an example. For the sake of simplicity, some values are lowered from our defaults:

In the example you can see how more than `k` samples are stored by the light node.

Let's evaluate how many shares need to be sampled in the withholding attack case and modify light availability accordingly if needed.