Only trace block with digest #595
Conversation
This supposedly works, if we assume the runtime is always non-faulty and always has an Ethereum block digest. That should be the case for the majority of current production chains.
However, the current mapping sync worker is more tolerant. For example, in future versions of Frontier we may want to allow the block digest generation to span multiple blocks, for cases where we have to "preserve" the block notion imported elsewhere (so we can't split it on the Ethereum side, only on the Substrate side). A more common scenario is that we may want to allow chains to easily remove the EVM pallet without hard forks, or to upgrade the EVM pallet to a completely new one, with a pause of a few blocks due to a multi-block upgrade in place. This PR would break all such use cases.
A simpler and more straightforward change that fulfills your need: in `sync_one_block`, check the header number first; if it's less than a hard-coded number, stop syncing and return early.
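The suggested early return could look roughly like this. This is a hedged sketch, not the actual Frontier code: the `Header` type, the `EVM_ACTIVATION_BLOCK` constant, and the function signature are simplified stand-ins for illustration.

```rust
/// Hypothetical hard-coded height below which no Frontier digests exist
/// (e.g. the block at which the EVM pallet was added to the runtime).
const EVM_ACTIVATION_BLOCK: u64 = 1_000_000;

/// Simplified stand-in for a Substrate block header.
struct Header {
    number: u64,
}

/// Sketch of the suggested change to `sync_one_block`: skip any header
/// below the activation height and return early, reporting that no
/// mapping work was done.
fn sync_one_block(header: &Header) -> Result<bool, String> {
    if header.number < EVM_ACTIVATION_BLOCK {
        // Nothing to map before the pallet existed; skip cheaply.
        return Ok(false);
    }
    // ... the real mapping-sync work would happen here ...
    Ok(true)
}
```

The appeal of this variant is that it needs no on-chain reads at all: a single integer comparison per header decides whether the expensive mapping work runs.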
Indeed, this way is simpler.
Practically, you can just use a larger margin. For example, if you plan to vote on a runtime upgrade now, then just set the current block as the limit. The problem with fetching the data on-chain is that it's expensive, and because this PR is about making data fetching less expensive, that would defeat the purpose. For complicated cases we'll again need a hard-coded value, such as for a chain that activated and later deactivated the EVM pallet.
* Only trace block with digest
* Use hardcode value
* Remove useless clone
* Check operating tips
* Fix corner cases
For chains that want to install the Frontier pallets mid-flight, the mapping-sync-worker helps extract the block and transaction information from the header digest and writes it to RocksDB for all historical blocks. When the original chain is high enough (1M blocks or more), a problem arises: during this catch-up, the node can easily become stuck due to intensive computation.
I made some changes to the `sync_blocks` strategy so that it only tracks and syncs blocks carrying a Frontier digest. Another change makes `retry_limit` configurable; I shortened the default retry count from 8 to 3. Everything works fine in my tests.
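The digest-based filter described above could be sketched as follows. This is an illustrative approximation, not the real implementation: `DigestItem`, `Header`, the `fron` engine id, and the helper names are simplified stand-ins for the actual Substrate/Frontier types.

```rust
/// Stand-in for the consensus engine id that Frontier stamps on headers
/// (assumed here to be the bytes "fron"; the real id may differ).
const FRONTIER_ENGINE_ID: [u8; 4] = *b"fron";

/// Simplified stand-in for Substrate's digest log entries.
enum DigestItem {
    Consensus([u8; 4], Vec<u8>),
    Other(Vec<u8>),
}

/// Simplified stand-in for a block header with its digest logs.
struct Header {
    digest: Vec<DigestItem>,
}

/// True when the header carries a Frontier consensus digest; the sync
/// worker would only map blocks for which this returns true.
fn has_frontier_digest(header: &Header) -> bool {
    header.digest.iter().any(|item| {
        matches!(item, DigestItem::Consensus(id, _) if *id == FRONTIER_ENGINE_ID)
    })
}

/// Retry bookkeeping for the now-configurable limit: keep retrying a
/// failed block only while attempts stay under `retry_limit`
/// (default lowered from 8 to 3 in this PR).
fn should_retry(attempts: u32, retry_limit: u32) -> bool {
    attempts < retry_limit
}
```

With this shape, blocks produced before the pallet was installed simply never match the digest check and are skipped, while the lower retry default keeps the worker from re-processing persistently unmappable blocks.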