Per-pallet child tries #363
Comments
What should be the child info used here? Should the module be the child trie at the path …
Yes. Child trie prefixes, in addition to being unique, need to avoid starting with another existing child prefix (to avoid making assumptions about the keys used by the module). So the child trie prefix only gets the collision resistance of the smallest prefix used; at this point, contracts allow a 256-bit prefix. I would keep something that is at least 128 bits (not sure if that's enough). But if we consider that pallet modules using those macros cannot use random keys (IIRC they are already using some twox hashes over field ids), and if the smaller field prefix is a twox_64, then we can use a hash of size "targeted_collision_resistance_size - 64". If we don't implement batch deletion and consider that a pallet module cannot write random keys, then it should be fine to just use twox_64; it relies on the same assumption as the module prefix definition (keys used internally in RocksDB will be fairly similar).
Also, we could do more precise stuff, such as the module being at …
Yes, that seems like a possibility; those could even be different child_trie kinds.
Maybe the simplest way to have this unique_id not collide with anything is to make its length 256 bits; thus we could use a unique id like …
I unassigned for two weeks but here is my current progress:

**First idea:** I wanted to introduce a new structure:

```rust
/// An abstract storage. Can be used to generalize over a prefixed space in the top trie, or in a
/// child trie.
pub trait StorageSpace {
    type StorageSpaceIterator: Iterator<Item = (Vec<u8>, Vec<u8>)>;
    /// Get a Vec of bytes from storage.
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    /// Put a raw byte slice into storage.
    fn put(&self, key: &[u8], value: &[u8]);
    /// Check to see if `key` has an explicit entry in storage.
    fn exists(&self, key: &[u8]) -> bool;
    /// Ensure `key` has no explicit entry in storage.
    fn kill(&self, key: &[u8]);
    /// Ensure keys with the given `prefix` have no entries in storage.
    fn kill_prefix(&self, prefix: &[u8]);
    /// Iterate over all (key, value) pairs in storage after the given `prefix`.
    fn iter_prefix(&self, prefix: &[u8]) -> Self::StorageSpaceIterator;
}
```

This trait would be implemented for three structures:
I would imagine these space structures to implement migration from one to another, just by iterating over one space while removing its values and adding them to the new space.

**Second idea:** (in this part we talk about `StorageMap`, but this is generalizable to all 4 storage kinds)

Note: such a `StorageSpace` is an OK idea to me, though we should notice that it doesn't cover everything, for example a migration between … Considering this limitation, maybe it is better not to introduce this intermediate abstraction and instead just add a new associated type to our `StorageMap`, which would be:

```rust
trait StorageMap {
    type Trie: Trie;
    fn hashed_module_prefix() -> &'static [u8]; // make it empty if you are in a module unique child trie
    fn hashed_storage_prefix() -> &'static [u8]; // make it empty if you are in a module_storage unique child trie
}
```

Having these module prefixes hashed at compile time is doable: you can't import sp_core at compile time because it seems features are mixed by cargo, but you can import the twox-hash crate and reimplement twox_128 yourself. (Though we might want to create an sp_core_compile_time crate or some such thing.)

So implementing all kinds of migration from one `StorageMap` to another should be possible:

```rust
migration_same_hasher<Hasher, StorageMap1<Hasher = Hasher>, StorageMap2<Hasher = Hasher>>(..)
migration_hasher_concat<
    Hasher1: HasherConcat,
    Hasher2: HasherConcat,
    StorageMap1<Hasher1>,
    StorageMap2<Hasher2>,
>(..)
```

But when it comes to lazy migration, it is more difficult: the only thing I see is to create a new trait `StorageMapLazyMigration` which would have two associated types, `StorageMap1` and `StorageMap2`, and do a lazy migration between both. Note that maybe lazy migration is not very interesting as is, or should be done in some other way.

Current WIP branch is gui-module-in-childtries: https://github.com/paritytech/substrate/compare/gui-module-in-childtries?diff=unified&expand=1
Yeah, I figured I wasn't going to be the only one who's thought of this. I wonder if there's a tracking issue for custom keys for storage items 🤔
@thiolliere any updates on this one?
There should be a variant of `decl_storage` that stores all items in a child trie rather than the main trie. Storage format/hashing should be equivalent, except that there should be no module prefix.
Additionally, there should be migration code allowing for modules using the existing system (e.g. Kusama) to migrate their modules' storage over to per-pallet child tries during an upgrade.
Changes tries should be made to work on child tries as they do on the main trie.