
Use Message Queue as DMP and XCMP dispatch queue #2157

Open · ggwpez wants to merge 144 commits into master

Conversation

@ggwpez ggwpez (Member) commented Jan 31, 2023

Changes

This MR refactors the XCMP, Parachain System and DMP pallets to use the MessageQueue for delayed execution of incoming messages. The DMP pallet is entirely replaced by the MQ and thereby removed. This allows for PoV-bounded execution and resolves a number of issues that stem from the current work-around.

All System Parachains adopt this change.
The most important changes are in primitives/core/src/lib.rs, parachains/common/src/process_xcm_message.rs, pallets/parachain-system/src/lib.rs, pallets/xcmp-queue/src/lib.rs and the runtime configs.

DMP Queue Pallet

The pallet was removed and its logic refactored into parachain-system. Overweight-message management can now be done directly through the MQ pallet.
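For example, a message that the MQ pallet marked as overweight can be retried via its execute_overweight extrinsic. A minimal sketch, where page and index are taken from the OverweightEnqueued event and who is any signed account (all three are placeholders):

MessageQueue::execute_overweight(
	RuntimeOrigin::signed(who),     // anyone may retry the message, paying the fee
	AggregateMessageOrigin::Parent, // the queue that holds the message
	page,                           // page index from the event
	index,                          // message index within the page
	Weight::from_parts(10_000_000_000, 64 * 1024), // weight limit for execution
)?;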

Final undeployment migrations are provided by cumulus_pallet_dmp_queue::UndeployDmpQueue and DeleteDmpQueue, which can be configured with an auxiliary config trait like:

parameter_types! {
	pub const DmpQueuePalletName: &'static str = "DmpQueue"; // CHANGE ME to the name under which the DMP queue pallet is deployed in your runtime.
	pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
}

impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
	type PalletName = DmpQueuePalletName;
	type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
	type DbWeight = <Runtime as frame_system::Config>::DbWeight;
}

// And adding them to your Migrations tuple:
pub type Migrations = (
	...
	cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
	cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
);

XCMP Queue pallet

Removed all dispatch-queue functionality. Incoming XCMP messages are now either handled immediately, if they are Signals, or otherwise enqueued into the MQ pallet.
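An illustrative sketch of this routing (not the exact pallet code; handle_signal and split_concatenated_xcms are hypothetical helper names):

fn handle_xcmp_message(sender: ParaId, format: XcmpMessageFormat, data: &[u8]) {
	match format {
		// Signals (e.g. channel suspend/resume) are cheap and handled inline.
		XcmpMessageFormat::Signals => Self::handle_signal(sender, data),
		// Everything else is enqueued for later, PoV-bounded processing.
		XcmpMessageFormat::ConcatenatedVersionedXcm =>
			T::XcmpQueue::enqueue_messages(Self::split_concatenated_xcms(data), sender),
		// Blob messages would need separate handling (omitted here).
		XcmpMessageFormat::ConcatenatedEncodedBlob => { /* defensive error */ },
	}
}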

New config items for the XCMP queue pallet:

/// The actual queue implementation that retains the messages for later processing.
type XcmpQueue: EnqueueMessage<ParaId>;

/// How an XCM over HRMP from a sibling parachain should be processed.
type XcmpProcessor: ProcessMessage<Origin = ParaId>;

/// The maximal number of suspended XCMP channels at the same time.
#[pallet::constant]
type MaxInboundSuspended: Get<u32>;

How to configure those:

// Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since
// the MQ pallet itself operates on `AggregateMessageOrigin` but we want to enqueue `ParaId`s.
type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;

// Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched
// with origin `Junction::Sibling(…)`.
type XcmpProcessor = ProcessFromSibling<
	ProcessXcmMessage<
		AggregateMessageOrigin,
		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
		RuntimeCall,
	>,
>;

// Not really important what to choose here. Just something larger than the maximal number of channels.
type MaxInboundSuspended = sp_core::ConstU32<1_000>;

The InboundXcmpStatus storage item was replaced by InboundXcmpSuspended, since it now only tracks inbound queue suspension and no longer stores message indices.
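A sketch of the replacement storage item (exact type bounds and visibility may differ in the final code):

#[pallet::storage]
pub type InboundXcmpSuspended<T: Config> =
	StorageValue<_, BoundedBTreeSet<ParaId, T::MaxInboundSuspended>, ValueQuery>;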

Parachain System pallet

DMP messages are no longer forwarded to the DMP pallet; the pallet now pushes them into the configured DmpQueue. The message processing that was previously triggered in set_validation_data is now done by the MQ pallet in on_initialize.

XCMP messages are still handed off to the XcmpMessageHandler (XCMP-Queue pallet) - no change here.

New config items for the parachain system pallet:

/// Queues inbound downward messages for delayed processing. 
///
/// Analogous to the `XcmpQueue` of the XCMP queue pallet.
type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;

How to configure:

/// Use the MQ pallet to store DMP messages for delayed processing.
type DmpQueue = MessageQueue;
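Roughly, each inbound downward message then takes this path (simplified sketch; the real code also maintains the MQC head):

T::DmpQueue::enqueue_message(
	BoundedSlice::truncate_from(&msg[..]), // this sketch just drops bytes past MaxMessageLen
	AggregateMessageOrigin::Parent,        // all DMP shares the Parent origin
);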

Message Flow

The flow of messages on the parachain side. Messages come in from the left via the Validation Data and finally end up at the Xcm Executor on the right.

[Figure: message-flow diagram]
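A rough textual sketch of the same flow:

Validation Data ──▶ parachain-system ──DMP──▶ message-queue ──▶ XCM Executor
                          │                        ▲
                          └─XCMP─▶ xcmp-queue ─────┘
                                   (Signals are handled immediately)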

Further changes

  • Bumped the default suspension, drop and resume thresholds in QueueConfigData::default().
  • XcmpQueue::{suspend_xcm_execution, resume_xcm_execution} now error when they would be a no-op.
  • Properly validate the QueueConfigData before setting it (see the sketch after this list).
  • Marked weight files as auto-generated so they won't auto-expand in the MR files view.
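A hypothetical sketch of that validation (the exact rules may differ):

impl QueueConfigData {
	/// Thresholds must be strictly ordered: resume < suspend < drop.
	fn validate(&self) -> Result<(), ()> {
		if self.resume_threshold < self.suspend_threshold &&
			self.suspend_threshold < self.drop_threshold
		{
			Ok(())
		} else {
			Err(())
		}
	}
}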

Questions:

  • What about the ugly #[cfg(feature = "runtime-benchmarks")] in the runtimes? Not sure how best to fix this. As it stands, tests that rely on the real message processor fail when the feature is enabled.
  • Need a good weight for MessageQueueServiceWeight. The scheduler already takes 80%, so I set it to 10%, but that is quite low (see the sketch below).
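One possible setting for the 10% figure (RuntimeBlockWeights is assumed to be the runtime's block-weight configuration):

parameter_types! {
	/// Amount of block weight the MQ pallet may consume to service queues.
	pub MessageQueueServiceWeight: Weight =
		Perbill::from_percent(10) * RuntimeBlockWeights::get().max_block;
}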

TODO:

@ggwpez ggwpez (Member, Author) commented Aug 17, 2023

> don’t we need to migrate the items from the old pallet to the new pallet? we can’t just delete them

Yes... I added cumulus_pallet_dmp_queue::UndeployDmpQueue and cumulus_pallet_dmp_queue::DeleteDmpQueue as described in the description (they still need to be deployed everywhere).
I will check again, but something like this is probably also needed for the XCMP queue pallet.

@paritytech-cicd-pr commented

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable
Logs: https://gitlab.parity.io/parity/mirrors/cumulus/-/jobs/3424752

@ggwpez ggwpez (Member, Author) commented Aug 24, 2023

@xlc the migration that I added for the DMP messages has the problem that it migrates them all at once.
This can cause overweight blocks and stall the chain.

Maybe we can do a multi-block migration in the DMP pallet without locking the runtime, since the storage is inaccessible anyway.
Then the DMP pallet would migrate the messages one by one into the MQ pallet. The order would be wrong, but XCM cannot rely on ordering anyway (see the sketch below).
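A hypothetical sketch of that lazy approach (take_next_undelivered_dmp_message is an illustrative helper over the old storage; weights are rough placeholders):

fn on_idle(_now: BlockNumberFor<T>, limit: Weight) -> Weight {
	let mut used = Weight::zero();
	let step = T::DbWeight::get().reads_writes(2, 2); // rough per-message cost
	// Move messages one by one until this block's weight budget is used up.
	while used.saturating_add(step).all_lte(limit) {
		match Self::take_next_undelivered_dmp_message() {
			Some(msg) => {
				// `DmpHandler` is the `EnqueueWithOrigin` adapter from the
				// `MigrationConfig` in the description; it enqueues into the MQ pallet.
				T::DmpHandler::handle_message(BoundedSlice::truncate_from(&msg));
				used.saturating_accrue(step);
			},
			None => break, // everything migrated; the pallet can be removed
		}
	}
	used
}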
