This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Upgrade wasm crate dependencies #12173

Merged
merged 7 commits into from
Sep 8, 2022

Conversation

athei
Member

@athei athei commented Sep 2, 2022

This upgrades:

parity-wasm 0.42 -> 0.45
wasm-instrument 0.1 -> 0.2
wasmi 0.9 -> 0.13

These upgrades contain no big changes, merely bug fixes. wasmi is upgraded to the last version before bigger changes were made. We will upgrade further only once we have done more testing: paritytech/roadmap#9
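For reference, the bumps above amount to Cargo.toml entries roughly like the following (a sketch only; the exact entries, features, and `default-features` settings in the actual PR may differ):

```toml
[dependencies]
parity-wasm = { version = "0.45", default-features = false }
wasm-instrument = { version = "0.2", default-features = false }
wasmi = { version = "0.13", default-features = false }
```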

We should also do a burn-in with wasmi as an execution engine.

cc @pepyakin @Robbepop

@athei athei added A0-please_review Pull request needs code review. B0-silent Changes should not be mentioned in any release notes C1-low PR touches the given topic and has a low impact on builders. A1-needs_burnin Pull request needs to be tested on a live validator node before merge. DevOps is notified via matrix D5-nicetohaveaudit ⚠️ PR contains trivial changes to logic that should be properly reviewed. labels Sep 2, 2022
Contributor

@Robbepop Robbepop left a comment


LGTM

Contributor

@koute koute left a comment


LGTM.

Performance-wise, just for your reference, I have a few benchmarks lying around that I created recently (when investigating the random performance regression on the wasmtime bump); here they are:

```
coremark,wasmi_009_under_wasmtime_040,217
coremark,wasmi_013_under_wasmtime_040,203
coremark,wasmi_016_under_wasmtime_040,180
regexredux,wasmi_009_under_wasmtime_040,2280
regexredux,wasmi_013_under_wasmtime_040,2386
regexredux,wasmi_016_under_wasmtime_040,2645
noop,wasmi_009_under_wasmtime_040,9237
noop,wasmi_013_under_wasmtime_040,9208
noop,wasmi_016_under_wasmtime_040,4352
```

The first column is the name of the benchmark, the second is the executor, and the third is the score or time. For coremark higher is better; for regexredux and noop lower is better. It seems that (at least in these benchmarks) the higher you bump wasmi, the worse the performance gets in the CPU-heavy benchmarks (coremark and regexredux), while the performance in the instantiation-heavy noop benchmark gets better on 0.16.
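To make the trend in these numbers easier to see, here is a small script (hypothetical, not part of the benchmark suite) that computes the relative change of each wasmi version against the 0.9 baseline, using the figures above:

```python
# Scores/times from the benchmark output above, keyed by (benchmark, version).
results = {
    ("coremark", "wasmi_009"): 217, ("coremark", "wasmi_013"): 203,
    ("coremark", "wasmi_016"): 180,
    ("regexredux", "wasmi_009"): 2280, ("regexredux", "wasmi_013"): 2386,
    ("regexredux", "wasmi_016"): 2645,
    ("noop", "wasmi_009"): 9237, ("noop", "wasmi_013"): 9208,
    ("noop", "wasmi_016"): 4352,
}

def rel_change(bench, version):
    """Percentage change of the raw number vs the wasmi 0.9 baseline.

    Note: for coremark a negative value is a regression (score),
    while for regexredux/noop a negative value is an improvement (time).
    """
    base = results[(bench, "wasmi_009")]
    return 100.0 * (results[(bench, version)] - base) / base

for bench in ("coremark", "regexredux", "noop"):
    for version in ("wasmi_013", "wasmi_016"):
        print(f"{bench} {version}: {rel_change(bench, version):+.1f}%")
```

This makes the pattern explicit: the coremark score drops by roughly 17% from 0.9 to 0.16, while the noop time roughly halves.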

@athei
Member Author

athei commented Sep 7, 2022

/cmd queue -c bench-bot $ pallet dev pallet_contracts

@command-bot

command-bot bot commented Sep 7, 2022

@athei https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/1818470 was started for your command "$PIPELINE_SCRIPTS_DIR/bench-bot.sh" pallet dev pallet_contracts. Check out https://gitlab.parity.io/parity/mirrors/substrate/-/pipelines?page=1&scope=all&username=group_605_bot to know what else is being executed currently.

Comment /cmd cancel 54-23866352-2a7f-48c6-acca-3c37b3e700a2 to cancel this command or /cmd cancel to cancel all commands in this pull request.

@Robbepop
Contributor

Robbepop commented Sep 7, 2022

> LGTM.
>
> Performance-wise just for your reference, I have a few benchmarks laying around that I've created recently (when investigating the random performance regression on wasmtime bump); here they are:
>
> ```
> coremark,wasmi_009_under_wasmtime_040,217
> coremark,wasmi_013_under_wasmtime_040,203
> coremark,wasmi_016_under_wasmtime_040,180
> regexredux,wasmi_009_under_wasmtime_040,2280
> regexredux,wasmi_013_under_wasmtime_040,2386
> regexredux,wasmi_016_under_wasmtime_040,2645
> noop,wasmi_009_under_wasmtime_040,9237
> noop,wasmi_013_under_wasmtime_040,9208
> noop,wasmi_016_under_wasmtime_040,4352
> ```
>
> First column is the name of the benchmark, the second column is the executor, and the third is the score or time. For coremark higher is better, for regexredux and noop lower is better. It seems that (at least in these benchmarks) the higher you bump wasmi the worse the performance gets in CPU-heavy benchmarks (coremark and regexredux) while the performance in an instantiation-heavy benchmark noop gets better on 0.16.

Have you used

```toml
[profile.release]
lto = "fat"
codegen-units = 1
```

for wasmi compilation while profiling? Especially for newer wasmi versions this makes a huge performance difference (>200%).
More information here: wasmi-labs/wasmi#339

@koute
Contributor

koute commented Sep 7, 2022

> Have you used
>
> ```toml
> [profile.release]
> lto = "fat"
> codegen-units = 1
> ```
>
> for wasmi compilation while profiling? Especially for newer wasmi versions this makes a huge performance difference (>200%). More information here: paritytech/wasmi#339

Yes.

All of the WASM kernels were compiled with the following profile:

```toml
[profile.lto]
inherits = "release"
lto = true
codegen-units = 1
```

(AFAIK `true` is the same as `"fat"`.)

@Robbepop
Contributor

Robbepop commented Sep 7, 2022

@koute Is there a way to check this? I have often had problems properly setting up the profile for dependencies (in workspaces). The thing is that wasmi 0.16.0 should be vastly more efficient than versions 0.9.0 and 0.13.0, but this benchmark shows quite the opposite, which would only be true if the wrong profile was selected, since the newer wasmi versions are quite a lot slower with the wrong profiles.
For example, on non-Wasm platforms we measured a 65%-120% performance improvement for wasmi 0.16.0 compared with wasmi 0.8.0.

> [profile.lto]

Maybe you forgot to set the lto profile?
An easy way to check this is to compare performance when benchmarking explicitly with the standard release profile, which should yield drastic performance differences for wasmi 0.16.0.
This branch holds the coremark benchmark updated with Wasmtime 0.4.0, Wasm3 0.3.1 and wasmi 0.16.0 for native compilation performance comparison.
https://github.com/Robbepop/wasm-coremark-rs/tree/rf-update-vms-v2

> (AFAIK true is same as "fat")

Yes, although less readable, so I'd propose changing all occurrences to "fat".

@athei
Member Author

athei commented Sep 7, 2022

@Robbepop you conducted your benchmarks only with wasmi compiled to native, right? These benchmarks run it under wasmtime, which could completely change the game.

I am currently running a burn-in (syncing Kusama) with this branch. If nothing comes up I will merge it.

@Robbepop
Contributor

Robbepop commented Sep 7, 2022

> @Robbepop you conducted your benchmarks only with wasmi compiled to native, right? These benchmarks run it under wasmtime which could completely change the game.

I am aware that Wasm compilation might change things, but I doubt that the difference between Wasm and native compilation run on a JIT compiler is as big as claimed in the benchmarks above. That's it. I'd like to have the sources for those benchmarks so I can convince myself.

Basically, what @koute measured looks suspiciously similar to the numbers I was seeing when I chose the wrong profile (e.g. the default release profile) on native compilation.

@command-bot

command-bot bot commented Sep 7, 2022

@athei Command "$PIPELINE_SCRIPTS_DIR/bench-bot.sh" pallet dev pallet_contracts has finished. Result: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/1818470 has finished. If any artifacts were generated, you can download them from https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/1818470/artifacts/download.

@athei
Member Author

athei commented Sep 8, 2022

/cmd queue -c bench-bot $ pallet dev pallet_contracts

@command-bot

command-bot bot commented Sep 8, 2022

@athei https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/1820800 was started for your command "$PIPELINE_SCRIPTS_DIR/bench-bot.sh" pallet dev pallet_contracts. Check out https://gitlab.parity.io/parity/mirrors/substrate/-/pipelines?page=1&scope=all&username=group_605_bot to know what else is being executed currently.

Comment /cmd cancel 56-1ce2d388-931b-403e-bc12-e429663ee29e to cancel this command or /cmd cancel to cancel all commands in this pull request.

@koute
Contributor

koute commented Sep 8, 2022

> @koute Is there a way to check this?

Not that I know of, although you could probably do something like this in build.rs to check that the profile matches:

```rust
pub fn main() {
    assert_eq!(std::env::var("PROFILE").unwrap(), "lto");
}
```

> Maybe forgot to set the lto profile? An easy way to check this is to compare performance when benchmarking explicitly with the standard release profile which should yield drastic performance differences for wasmi 0.16.0.

I might have screwed up something else, but this I shouldn't have. (:

Here are the results for the standard release profile (the underlying kernels which wasmi runs are still compiled with lto) in comparison with the previous numbers and also some numbers without wasmtime:

| benchmark | runtime | release | lto |
| --- | --- | --- | --- |
| coremark | wasmi_009_under_wasmtime_040 | 248 | 217 |
| coremark | wasmi_013_under_wasmtime_040 | 263 | 203 |
| coremark | wasmi_016_under_wasmtime_040 | 170 | 180 |
| coremark | wasmi_009_on_bare_metal | 452 | 373 |
| coremark | wasmi_013_on_bare_metal | 393 | 411 |
| coremark | wasmi_016_on_bare_metal | 253 | 581 |
| regexredux | wasmi_009_under_wasmtime_040 | 2022 | 2280 |
| regexredux | wasmi_013_under_wasmtime_040 | 2022 | 2386 |
| regexredux | wasmi_016_under_wasmtime_040 | 2816 | 2645 |
| regexredux | wasmi_009_on_bare_metal | 1269 | 1353 |
| regexredux | wasmi_013_on_bare_metal | 1333 | 1241 |
| regexredux | wasmi_016_on_bare_metal | 1902 | 891 |
| noop | wasmi_009_under_wasmtime_040 | 10562 | 9237 |
| noop | wasmi_013_under_wasmtime_040 | 9197 | 9208 |
| noop | wasmi_016_under_wasmtime_040 | 6347 | 4352 |
| noop | wasmi_009_on_bare_metal | 1762 | 1697 |
| noop | wasmi_013_on_bare_metal | 1791 | 1823 |
| noop | wasmi_016_on_bare_metal | 761 | 748 |
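One way to read these numbers: for the CPU-bound coremark runs on bare metal, the lto-to-release ratio shows how much each wasmi version gains from fat LTO. A quick back-of-the-envelope check using only the figures above (coremark is a score, so higher is better):

```python
# (release, lto) coremark bare-metal scores taken from the results above.
coremark_bare_metal = {
    "wasmi_009": (452, 373),
    "wasmi_013": (393, 411),
    "wasmi_016": (253, 581),
}

def lto_speedup(version):
    """Ratio of the lto score to the release score; >1 means LTO helps."""
    release, lto = coremark_bare_metal[version]
    return lto / release

for version in sorted(coremark_bare_metal):
    print(f"{version}: {lto_speedup(version):.2f}x")
```

Only wasmi 0.16 gains dramatically from LTO here (roughly 2.3x on this benchmark), which is consistent with the earlier claim that the profile matters hugely for newer wasmi versions; interestingly, 0.9 appears to regress slightly under the lto profile in this run.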

@koute
Contributor

koute commented Sep 8, 2022

@Robbepop Okay, I've cleaned up my benchmarks just a little bit and pushed them into a repo; you can check them out here if you want: https://github.com/koute/wasm-bench

@command-bot

command-bot bot commented Sep 8, 2022

@athei Command "$PIPELINE_SCRIPTS_DIR/bench-bot.sh" pallet dev pallet_contracts has finished. Result: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/1820800 has finished. If any artifacts were generated, you can download them from https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/1820800/artifacts/download.

@Robbepop
Contributor

Robbepop commented Sep 8, 2022

@koute Thanks a lot for making the benchmarks available. They provided me with some valuable insights.

I was able to reproduce your results on my own machine, where newer wasmi versions perform worse than older ones.
However, coming from the ink! project, I already knew that Rust+LLVM do quite a bad job of optimizing for Wasm targets.
Therefore I ran Binaryen's wasm-opt over the wasmi_kernel.wasm of every wasmi version and reran the benchmarks.

These are the results I got afterwards:

| Benchmark | wasmi and Wasmtime versions | Result |
| --- | --- | --- |
| regexredux | wasmi_016_under_wasmtime_040 | 1433 |
| regexredux | wasmi_013_under_wasmtime_040 | 2769 |
| regexredux | wasmi_009_under_wasmtime_040 | 1947 |
| regexredux | wasmi_016_under_wasmtime_036 | 1590 |
| regexredux | wasmi_013_under_wasmtime_036 | 2213 |
| regexredux | wasmi_009_under_wasmtime_036 | 1795 |
| regexredux | wasmi_016_on_bare_metal | 792 |
| regexredux | wasmi_013_on_bare_metal | 1243 |
| regexredux | wasmi_009_on_bare_metal | 1315 |
| coremark | wasmi_016_under_wasmtime_040 | 364.5422 |
| coremark | wasmi_013_under_wasmtime_040 | 158.76796 |
| coremark | wasmi_009_under_wasmtime_040 | 243.26955 |
| coremark | wasmi_016_under_wasmtime_036 | 311.86652 |
| coremark | wasmi_013_under_wasmtime_036 | 217.20244 |
| coremark | wasmi_009_under_wasmtime_036 | 281.19507 |
| coremark | wasmi_016_on_bare_metal | 693.30646 |
| coremark | wasmi_013_on_bare_metal | 433.965 |
| coremark | wasmi_009_on_bare_metal | 415.36865 |
| noop | wasmi_016_under_wasmtime_040 | 9500 |
| noop | wasmi_013_under_wasmtime_040 | 18258 |
| noop | wasmi_009_under_wasmtime_040 | 18179 |
| noop | wasmi_016_under_wasmtime_036 | 8391 |
| noop | wasmi_013_under_wasmtime_036 | 16593 |
| noop | wasmi_009_under_wasmtime_036 | 16401 |
| noop | wasmi_016_on_bare_metal | 1525 |
| noop | wasmi_013_on_bare_metal | 3511 |
| noop | wasmi_009_on_bare_metal | 2996 |

In conclusion, after wasm-opt -O4 is applied to the wasmi_kernel.wasm, wasmi 0.16.0 performs by far the best under Wasmtime, whereas wasmi 0.13.0 performs the worst. However, not all wasmi versions perform better after being post-optimized by wasm-opt; version 0.13.0 seems to regress.

My recommendation is still to use wasm-opt for the Substrate runtime if we do not do so already.
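For illustration, a post-optimization step like the one described could be scripted as follows (a sketch only; `wasm-opt` must be installed from Binaryen, the file names are placeholders, and the helper names are hypothetical):

```python
import shutil
import subprocess

def wasm_opt_cmd(wasm_in, wasm_out, level=4):
    """Build the Binaryen wasm-opt command line for a given optimization level."""
    return ["wasm-opt", f"-O{level}", wasm_in, "-o", wasm_out]

def post_optimize(wasm_in, wasm_out):
    """Run wasm-opt -O4 over a .wasm blob; returns False if wasm-opt is missing."""
    if shutil.which("wasm-opt") is None:
        return False
    subprocess.run(wasm_opt_cmd(wasm_in, wasm_out), check=True)
    return True
```

Guarding on `shutil.which` mirrors the concern raised below about not forcing everyone who compiles Substrate to have wasm-opt installed.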

@athei
Member Author

athei commented Sep 8, 2022

We don't. I have a branch where I tested this. However, it is a bit experimental because the wasm-opt Rust wrapper is still experimental. I intend to merge it as soon as that becomes more stable. I don't want to require users to install wasm-opt to compile Substrate.

@athei
Member Author

athei commented Sep 8, 2022

bot merge

@paritytech-processbot paritytech-processbot bot merged commit b8c3d8c into master Sep 8, 2022
@paritytech-processbot paritytech-processbot bot deleted the at/pariy-wasm branch September 8, 2022 12:48
ark0f pushed a commit to gear-tech/substrate that referenced this pull request Feb 27, 2023
* Upgrade wasm crate dependencies

* New wasmi version changed error output a bit

* ".git/.scripts/bench-bot.sh" pallet dev pallet_contracts

* ".git/.scripts/bench-bot.sh" pallet dev pallet_contracts

Co-authored-by: command-bot <>
@Polkadot-Forum

This pull request has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/exploring-alternatives-to-wasm-for-smart-contracts/2434/20
