Ensure metadata is in sync with running node during tests #333

Merged · 15 commits · Nov 29, 2021
23 changes: 22 additions & 1 deletion .github/workflows/rust.yml
@@ -23,6 +23,13 @@ jobs:
- name: Checkout sources
uses: actions/checkout@v2

- name: Download Substrate
run: |
curl "https://releases.parity.io/substrate/x86_64-debian:stretch/latest/substrate/substrate" --output substrate --location
chmod +x substrate
mkdir -p ~/.local/bin
mv substrate ~/.local/bin

- name: Install Rust stable toolchain
uses: actions-rs/toolchain@v1
with:
@@ -70,6 +77,13 @@ jobs:
- name: Checkout sources
uses: actions/checkout@v2

- name: Download Substrate
run: |
curl "https://releases.parity.io/substrate/x86_64-debian:stretch/latest/substrate/substrate" --output substrate --location
Collaborator Author:

I don't much like the repetition of this curl thing, but the alternate way of sharing artifacts between jobs works out to be about as verbose...

Contributor:

:/

chmod +x substrate
mkdir -p ~/.local/bin
mv substrate ~/.local/bin

- name: Install Rust stable toolchain
uses: actions-rs/toolchain@v1
with:
@@ -93,7 +107,7 @@ jobs:
- name: Download Substrate
run: |
curl "https://releases.parity.io/substrate/x86_64-debian:stretch/latest/substrate/substrate" --output substrate --location
chmod +x ./substrate
chmod +x substrate
mkdir -p ~/.local/bin
mv substrate ~/.local/bin

@@ -120,6 +134,13 @@ jobs:
- name: Checkout sources
uses: actions/checkout@v2

- name: Download Substrate
run: |
curl "https://releases.parity.io/substrate/x86_64-debian:stretch/latest/substrate/substrate" --output substrate --location
chmod +x substrate
mkdir -p ~/.local/bin
mv substrate ~/.local/bin

- name: Install Rust stable toolchain
uses: actions-rs/toolchain@v1
with:
3 changes: 2 additions & 1 deletion Cargo.toml
@@ -1,5 +1,5 @@
[workspace]
members = [".", "cli", "codegen", "macro"]
members = [".", "cli", "codegen", "macro", "test-runtime"]

[package]
name = "subxt"
@@ -48,6 +48,7 @@ env_logger = "0.8.3"
tempdir = "0.3.7"
wabt = "0.10.0"
which = "4.0.2"
test-runtime = { path = "test-runtime" }

sp-keyring = { package = "sp-keyring", git = "https://github.com/paritytech/substrate/", branch = "master" }

1 change: 0 additions & 1 deletion macro/src/lib.rs
@@ -39,7 +39,6 @@ struct GeneratedTypeDerives(Punctuated<syn::Path, syn::Token![,]>);
pub fn subxt(args: TokenStream, input: TokenStream) -> TokenStream {
let attr_args = parse_macro_input!(args as syn::AttributeArgs);
let item_mod = parse_macro_input!(input as syn::ItemMod);

let args = match RuntimeMetadataArgs::from_list(&attr_args) {
Ok(v) => v,
Err(e) => return TokenStream::from(e.write_errors()),
14 changes: 14 additions & 0 deletions test-runtime/Cargo.toml
@@ -0,0 +1,14 @@
[package]
name = "test-runtime"
version = "0.1.0"
edition = "2021"

Contributor:

A description would be good here.

[dependencies]
subxt = { path = ".." }
sp-runtime = { package = "sp-runtime", git = "https://github.com/paritytech/substrate/", branch = "master" }
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }

[build-dependencies]
hex = "0.4.3"
reqwest = { version = "0.11.6", features = ["blocking", "json"] }
Contributor:

Since we already depend on subxt above we could avoid this extra (another!) http client dependency. Should be able to use subxt::RpcClient

Collaborator Author:

Good point! I'll make use of that or one of the other http deps instead!

Member:

> Also, I remember someone in my presentation (@athei or @dvdplm?) suggesting we could even enable the option for the macro to fetch the metadata from the running substrate node, which may be something we can do in the future for this use case.

Yeah this was my suggestion. To download the metadata inside the proc macro.

Collaborator Author (@jsdw, Nov 26, 2021):

Interesting! I reckon that's best left for a separate PR if we want it. It does strike me that adding the feature to the macro would only really be beneficial for this sort of testing though, so I actually quite like the idea that the test-specific logic is kept separate and that the macro doesn't become a little more complex to handle it :)

serde_json = "1.0.72"
10 changes: 10 additions & 0 deletions test-runtime/README.md
@@ -0,0 +1,10 @@
# test-runtime

The logic for this crate exists mainly in the `build.rs` file.

At compile time, this crate will:
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otehrwise it'll look for `substrate` on your PATH).
Contributor:

Suggested change
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otehrwise it'll look for `substrate` on your PATH).
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otherwise it'll look for `substrate` on your PATH).

- Obtain metadata from this node.
Contributor:

Just a thought: perhaps the substrate node should have a sub-command `metadata dump /path/to/metadata-dumpfile.scale` that does this work? Feels a bit awkward to start a full node and run queries against it.

Contributor:

That's a good idea

Collaborator Author:

I assume that the metadata dump command would itself start the substrate node and such. Wrapping this logic into a separate command that's useful on its own does sound like a good idea to me too!

- Export the metadata and a `node_runtime` module which has been annotated using the `subxt` proc macro and is based off the above metadata.

The reason for doing this is that our integration tests (which also spin up a Substrate node) can then use the generated `subxt` types from the exact node being tested against, so that we don't have to worry about metadata getting out of sync with the binary under test.
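
As an aside (not part of the diff), a minimal sketch of how a test can lean on these exports — `METADATA` and `node_runtime` are the names this PR introduces, and the test itself is illustrative only:

use test_runtime::METADATA;

#[test]
fn bundled_metadata_is_present() {
    // METADATA is the SCALE-encoded metadata captured from the locally spawned
    // substrate node at build time (see build.rs below).
    assert!(!METADATA.is_empty());
    // The generated `test_runtime::node_runtime` module is produced from that same
    // metadata by the `#[subxt::subxt]` macro, so its types cannot drift from the
    // binary the integration tests run against.
}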
152 changes: 152 additions & 0 deletions test-runtime/build.rs
@@ -0,0 +1,152 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.

use serde_json::json;
use std::{
env,
fs,
net::TcpListener,
path::Path,
process::Command,
sync::atomic::{
AtomicU16,
Ordering,
},
thread,
time,
};

static SUBSTRATE_BIN_ENV_VAR: &str = "SUBSTRATE_NODE_PATH";

fn main() {
// Select substrate binary to run based on env var.
let substrate_bin =
env::var(SUBSTRATE_BIN_ENV_VAR).unwrap_or_else(|_| "substrate".to_owned());

// Run binary.
let port = next_open_port()
.expect("Cannot spawn substrate: no available ports in the given port range");
let cmd = Command::new(&substrate_bin)
.arg("--dev")
.arg("--tmp")
.arg(format!("--rpc-port={}", port))
.spawn();
let mut cmd = match cmd {
Ok(cmd) => cmd,
Err(e) => {
panic!("Cannot spawn substrate command '{}': {}", substrate_bin, e)
}
};

// Download metadata from binary; retry until successful, or a limit is hit.
let res: serde_json::Value = {
const MAX_RETRIES: usize = 20;
let mut retries = 0;
let mut wait_secs = 1;
loop {
if retries >= MAX_RETRIES {
panic!("Cannot connect to substrate node after {} retries", retries);
}
let res = reqwest::blocking::Client::new()
.post(format!("http://localhost:{}", port))
.json(&json!({
"id": 1,
"jsonrpc": "2.0",
"method": "state_getMetadata",
}))
.send();
match res {
Ok(res) if res.status().is_success() => {
let _ = cmd.kill();
break res
.json()
.expect("valid JSON response from substrate node expected")
}
_ => {
thread::sleep(time::Duration::from_secs(wait_secs));
retries += 1;
wait_secs += 1;
}
};
}
};
let metadata_hex = res["result"]
.as_str()
.expect("Metadata should be returned as a string of hex encoded SCALE bytes");
let metadata_bytes = hex::decode(&metadata_hex.trim_start_matches("0x")).unwrap();

// Save metadata to a file:
let out_dir = env::var_os("OUT_DIR").unwrap();
let metadata_path = Path::new(&out_dir).join("metadata.scale");
fs::write(&metadata_path, metadata_bytes).expect("Couldn't write metadata output");

// Write out our expression to generate the runtime API to a file. Ideally, we'd just write this code
// in lib.rs, but we must pass a string literal (and not `concat!(..)`) as an arg to runtime_metadata_path,
// and so we need to spit it out here and include it verbatim instead.
let runtime_api_contents = format!(
"
#[subxt::subxt(
runtime_metadata_path = \"{}\",
generated_type_derives = \"Debug, Eq, PartialEq\"
)]
pub mod node_runtime {{
#[subxt(substitute_type = \"sp_arithmetic::per_things::Perbill\")]
use sp_runtime::Perbill;
}}
",
metadata_path
.to_str()
.expect("Path to metadata should be stringifiable")
);
let runtime_path = Path::new(&out_dir).join("runtime.rs");
fs::write(&runtime_path, runtime_api_contents)
.expect("Couldn't write runtime rust output");

// Re-build if we point to a different substrate binary:
println!("cargo:rerun-if-env-changed={}", SUBSTRATE_BIN_ENV_VAR);
// Re-build if this file changes:
println!("cargo:rerun-if-changed=build.rs");
}

/// Returns the next open port, or None if no port found in range.
fn next_open_port() -> Option<u16> {
Contributor:

I wonder whether we could extract TestNodeProcess and other test utils to their own crate. Not only could we reuse the node spawning logic, but we could publish that crate for use by other projects for testing their nodes. I would certainly use it (this code is originally copied from cargo-contract).

Just an idea, don't need to do it here.

Collaborator Author:

I had a very similar thought doing this! I didn't really fancy the amount of churn in this PR, but I can imagine it would be useful!

Contributor (@ascjones, Nov 26, 2021):

Will be worth doing, separate PR probably better - must remember to remove the duplication here.

Member:

why aren't we just binding to 0.0.0.0:0 instead and letting the OS find the next free port?

Contributor (@ascjones, Nov 26, 2021):

That is copied from code I wrote in haste, so most likely not the best way to do it. If that works then let's do it.

Edit: we can probably use that or something similar, since this code was originally designed for all the tests which spin up in parallel and so are all attempting to claim ports.

Collaborator Author (@jsdw, Nov 26, 2021):

I do like the 0.0.0.0:0 approach normally; the complexity is that you then need to read the actual port used back to connect to the service, so I think I'd need to monitor the stdout (unless you have a better idea; that's what I did in telemetry anyway?).

In the other tests, we need to grab 3 ports by the looks of things, so it is a little trickier!

Collaborator Author:

But yes, in this build script, it's probably a little more robust to scan for a port than rely on any specific stdout from substrate on the matter (but if there's a better way to see which port was used, that'd be great!).
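
Not part of the diff — just a rough std-only sketch of the bind-to-port-0 approach being weighed up in this thread (the helper name is made up for illustration):

use std::net::TcpListener;

// Bind to port 0 so the OS assigns any free port, then read back which port
// was chosen. The port is only held while `listener` is alive; once dropped,
// another process could grab it before substrate does, and substrate still has
// to be told which port to use — hence the range scan used in this build script.
fn os_assigned_port() -> std::io::Result<u16> {
    let listener = TcpListener::bind(("127.0.0.1", 0))?;
    Ok(listener.local_addr()?.port())
}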

/// The start of the port range to scan.
const START_PORT: u16 = 9900;
/// The end of the port range to scan.
const END_PORT: u16 = 10000;
/// The maximum number of ports to scan before giving up.
const MAX_PORTS: u16 = 1000;

let next_port: AtomicU16 = AtomicU16::new(START_PORT);
let mut ports_scanned = 0u16;
loop {
// Loop back from the beginning if needed
let _ = next_port.compare_exchange(
END_PORT,
START_PORT,
Ordering::SeqCst,
Ordering::SeqCst,
);
let next = next_port.fetch_add(1, Ordering::SeqCst);
if TcpListener::bind(("0.0.0.0", next)).is_ok() {
return Some(next)
}
ports_scanned += 1;
if ports_scanned == MAX_PORTS {
return None
}
}
}
12 changes: 4 additions & 8 deletions tests/integration/runtime.rs → test-runtime/src/lib.rs
@@ -16,11 +16,7 @@

#![allow(clippy::too_many_arguments)]

#[subxt::subxt(
runtime_metadata_path = "tests/integration/node_runtime.scale",
generated_type_derives = "Debug, Eq, PartialEq"
)]
pub mod node_runtime {
#[subxt(substitute_type = "sp_arithmetic::per_things::Perbill")]
use sp_runtime::Perbill;
}
/// The SCALE encoded metadata obtained from a local run of a substrate node.
pub static METADATA: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/metadata.scale"));

include!(concat!(env!("OUT_DIR"), "/runtime.rs"));
2 changes: 1 addition & 1 deletion tests/integration/client.rs
@@ -15,9 +15,9 @@
// along with subxt. If not, see <http://www.gnu.org/licenses/>.

use crate::{
runtime::node_runtime::system,
test_node_process,
test_node_process_with,
utils::node_runtime::system,
};

use sp_core::storage::{
6 changes: 4 additions & 2 deletions tests/integration/main.rs
@@ -15,13 +15,15 @@
// along with subxt. If not, see <http://www.gnu.org/licenses/>.

mod codegen;
mod runtime;
mod utils;

#[cfg(test)]
mod client;
#[cfg(test)]
mod frame;

pub use runtime::node_runtime;
pub use test_runtime::{
node_runtime,
METADATA,
};
pub use utils::*;
Binary file removed tests/integration/node_runtime.scale
8 changes: 2 additions & 6 deletions tests/integration/utils/node_proc.rs
@@ -64,13 +64,9 @@ where

/// Attempt to kill the running substrate process.
pub fn kill(&mut self) -> Result<(), String> {
log::info!("Killing contracts node process {}", self.proc.id());
log::info!("Killing node process {}", self.proc.id());
if let Err(err) = self.proc.kill() {
let err = format!(
"Error killing contracts node process {}: {}",
self.proc.id(),
err
);
let err = format!("Error killing node process {}: {}", self.proc.id(), err);
log::error!("{}", err);
return Err(err)
}