Callback interface speedup #1497

Merged: 10 commits, Mar 24, 2023
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -14,6 +14,11 @@

[All changes in [[UnreleasedVersion]]](https://github.com/mozilla/uniffi-rs/compare/v0.23.0...HEAD).

### ⚠️ Breaking Changes ⚠️
- ABI: Implemented a new callback-interface ABI that significantly improves performance on Python and Kotlin.
- UniFFI users will automatically get the benefits of this without any code changes.
- External bindings authors will need to update their bindings code. See PR #1494 for details.

### What's changed

- The `include_scaffolding!()` macro must now either be called from your crate root or you must have `use the_mod_that_calls_include_scaffolding::*` in your crate root. This was always the expectation, but wasn't required before. This will now start failing with errors that say `crate::UniFfiTag` does not exist.
1 change: 1 addition & 0 deletions Cargo.toml
@@ -18,6 +18,7 @@ members = [
"examples/custom-types",
"examples/app/uniffi-bindgen-cli",

"fixtures/benchmarks",
"fixtures/coverall",
"fixtures/callbacks",

27 changes: 27 additions & 0 deletions fixtures/benchmarks/Cargo.toml
@@ -0,0 +1,27 @@
[package]
name = "uniffi-fixture-benchmarks"
edition = "2021"
version = "0.22.0"
authors = ["Firefox Sync Team <sync-team@mozilla.com>"]
license = "MPL-2.0"
publish = false

[lib]
crate-type = ["lib", "cdylib"]
name = "uniffi_benchmarks"
bench = false

[dependencies]
uniffi = {path = "../../uniffi"}
clap = { version = "3.1", features = ["cargo", "std", "derive"] }
criterion = "0.4.0"

[build-dependencies]
uniffi = {path = "../../uniffi", features = ["build"] }

[dev-dependencies]
uniffi_bindgen = {path = "../../uniffi_bindgen"}

[[bench]]
name = "benchmarks"
harness = false
22 changes: 22 additions & 0 deletions fixtures/benchmarks/README.md
@@ -0,0 +1,22 @@
This fixture runs a set of benchmark tests, using Criterion to measure performance.

- `cargo bench` to run all benchmarks.
- `cargo bench -- -p` to run all Python benchmarks (or `-s` for Swift, `-k` for Kotlin)
- `cargo bench -- [glob]` to run a subset of the benchmarks
- `cargo bench -- --help` for more details on the CLI

Benchmarking UniFFI is tricky and involves a bit of ping-pong between Rust and
the foreign language:

- `benchmarks.rs` is the top-level Rust executable where the process starts.
It parses the CLI arguments and determines which languages we want to run
the benchmarks for.
- `benchmarks.rs` executes a script for each foreign language that we want to benchmark.
- Those scripts call the `run_benchmarks()` function from `lib.rs`
- `run_benchmarks()` parses the CLI arguments again, this time to determine how to set up
the `Criterion` object.
- Testing callback interfaces is relatively straightforward: we simply invoke
the callback method.
- Testing regular functions requires some extra care, since these are called
by the foreign bindings. To test those, `benchmarks.rs` invokes a callback
interface method that calls the Rust function and reports the time taken.
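The timing pattern the foreign-language scripts use can be sketched in Python (a simplified illustration of the approach, not the actual fixture code; `run_test_sketch` is a hypothetical helper name):

```python
import time

def run_test_sketch(call, count):
    """Time `count` invocations of `call` and return elapsed nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(count):
        call()
    return time.perf_counter_ns() - start

# Timing a no-op gives a rough floor for per-call overhead.
elapsed = run_test_sketch(lambda: None, 1000)
```

Each foreign script implements this same loop around the generated UniFFI call, then returns the elapsed nanoseconds back to Rust through the callback interface.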
48 changes: 48 additions & 0 deletions fixtures/benchmarks/benches/benchmarks.rs
@@ -0,0 +1,48 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */

use clap::Parser;
use std::env;
use uniffi_benchmarks::Args;
use uniffi_bindgen::bindings::{kotlin, python, swift, RunScriptMode};

fn main() {
let args = Args::parse();
let script_args: Vec<String> = std::iter::once(String::from("--"))
.chain(env::args())
.collect();

if args.should_run_python() {
python::run_script(
std::env!("CARGO_TARGET_TMPDIR"),
"uniffi-fixture-benchmarks",
"benches/bindings/run_benchmarks.py",
script_args.clone(),
RunScriptMode::PerformanceTest,
)
.unwrap()
}

if args.should_run_kotlin() {
kotlin::run_script(
std::env!("CARGO_TARGET_TMPDIR"),
"uniffi-fixture-benchmarks",
"benches/bindings/run_benchmarks.kts",
script_args.clone(),
RunScriptMode::PerformanceTest,
)
.unwrap()
}

if args.should_run_swift() {
swift::run_script(
std::env!("CARGO_TARGET_TMPDIR"),
"uniffi-fixture-benchmarks",
"benches/bindings/run_benchmarks.swift",
script_args,
RunScriptMode::PerformanceTest,
)
.unwrap()
}
}
41 changes: 41 additions & 0 deletions fixtures/benchmarks/benches/bindings/run_benchmarks.kts
@@ -0,0 +1,41 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */

import uniffi.benchmarks.*
import kotlin.system.measureNanoTime

class TestCallbackObj : TestCallbackInterface {
override fun method(a: Int, b: Int, data: TestData): String {
return data.bar;
}

override fun methodWithVoidReturn(a: Int, b: Int, data: TestData) {
}

override fun methodWithNoArgsAndVoidReturn() {
}

override fun runTest(testCase: TestCase, count: ULong): ULong {
val data = TestData("StringOne", "StringTwo")
return when (testCase) {
TestCase.FUNCTION -> measureNanoTime {
for (i in 0UL..count) {
testFunction(10, 20, data)
}
}
TestCase.VOID_RETURN -> measureNanoTime {
for (i in 0UL..count) {
testVoidReturn(10, 20, data)
}
}
TestCase.NO_ARGS_VOID_RETURN -> measureNanoTime {
for (i in 0UL..count) {
testNoArgsVoidReturn()
}
}
}.toULong()
}
}

runBenchmarks("kotlin", TestCallbackObj())
35 changes: 35 additions & 0 deletions fixtures/benchmarks/benches/bindings/run_benchmarks.py
@@ -0,0 +1,35 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

from benchmarks import *
import time

class TestCallbackObj:
def method(self, a, b, data):
return data.bar

def method_with_void_return(self, a, b, data):
pass

def method_with_no_args_and_void_return(self):
pass

def run_test(self, test_case, count):
data = TestData("StringOne", "StringTwo")
if test_case == TestCase.FUNCTION:
start = time.perf_counter_ns()
for i in range(count):
test_function(10, 20, data)
elif test_case == TestCase.VOID_RETURN:
start = time.perf_counter_ns()
for i in range(count):
test_void_return(10, 20, data)
elif test_case == TestCase.NO_ARGS_VOID_RETURN:
start = time.perf_counter_ns()
for i in range(count):
test_no_args_void_return()
end = time.perf_counter_ns()
return end - start

run_benchmarks("python", TestCallbackObj())
52 changes: 52 additions & 0 deletions fixtures/benchmarks/benches/bindings/run_benchmarks.swift
@@ -0,0 +1,52 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */

#if canImport(benchmarks)
import benchmarks
#endif

#if os(Linux)
import Glibc
#else
import Darwin.C
#endif

class TestCallbackObj: TestCallbackInterface {
func method(a: Int32, b: Int32, data: TestData) -> String {
return data.bar
}

func methodWithVoidReturn(a: Int32, b: Int32, data: TestData) {
}

func methodWithNoArgsAndVoidReturn() {
}

func runTest(testCase: TestCase, count: UInt64) -> UInt64 {
let data = TestData(foo: "StringOne", bar: "StringTwo")
let start: clock_t
switch testCase {
case TestCase.function:
start = clock()
for _ in 0...count {
testFunction(a: 10, b: 20, data: data)
}
case TestCase.voidReturn:
start = clock()
for _ in 0...count {
testVoidReturn(a: 10, b: 20, data: data)
}

case TestCase.noArgsVoidReturn:
start = clock()
for _ in 0...count {
testNoArgsVoidReturn()
}
}
let end = clock()
return UInt64((end - start) * 1000000000 / CLOCKS_PER_SEC)
}
}

runBenchmarks(languageName: "swift", cb: TestCallbackObj())
7 changes: 7 additions & 0 deletions fixtures/benchmarks/build.rs
@@ -0,0 +1,7 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */

fn main() {
uniffi::generate_scaffolding("./src/benchmarks.udl").unwrap();
}
40 changes: 40 additions & 0 deletions fixtures/benchmarks/src/benchmarks.udl
@@ -0,0 +1,40 @@
namespace benchmarks {
// Run all benchmarks and print the results to stdout
void run_benchmarks(string language_name, TestCallbackInterface cb);

// Test functions
//
// These are intended to test the overhead of Rust function calls including:
// popping arguments from the stack, unpacking RustBuffers, pushing return
// values back to the stack, etc.

string test_function(i32 a, i32 b, TestData data); // Should return data.bar
void test_void_return(i32 a, i32 b, TestData data);
void test_no_args_void_return();
};

dictionary TestData {
string foo;
string bar;
};

enum TestCase {
"Function",
"VoidReturn",
"NoArgsVoidReturn",
};

callback interface TestCallbackInterface {
// Test callback methods.
//
// These are intended to test the overhead of callback interface calls
// including: popping arguments from the stack, unpacking RustBuffers,
// pushing return values back to the stack, etc.

string method(i32 a, i32 b, TestData data); // Should return data.bar
void method_with_void_return(i32 a, i32 b, TestData data);
void method_with_no_args_and_void_return();

// Run a performance test N times and return the elapsed time in nanoseconds
u64 run_test(TestCase test_case, u64 count);
};
88 changes: 88 additions & 0 deletions fixtures/benchmarks/src/cli.rs
@@ -0,0 +1,88 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */

use clap::Parser;
use criterion::Criterion;

#[derive(Parser, Debug)]
pub struct Args {
// Args to select which test scripts run. These are handled in `benchmarks.rs`.
/// Run Python tests
#[clap(short, long = "py", display_order = 0)]
pub python: bool,
/// Run Kotlin tests
#[clap(short, long = "kt", display_order = 0)]
pub kotlin: bool,
/// Run Swift tests
#[clap(short, long, display_order = 0)]
pub swift: bool,

/// Dump compiler output to the console. Good for debugging new benchmarks.
#[clap(long, display_order = 1, action)]
pub compiler_messages: bool,

// Args for running the metrics, these are handled in `lib.rs`
/// Skip benchmarks whose names do not contain FILTER
#[clap()]
pub filter: Option<String>,

// It would be great to also support the baseline arguments, but there doesn't seem to be any
// way to manually set those.

// Ignore the `--bench` arg, which Cargo passes to us
#[clap(long, hide = true)]
bench: bool,
}

impl Args {
/// Should we run the Python tests?
pub fn should_run_python(&self) -> bool {
self.python || self.no_languages_selected()
}

/// Should we run the Kotlin tests?
pub fn should_run_kotlin(&self) -> bool {
self.kotlin || self.no_languages_selected()
}

/// Should we run the Swift tests?
pub fn should_run_swift(&self) -> bool {
self.swift || self.no_languages_selected()
}

pub fn no_languages_selected(&self) -> bool {
!(self.python || self.kotlin || self.swift)
}

/// Parse arguments for run_benchmarks()
///
/// This is slightly tricky, because run_benchmarks() is called from the foreign bindings side.
/// This means that `std::env::args()` will contain all the arguments needed to run the foreign
/// bindings script, and we don't want to parse all of that.
pub fn parse_for_run_benchmarks() -> Self {
Self::parse_from(
std::env::args()
.into_iter()
// This method finds the first "--" arg, which `benchmarks.rs` inserts to mark the start of
// our arguments.
.skip_while(|a| a != "--")
// Skip over any "--" args. This is mainly for the "--" arg that we use to separate
// our args. However, it's also needed to workaround some kotlinc behavior. We need
// to pass it the "--" arg to get it to start trying to parse our arguments, but
// kotlinc still passes "--" back to us.
.skip_while(|a| a == "--")
.collect::<Vec<String>>(),
)
}

/// Build a Criterion instance from the arguments
pub fn build_criterion(&self) -> Criterion {
let mut c = Criterion::default();
c = match &self.filter {
Some(f) => c.with_filter(f),
None => c,
};
c
}
}
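The argument-slicing behavior of `parse_for_run_benchmarks()` above can be sketched in Python (an illustrative translation of the two `skip_while` calls, not part of the fixture; `extract_benchmark_args` is a hypothetical name):

```python
from itertools import dropwhile

def extract_benchmark_args(argv):
    # Drop everything before the first "--" marker, which benchmarks.rs
    # inserts to mark the start of the benchmark arguments...
    rest = dropwhile(lambda a: a != "--", argv)
    # ...then drop the leading "--" marker(s) themselves; kotlinc may
    # echo an extra "--" back to the script.
    return list(dropwhile(lambda a: a == "--", rest))

# A kotlinc-style argv: everything before "--" belongs to the script runner.
extract_benchmark_args(["kotlinc", "script.kts", "--", "--", "-p", "--compiler-messages"])
# → ["-p", "--compiler-messages"]
```

If no `--` marker is present, the result is empty, mirroring how the Rust iterator chain would yield nothing for clap to parse.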