[Merged by Bors] - Commands benchmarking #2334


Conversation

@NathanSWard (Contributor)

Objective

  • Currently the Commands and CommandQueue have no performance testing.
  • As Commands are quite expensive due to the Box<dyn Command> allocated for each command, there should be perf tests for implementations that attempt to improve the performance.

Solution

  • Add some benchmarking for Commands and CommandQueue.
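For illustration only (not code from this PR): a minimal sketch of what a Criterion benchmark over Commands and a CommandQueue might look like. It assumes a recent Bevy API (Commands::new, CommandQueue::apply, Commands::spawn taking a bundle); the component A, the entity count, and the bench name are made up.

```rust
use bevy::ecs::prelude::*;
use bevy::ecs::system::CommandQueue;
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical payload component; any Send + Sync + 'static type would do.
#[derive(Component)]
struct A(u32);

fn spawn_commands(criterion: &mut Criterion) {
    criterion.bench_function("spawn_commands", |bencher| {
        let mut world = World::default();
        bencher.iter(|| {
            // Record 10_000 spawn commands, then apply the whole queue, so the
            // measurement covers both queueing and applying.
            let mut queue = CommandQueue::default();
            let mut commands = Commands::new(&mut queue, &world);
            for i in 0..black_box(10_000u32) {
                commands.spawn(A(i));
            }
            queue.apply(&mut world);
        });
    });
}

criterion_group!(benches, spawn_commands);
criterion_main!(benches);
```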

@NathanSWard (Contributor, Author)

I'm not too happy with these current benchmarks.
However, this was split out from #2332 to reduce the scope of that PR.
If you have suggestions on better perf tests for commands, please let me know!

@TheRawMeatball (Member)

A seeded RNG for optionally adding these components would be useful imo
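For illustration (not code from the PR, and the thread later drops this idea in favor of hard-coded shapes): a sketch of how a seeded RNG could decide whether an entity gets an optional component. It assumes rand 0.8 (StdRng, gen_bool) and a recent Bevy Commands API; A and B are hypothetical components.

```rust
use bevy::prelude::*;
use rand::{rngs::StdRng, Rng, SeedableRng};

#[derive(Component, Default)]
struct A(u32);
#[derive(Component, Default)]
struct B(u32);

// A fixed seed keeps the mix of archetypes identical across benchmark runs,
// even though each individual entity gets the optional component "randomly".
fn spawn_with_optional_component(commands: &mut Commands, count: usize) {
    let mut rng = StdRng::seed_from_u64(42);
    for _ in 0..count {
        let mut entity = commands.spawn(A::default());
        if rng.gen_bool(0.5) {
            entity.insert(B::default());
        }
    }
}
```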

@NathanSWard force-pushed the nward/commands-perf-testing branch 2 times, most recently from 3d19323 to 873906c on June 14, 2021
@mockersf (Member)

Looking at the benches, it feels like they test more entity spawning and component insertion than commands themselves.

Would it be interesting to add a dummy command that doesn't do much and use it in a bench?

@NathanSWard (Contributor, Author)

> Looking at the benches, it feels like they test more entity spawning and component insertion than commands themselves.
> Would it be interesting to add a dummy command that doesn't do much and use it in a bench?

Yep, I like this idea :)
I went ahead and added benchmarks for FakeCommands

@mockersf (Member) commented Jun 16, 2021

To show the impact of the size of the command even more clearly, you could try struct FakeCommand<T> {field: T} and run benches for each of:

  • FakeCommand<()> typical empty command like spawning an entity
  • FakeCommand<(u32, u32, u32)> typical small command like inserting a component
  • FakeCommand<LargeStruct> typical large command like inserting a bundle
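For illustration (not the code that was merged): a rough sketch of a generic FakeCommand<T> bench along these lines, assuming the by-value Command::write signature discussed later in this thread; payload sizes and bench names are placeholders.

```rust
use bevy::ecs::system::{Command, CommandQueue};
use bevy::ecs::world::World;
use criterion::{black_box, Criterion};

// A command that does no real work; its only job is to carry a payload of a
// given size so the bench isolates queueing/applying overhead.
struct FakeCommand<T> {
    data: T,
}

impl<T: Send + Sync + 'static> Command for FakeCommand<T> {
    fn write(self, _world: &mut World) {
        black_box(self.data);
    }
}

fn bench_fake_commands<T>(criterion: &mut Criterion, name: &str)
where
    T: Default + Send + Sync + 'static,
{
    criterion.bench_function(name, |bencher| {
        let mut world = World::default();
        bencher.iter(|| {
            let mut queue = CommandQueue::default();
            for _ in 0..10_000 {
                queue.push(FakeCommand { data: T::default() });
            }
            queue.apply(&mut world);
        });
    });
}

// Usage, e.g. inside a criterion_group target:
// bench_fake_commands::<()>(c, "fake_command_empty");              // like spawning an entity
// bench_fake_commands::<(u32, u32, u32)>(c, "fake_command_small"); // like inserting a component
// bench_fake_commands::<[u64; 16]>(c, "fake_command_large");       // like inserting a bundle
```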

@NathanSWard (Contributor, Author)

> to show even more the impact of the size of the command, you could try with struct FakeCommand<T> {field: T} and run benches for each of:

Would the idea be to run benchmarks where you add only FakeCommand<()> N times to the command queue?
(In other words, you don't mix different-sized commands?)

@NathanSWard (Contributor, Author)

> FakeCommand<LargeStruct> typical large command like inserting a bundle

Also, by LargeStruct, how large are we talking?

@NathanSWard force-pushed the nward/commands-perf-testing branch 2 times, most recently from 16adc08 to d221214 on June 18, 2021
@mockersf (Member)

> Would the idea be to run benchmarks where you add only FakeCommand<()> N times to the command queue?

... probably? 😄

> Also, by LargeStruct, how large are we talking?

I think you picked a good value 👍

My idea was to explore how your other PR on commands would impact various-sized commands; looks like you found something interesting with 12-byte commands!

@NathanSWard (Contributor, Author) commented Jun 20, 2021

Ideally I'd like to merge this first, then merge #2332.
However, I'll have to change one or the other, because the Command trait changed from self: Box<Self> to self.
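For context (a paraphrase, not the exact Bevy code): the signature change being referred to looks roughly like this, with made-up trait names used only to show before and after side by side.

```rust
use bevy::ecs::world::World;

// Before: every queued command lived behind a Box<dyn Command> and was
// applied through that box.
trait CommandBefore: Send + Sync + 'static {
    fn write(self: Box<Self>, world: &mut World);
}

// After: commands are applied by value, which makes queue designs possible
// that don't need one Box allocation per command.
trait CommandAfter: Send + Sync + 'static {
    fn write(self, world: &mut World);
}
```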

@cart (Member) commented Jun 22, 2021

RNG has overhead that I'm worried will get in the way of showing the actual cost of these operations. The goal of RNG here is to produce "different archetype shapes", but we could almost as easily hard-code these shapes, which would also give us the benefit of reproducibility.

@NathanSWard (Contributor, Author)

> RNG has overhead that I'm worried will get in the way of showing the actual cost of these operations. The goal of RNG here is to produce "different archetype shapes", but we could almost as easily hard-code these shapes, which would also give us the benefit of reproducibility.

Hopefully a seeded RNG would lessen the impact of the randomness; however, it is still RNG, so... 🤷
It's not so much to make archetype shapes, but rather to have the CommandQueue not know what type of command will be pushed to it.

However, we can achieve similar results with just a modulo inside a black_box, so we should all be good :)
I went ahead and removed the RNG.
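For illustration (not the merged code): a sketch of the modulo-inside-black_box idea, again assuming the by-value Command::write signature; SmallCommand and LargeCommand are made-up stand-ins.

```rust
use bevy::ecs::system::{Command, CommandQueue};
use bevy::ecs::world::World;
use criterion::black_box;

struct SmallCommand;
struct LargeCommand([u64; 16]);

impl Command for SmallCommand {
    fn write(self, _world: &mut World) {}
}
impl Command for LargeCommand {
    fn write(self, _world: &mut World) {}
}

// black_box hides the value of `i % 2` from the optimizer, so the queue can't
// be specialized for a single command type, while the pushed sequence stays
// identical on every run (no RNG needed).
fn push_mixed(queue: &mut CommandQueue, count: usize) {
    for i in 0..count {
        if black_box(i % 2) == 0 {
            queue.push(SmallCommand);
        } else {
            queue.push(LargeCommand([0; 16]));
        }
    }
}
```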

@mockersf (Member)

bors r+

bors bot pushed a commit that referenced this pull request Jun 28, 2021
The bors bot changed the title to "[Merged by Bors] - Commands benchmarking" and closed this pull request on Jun 28, 2021.