
# Objective

- Our benchmarks and `compile_fail` tests lag behind the rest of the engine because they are not in the Cargo workspace, so they are not checked by CI.
- Fixes #16801, please see it for further context!

## Solution

- Add benchmarks and `compile_fail` tests to the Cargo workspace.
- Fix any leftover formatting issues and documentation.

## Testing

- I think CI should catch most things!

## Questions

<details>
<summary>Outdated issue I was having with function reflection being optional</summary>

The `reflection_types` example is failing in Rust-Analyzer for me, but not in a normal check.

```rust
error[E0004]: non-exhaustive patterns: `ReflectRef::Function(_)` not covered
  --> examples/reflection/reflection_types.rs:81:11
   |
81 |     match value.reflect_ref() {
   |           ^^^^^^^^^^^^^^^^^^^ pattern `ReflectRef::Function(_)` not covered
   |
note: `ReflectRef<'_>` defined here
  --> /Users/bdeep/dev/bevy/bevy/crates/bevy_reflect/src/kind.rs:178:1
    |
178 | pub enum ReflectRef<'a> {
    | ^^^^^^^^^^^^^^^^^^^^^^^
...
188 |     Function(&'a dyn Function),
    |     -------- not covered
    = note: the matched value is of type `ReflectRef<'_>`
help: ensure that all possible cases are being handled by adding a match arm with a wildcard pattern or an explicit pattern as shown
    |
126 ~             ReflectRef::Opaque(_) => {},
127 +             ReflectRef::Function(_) => todo!()
    |
```

I think it is because the following line is feature-gated:

cc0f6a8db4/examples/reflection/reflection_types.rs (L117-L122)

My theory for why this is happening is that the benchmarks enable `bevy_reflect`'s `function` feature, which gets merged with the rest of the features when Rust-Analyzer checks the workspace, but the `#[cfg(...)]` gate in the example isn't detecting it:

cc0f6a8db4/benches/Cargo.toml (L19)

Any thoughts on how to fix this? It's not blocking, since the example still compiles as normal, but RA and the command `cargo check --workspace --all-targets` appear to fail.

</details>
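The suspected failure mode can be sketched with a toy enum. Everything below is hypothetical (`Kind` and `describe` are illustrative stand-ins, not `bevy_reflect` API): a feature-gated variant only exists when some crate in the workspace enables the feature, and Cargo's feature unification can turn it on for every crate that sees the enum, making an ungated match non-exhaustive.

```rust
// Toy stand-in for `ReflectRef`: one variant only exists when a feature is on.
enum Kind {
    Struct,
    Opaque,
    // Analogous to `ReflectRef::Function`, which only exists when
    // `bevy_reflect`'s `function` feature is enabled.
    #[cfg(feature = "function")]
    Function,
}

fn describe(kind: &Kind) -> &'static str {
    match kind {
        Kind::Struct => "struct",
        Kind::Opaque => "opaque",
        // The matching gate keeps this compiling both with and without the
        // feature; omitting this arm reproduces error E0004 once another
        // workspace member enables `function` and the features are unified.
        #[cfg(feature = "function")]
        Kind::Function => "function",
    }
}

fn main() {
    println!("{}", describe(&Kind::Struct));
}
```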
# Bevy Benchmarks

This is a crate with a collection of benchmarks for Bevy.

## Running benchmarks

Benchmarks can be run through Cargo:

```sh
# Run all benchmarks. (This will take a while!)
cargo bench -p benches

# Just compile the benchmarks, do not run them.
cargo bench -p benches --no-run

# Run the benchmarks for a specific crate. (See `Cargo.toml` for a complete list of the
# crates tracked.)
cargo bench -p benches --bench ecs

# Filter which benchmarks are run based on their name. This will only run benchmarks
# whose name contains "name_fragment".
cargo bench -p benches -- name_fragment

# List all available benchmarks.
cargo bench -p benches -- --list

# Save a baseline to be compared against later.
cargo bench -p benches -- --save-baseline before

# Compare the current benchmarks against a baseline to find performance gains and
# regressions.
cargo bench -p benches -- --baseline before
```
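Named bench targets such as `ecs` above come from `[[bench]]` entries in the crate manifest. As a sketch (the name and path below are illustrative, not necessarily the real layout), a Criterion bench target disables the default libtest harness so Criterion's own runner takes over:

```toml
# Hypothetical excerpt from benches/Cargo.toml; name and path are illustrative.
# `harness = false` disables libtest's built-in bench harness so the binary's
# own `main` (provided by Criterion's `criterion_main!`) runs instead.
[[bench]]
name = "ecs"
path = "benches/ecs/main.rs"
harness = false
```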
## Criterion
Bevy's benchmarks use [Criterion](https://crates.io/crates/criterion). If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the [Criterion.rs documentation](https://bheisler.github.io/criterion.rs/book/criterion_rs.html).
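For orientation, a minimal Criterion benchmark follows the `criterion_group!`/`criterion_main!` pattern from the Criterion book. The sketch below is illustrative, not taken from Bevy's benches (the workload and names are made up), and it needs `criterion` as a dev-dependency to build:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Illustrative workload; Bevy's real benches measure engine internals instead.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci(c: &mut Criterion) {
    // `black_box` keeps the compiler from constant-folding the input away.
    c.bench_function("fibonacci 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
```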