# Objective

- Part of #16647.
- This PR goes through our `ray_cast::ray_mesh_intersection()` benchmarks and overhauls them with more comments and better extensibility. The code is also a lot less duplicated!

## Solution

- Create a `Benchmarks` enum that describes all of the different kinds of scenarios we want to benchmark.
- Merge all of our existing benchmark functions into a single one, `bench()`, which sets up the scenarios all at once.
- Add comments to `mesh_creation()` and `ptoxznorm()`, and move some lines around to be a bit clearer.
- Make the benchmarks use the new `bench!` macro, as part of #16647.
- Rename many functions and benchmarks to be clearer.

## For reviewers

I split this PR up into several easier-to-digest commits. You might find it easier to review by looking through each commit instead of the complete file changes.

None of my changes modifies the behavior of the benchmarks; they still track the exact same test cases. There shouldn't be significant changes in benchmark performance before and after this PR.

## Testing

- List all picking benchmarks: `cargo bench -p benches --bench picking -- --list`
- Run the benchmarks once in debug mode: `cargo test -p benches --bench picking`
- Run the benchmarks and analyze their performance: `cargo bench -p benches --bench picking`
  - Check out the generated HTML report in `./target/criterion/report/index.html` once you're done!

---

## Showcase

List of all picking benchmarks, after having been renamed:

<img width="524" alt="image" src="https://github.com/user-attachments/assets/a1b53daf-4a8b-4c45-a25a-c6306c7175d1" />

Example report for `picking::ray_mesh_intersection::cull_intersect/100_vertices`:

<img width="992" alt="image" src="https://github.com/user-attachments/assets/a1aaf53f-ce21-4bef-89c4-b982bb158f5d" />
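The enum-driven setup described in the Solution section can be sketched roughly like this. Note the variant names, the vertex counts other than 100, and everything inside `bench()` are illustrative guesses, not the PR's actual code; the real benchmark registers each case with Criterion rather than printing IDs:

```rust
// Hypothetical sketch: one `Benchmarks` enum enumerates the scenarios, and a
// single `bench()` loops over every (scenario, mesh size) combination instead
// of duplicating a hand-written function per case.
#[derive(Clone, Copy)]
enum Benchmarks {
    CullIntersect,
    CullMiss,
    NoCullIntersect,
    NoCullMiss,
}

impl Benchmarks {
    const ALL: [Benchmarks; 4] = [
        Benchmarks::CullIntersect,
        Benchmarks::CullMiss,
        Benchmarks::NoCullIntersect,
        Benchmarks::NoCullMiss,
    ];

    /// Name fragment used to build the Criterion benchmark ID.
    fn name(self) -> &'static str {
        match self {
            Benchmarks::CullIntersect => "cull_intersect",
            Benchmarks::CullMiss => "cull_miss",
            Benchmarks::NoCullIntersect => "no_cull_intersect",
            Benchmarks::NoCullMiss => "no_cull_miss",
        }
    }
}

fn bench() {
    for scenario in Benchmarks::ALL {
        // Vertex counts here are invented for the sketch.
        for vertex_count in [100_u32, 10_000, 1_000_000] {
            let id = format!("{}/{}_vertices", scenario.name(), vertex_count);
            // The real code would create the mesh and hand a ray-cast closure
            // to Criterion here; this sketch only prints the benchmark ID.
            println!("{id}");
        }
    }
}

fn main() {
    bench();
}
```

The payoff of this shape is that adding a new scenario means adding one enum variant, not copying an entire benchmark function.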
# Bevy Benchmarks
This is a crate with a collection of benchmarks for Bevy.
## Running benchmarks
Benchmarks can be run through Cargo:
```sh
# Run all benchmarks. (This will take a while!)
cargo bench -p benches

# Just compile the benchmarks, do not run them.
cargo bench -p benches --no-run

# Run the benchmarks for a specific crate. (See `Cargo.toml` for a complete list of crates
# tracked.)
cargo bench -p benches --bench ecs

# Filter which benchmarks are run based on the name. This will only run benchmarks whose name
# contains "name_fragment".
cargo bench -p benches -- name_fragment

# List all available benchmarks.
cargo bench -p benches -- --list

# Save a baseline to be compared against later.
cargo bench -p benches -- --save-baseline before

# Compare the current benchmarks against a baseline to find performance gains and regressions.
cargo bench -p benches -- --baseline before
```
## Criterion
Bevy's benchmarks use Criterion. If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the Criterion.rs documentation.
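As a rough intuition for what a benchmark harness does, the core idea is to run the code under test many times and aggregate the timings. This is a greatly simplified, stdlib-only illustration, not Criterion's actual API; Criterion additionally performs warm-up, outlier detection, statistical analysis, and report generation:

```rust
use std::time::{Duration, Instant};

/// Run `f` repeatedly and return the average duration per iteration.
/// (Illustrative only; Criterion's real entry point is `bench_function`.)
fn time_it<F: FnMut()>(iterations: u32, mut f: F) -> Duration {
    let start = Instant::now();
    for _ in 0..iterations {
        f();
    }
    start.elapsed() / iterations
}

fn main() {
    let avg = time_it(1_000, || {
        // Stand-in workload; a real picking benchmark would ray-cast a mesh.
        let v: Vec<u64> = (0..100).collect();
        // `black_box` keeps the optimizer from deleting the work.
        std::hint::black_box(v.iter().sum::<u64>());
    });
    println!("average iteration time: {avg:?}");
}
```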