# Objective

- Part of #16647.
- This PR goes through our `ray_cast::ray_mesh_intersection()` benchmarks and overhauls them with more comments and better extensibility. The code is also a lot less duplicated!

## Solution

- Create a `Benchmarks` enum that describes all of the different kinds of scenarios we want to benchmark. (A hedged sketch of this shape appears at the end of this description.)
- Merge all of our existing benchmark functions into a single one, `bench()`, which sets up the scenarios all at once.
- Add comments to `mesh_creation()` and `ptoxznorm()`, and move some lines around to be a bit clearer.
- Make the benchmarks use the new `bench!` macro, as part of #16647.
- Rename many functions and benchmarks to be clearer.

## For reviewers

I split this PR up into several, easier-to-digest commits. You might find it easier to review by looking through each commit instead of the complete file changes.

None of my changes modify the behavior of the benchmarks; they still track the exact same test cases. There shouldn't be significant changes in benchmark performance before and after this PR.

## Testing

- List all picking benchmarks: `cargo bench -p benches --bench picking -- --list`
- Run the benchmarks once in debug mode: `cargo test -p benches --bench picking`
- Run the benchmarks and analyze their performance: `cargo bench -p benches --bench picking`
  - Check out the generated HTML report in `./target/criterion/report/index.html` once you're done!

---

## Showcase

List of all picking benchmarks, after having been renamed:

<img width="524" alt="image" src="https://github.com/user-attachments/assets/a1b53daf-4a8b-4c45-a25a-c6306c7175d1" />

Example report for `picking::ray_mesh_intersection::cull_intersect/100_vertices`:

<img width="992" alt="image" src="https://github.com/user-attachments/assets/a1aaf53f-ce21-4bef-89c4-b982bb158f5d" />
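
As referenced under **Solution**, here is a minimal, hypothetical sketch of the `Benchmarks`-enum-plus-single-`bench()` shape, written against Criterion. Everything except the `cull_intersect` name (visible in the showcase above) is an assumption for illustration: the other variant names, the vertex counts, and the placeholder work inside the timed closure are not the PR's actual code, which builds real meshes and calls `ray_cast::ray_mesh_intersection()`.

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

/// Illustrative only: the kinds of scenarios a single `bench()` can cover.
#[derive(Clone, Copy)]
enum Benchmarks {
    /// Ray intersects the mesh with backface culling enabled.
    CullIntersect,
    /// Ray intersects the mesh without backface culling.
    NoCullIntersect,
    /// Ray misses the mesh entirely.
    Miss,
}

impl Benchmarks {
    const ALL: [Self; 3] = [Self::CullIntersect, Self::NoCullIntersect, Self::Miss];

    /// Name used to build the Criterion benchmark ID.
    fn name(self) -> &'static str {
        match self {
            Self::CullIntersect => "cull_intersect",
            Self::NoCullIntersect => "no_cull_intersect",
            Self::Miss => "miss",
        }
    }
}

/// Single entry point that registers every scenario, replacing the old
/// one-function-per-scenario setup.
fn bench(c: &mut Criterion) {
    let mut group = c.benchmark_group("picking::ray_mesh_intersection");

    for scenario in Benchmarks::ALL {
        for vertex_count in [100_u32, 10_000, 1_000_000] {
            let id = BenchmarkId::new(scenario.name(), format!("{vertex_count}_vertices"));
            group.bench_function(id, |b| {
                b.iter(|| {
                    // Placeholder work; the real benchmark would build a mesh up front
                    // and time `ray_cast::ray_mesh_intersection()` here.
                    std::hint::black_box((scenario.name(), vertex_count))
                });
            });
        }
    }

    group.finish();
}

criterion_group!(benches, bench);
criterion_main!(benches);
```

The actual scenario names and parameter sweep can be confirmed with the `--list` command from the Testing section above.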