# Bevy Benchmarks
This is a crate with a collection of benchmarks for Bevy.
## Running benchmarks
Benchmarks can be run through Cargo:
```sh
# Run all benchmarks. (This will take a while!)
cargo bench -p benches

# Just compile the benchmarks, do not run them.
cargo bench -p benches --no-run

# Run the benchmarks for a specific crate. (See `Cargo.toml` for the complete list of crates
# tracked.)
cargo bench -p benches --bench ecs

# Filter which benchmarks are run based on the name. This will only run benchmarks whose name
# contains "name_fragment".
cargo bench -p benches -- name_fragment

# List all available benchmarks.
cargo bench -p benches -- --list

# Save a baseline to be compared against later. (Criterion flags go after `--`.)
cargo bench -p benches -- --save-baseline before

# Compare the current benchmarks against a baseline to find performance gains and regressions.
cargo bench -p benches -- --baseline before
```
## Criterion
Bevy's benchmarks use Criterion. If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the Criterion.rs documentation.
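
For a rough idea of what a benchmark looks like, here is a minimal Criterion sketch. It is only illustrative: the `fibonacci` function and the `"fibonacci 20"` benchmark name are placeholders and are not taken from Bevy's actual benchmark suite.

```rust
use std::hint::black_box;

use criterion::{criterion_group, criterion_main, Criterion};

// A deliberately slow function to measure; purely illustrative.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci(c: &mut Criterion) {
    // `black_box` keeps the compiler from constant-folding the input away,
    // so the measured work is not optimized out.
    c.bench_function("fibonacci 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
```

Benchmark files like this are registered as `[[bench]]` targets in `Cargo.toml` with `harness = false` (Criterion supplies its own test harness), which is what makes them selectable with `--bench`.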