Add benchmarks and compile_fail tests back to workspace (#16858)

# Objective

- Our benchmarks and `compile_fail` tests lag behind the rest of the
engine because they are not in the Cargo workspace, and so are not
checked by CI.
- Fixes #16801, please see it for further context!

## Solution

- Add benchmarks and `compile_fail` tests to the Cargo workspace.
- Fix any leftover formatting issues and documentation.

## Testing

- I think CI should catch most things!

## Questions

<details>
<summary>Outdated issue I was having with function reflection being
optional</summary>

The `reflection_types` example is failing in Rust-Analyzer for me, but
not in a normal `cargo check`.

```rust
error[E0004]: non-exhaustive patterns: `ReflectRef::Function(_)` not covered
   --> examples/reflection/reflection_types.rs:81:11
    |
81  |     match value.reflect_ref() {
    |           ^^^^^^^^^^^^^^^^^^^ pattern `ReflectRef::Function(_)` not covered
    |
note: `ReflectRef<'_>` defined here
   --> /Users/bdeep/dev/bevy/bevy/crates/bevy_reflect/src/kind.rs:178:1
    |
178 | pub enum ReflectRef<'a> {
    | ^^^^^^^^^^^^^^^^^^^^^^^
...
188 |     Function(&'a dyn Function),
    |     -------- not covered
    = note: the matched value is of type `ReflectRef<'_>`
help: ensure that all possible cases are being handled by adding a match arm with a wildcard pattern or an explicit pattern as shown
    |
126 ~         ReflectRef::Opaque(_) => {},
127 +         ReflectRef::Function(_) => todo!()
    |
```

I think it is because the following line is feature-gated:


cc0f6a8db4/examples/reflection/reflection_types.rs (L117-L122)

My theory is that the benchmarks enable `bevy_reflect`'s `functions`
feature, which gets merged with the rest of the features when RA checks
the workspace, but the `#[cfg(...)]` gate in the example doesn't detect
it:


cc0f6a8db4/benches/Cargo.toml (L19)

Any thoughts on how to fix this? It's not blocking, since the example
still compiles as normal; it's just that RA and the command `cargo check
--workspace --all-targets` appear to fail.

</details>
Commit 20277006ce (parent cf21d9a37e) by BD103, 2024-12-21 17:30:29 -05:00, committed via GitHub.
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database).
9 changed files with 55 additions and 38 deletions.


```diff
@@ -13,20 +13,20 @@ documentation = "https://docs.rs/bevy"
 rust-version = "1.83.0"

 [workspace]
-exclude = [
-    "benches",
-    "crates/bevy_derive/compile_fail",
-    "crates/bevy_ecs/compile_fail",
-    "crates/bevy_reflect/compile_fail",
-    "tools/compile_fail_utils",
-]
 resolver = "2"
 members = [
     # All of Bevy's official crates are within the `crates` folder!
     "crates/*",
+    # Several crates with macros have "compile fail" tests nested inside them, also known as UI
+    # tests, that verify diagnostic output does not accidentally change.
+    "crates/*/compile_fail",
     # Examples of compiling Bevy for mobile platforms.
     "examples/mobile",
-    "tools/ci",
-    "tools/build-templated-pages",
-    "tools/build-wasm-example",
-    "tools/example-showcase",
+    # Benchmarks
+    "benches",
+    # Internal tools that are not published.
+    "tools/*",
     # Bevy's error codes. This is a crate so we can automatically check all of the code blocks.
     "errors",
 ]
```


```diff
@@ -32,10 +32,6 @@ rand_chacha = "0.3"

 [target.'cfg(target_os = "linux")'.dev-dependencies]
 bevy_winit = { path = "../crates/bevy_winit", features = ["x11"] }

-[profile.release]
-opt-level = 3
-lto = true
-
 [lints.clippy]
 doc_markdown = "warn"
 manual_let_else = "warn"
```

````diff
@@ -1,26 +1,34 @@
 # Bevy Benchmarks

-This is a crate with a collection of benchmarks for Bevy, separate from the rest of the Bevy crates.
+This is a crate with a collection of benchmarks for Bevy.

-## Running the benchmarks
+## Running benchmarks

-1. Setup everything you need for Bevy with the [setup guide](https://bevyengine.org/learn/book/getting-started/setup/).
-2. Move into the `benches` directory (where this README is located).
+Benchmarks can be run through Cargo:

 ```sh
-bevy $ cd benches
-```
-
-3. Run the benchmarks with cargo (This will take a while)
-
-```sh
-bevy/benches $ cargo bench
-```
+# Run all benchmarks. (This will take a while!)
+cargo bench -p benches

-If you'd like to only compile the benchmarks (without running them), you can do that like this:
+# Just compile the benchmarks, do not run them.
+cargo bench -p benches --no-run

-```sh
-bevy/benches $ cargo bench --no-run
+# Run the benchmarks for a specific crate. (See `Cargo.toml` for a complete list of crates
+# tracked.)
+cargo bench -p benches --bench ecs
+
+# Filter which benchmarks are run based on the name. This will only run benchmarks whose name
+# contains "name_fragment".
+cargo bench -p benches -- name_fragment
+
+# List all available benchmarks.
+cargo bench -p benches -- --list
+
+# Save a baseline to be compared against later.
+cargo bench -p benches --save-baseline before
+
+# Compare the current benchmarks against a baseline to find performance gains and regressions.
+cargo bench -p benches --baseline before
 ```

 ## Criterion
````


```diff
@@ -91,7 +91,7 @@ fn hierarchy<C: Bundle + Default + GetTypeRegistration>(
                 .spawn(black_box(C::default()))
                 .set_parent(parent_id)
                 .id();
-            hierarchy_level.push(child_id)
+            hierarchy_level.push(child_id);
         }
     }
 }
```


```diff
@@ -217,7 +217,7 @@ fn dynamic_map_get(criterion: &mut Criterion) {
             bencher.iter(|| {
                 for i in 0..size as u64 {
                     let key = black_box(i);
-                    black_box(assert!(map.get(&key).is_some()));
+                    black_box(map.get(&key));
                 }
             });
         },
```
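The change above stops asserting inside the timed loop, so the benchmark measures only the lookup itself. The `black_box` pattern it relies on can be sketched with plain std Rust (the map contents here are made up for illustration):

```rust
use std::collections::HashMap;
use std::hint::black_box;

// Mirror of the benchmark's inner loop: `black_box` hides the key and the
// lookup result from the optimizer, so the `get` call cannot be folded
// away or hoisted out of the loop.
fn lookup_all(map: &HashMap<u64, u64>, size: u64) -> usize {
    let mut found = 0;
    for i in 0..size {
        let key = black_box(i);
        if black_box(map.get(&key)).is_some() {
            found += 1;
        }
    }
    found
}
```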


```diff
@@ -1,4 +1,6 @@
 fn main() -> compile_fail_utils::ui_test::Result<()> {
-    compile_fail_utils::test_multiple("derive_deref", ["tests/deref_derive", "tests/deref_mut_derive"])
+    compile_fail_utils::test_multiple(
+        "derive_deref",
+        ["tests/deref_derive", "tests/deref_mut_derive"],
+    )
 }
```


```diff
@@ -124,6 +124,11 @@ fn setup() {
         // implementation. Opaque is implemented for opaque types like String and Instant,
         // but also include primitive types like i32, usize, and f32 (despite not technically being opaque).
         ReflectRef::Opaque(_) => {}
+        #[allow(
+            unreachable_patterns,
+            reason = "This example cannot always detect when `bevy_reflect/functions` is enabled."
+        )]
+        _ => {}
     }

     let mut dynamic_list = DynamicList::default();
```
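The wildcard arm added above keeps the example's `match` exhaustive whether or not function reflection is enabled. A minimal, self-contained sketch of the same pattern, where the enum and feature name are hypothetical stand-ins for `ReflectRef` and `bevy_reflect/functions`:

```rust
// Hypothetical stand-in for `ReflectRef`: one variant only exists when a
// Cargo feature is enabled, so downstream matches cannot always know
// whether it needs to be covered.
enum Kind {
    Opaque(i32),
    #[cfg(feature = "functions")]
    Function(fn() -> i32),
}

fn describe(kind: &Kind) -> &'static str {
    match kind {
        Kind::Opaque(_) => "opaque",
        // A wildcard arm keeps the match exhaustive regardless of the
        // feature set; `unreachable_patterns` is allowed because this arm
        // is dead code when the feature is disabled.
        #[allow(unreachable_patterns)]
        _ => "other",
    }
}
```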


```diff
@@ -1,6 +1,6 @@
 # Helpers for compile fail tests

-This crate contains everything needed to set up compile tests for the Bevy repo. It, like all Bevy compile test crates, is excluded from the Bevy workspace. This is done to not fail [`crater` tests](https://github.com/rust-lang/crater) for Bevy. The `CI` workflow executes these tests on the stable rust toolchain see ([tools/ci](../../tools/ci/src/main.rs)).
+This crate contains everything needed to set up compile tests for the Bevy repo. The `CI` workflow executes these tests on the stable rust toolchain (see [tools/ci](../../tools/ci/src/main.rs)).

 ## Writing new test cases

@@ -34,7 +34,6 @@ This will be a rather involved process. You'll have to:
 - Create a folder called `tests` within the new crate.
 - Add a test runner file to this folder. The file should contain a main function calling one of the test functions defined in this crate.
 - Add a `[[test]]` table to the `Cargo.toml`. This table will need to contain `harness = false` and `name = <name of the test runner file you defined>`.
-- Add the path of the new crate under `[workspace].exclude` in the root [`Cargo.toml`](../../Cargo.toml).
 - Modify the [`CI`](../../tools/ci/) tool to run `cargo test` on this crate.
 - And finally, write your compile tests.
```
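As a concrete illustration of the `[[test]]` table described in the README above (the test name here is a hypothetical placeholder, assuming a runner file at `tests/compile_fail.rs`):

```toml
# In the new compile-fail crate's Cargo.toml. `harness = false` hands
# control to the runner's `main` function instead of the libtest harness.
[[test]]
name = "compile_fail"
harness = false
```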


```diff
@@ -109,10 +109,17 @@ pub fn test_with_multiple_configs(
     test_name: impl Into<String>,
     configs: impl IntoIterator<Item = ui_test::Result<Config>>,
 ) -> ui_test::Result<()> {
-    let configs = configs.into_iter().collect::<ui_test::Result<Vec<Config>>>()?;
+    let configs = configs
+        .into_iter()
+        .collect::<ui_test::Result<Vec<Config>>>()?;

     let emitter: Box<dyn StatusEmitter + Send> = if env::var_os("CI").is_some() {
-        Box::new((Text::verbose(), Gha::<true> { name: test_name.into() }))
+        Box::new((
+            Text::verbose(),
+            Gha::<true> {
+                name: test_name.into(),
+            },
+        ))
     } else {
         Box::new(Text::quiet())
     };
```
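The `collect::<ui_test::Result<Vec<Config>>>()` call reformatted above relies on `Result`'s `FromIterator` impl, which short-circuits on the first error. A minimal sketch with plain std types (the function name and types are illustrative):

```rust
// Collecting an iterator of Results into a Result<Vec<_>, _> succeeds only
// if every item is Ok, and returns the first Err otherwise.
fn gather(items: Vec<Result<u32, String>>) -> Result<Vec<u32>, String> {
    items.into_iter().collect()
}
```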