# Objective

- Allow combinator and pipe systems to delay validation of the second system, while still allowing the second system to be skipped. Fixes #18796
- Allow fallible systems to be used as one-shot systems, reporting errors to the error handler when run through commands. Fixes #19722
- Allow fallible systems to be used as run conditions, including when used with combinators. Alternative to #19580.
- Always validate parameters when calling the safe `run_without_applying_deferred`, `run`, and `run_readonly` methods on a `System`.

## Solution

Have `System::run_unsafe` return a `Result`.

We want pipe systems to run the first system before validating the second, since the first system may affect whether the second system has valid parameters. But if the second system is skipped, we have no output value to return! So pipe systems must return a `Result` that indicates whether the second system ran.

If we only made pipe systems have `Out = Result<B::Out>`, then chaining `a.pipe(b).pipe(c)` would become difficult: `c` would need to accept the `Result` from `a.pipe(b)`, which means it would likely need to return a `Result` itself, giving `Result<Result<Out>>`!

Instead, we make *all* systems return a `Result`. We move the handling of fallible systems from `IntoScheduleConfigs` and `IntoObserverSystem` to `SystemParamFunction` and `ExclusiveSystemParamFunction`, so that an infallible system can be wrapped before being passed to a combinator. As a side effect, this enables fallible systems to be used as run conditions and one-shot systems.

Now that the safe `run_without_applying_deferred`, `run`, and `run_readonly` methods return a `Result`, we can have them perform parameter validation themselves instead of requiring each caller to remember to do so. `run_unsafe` will continue to skip validation, since it is used in the multi-threaded executor when we want to validate and run in separate tasks.

Note that this makes type inference a little more brittle. A function that returns `Result<T>` can be treated either as a fallible system returning `T` or as an infallible system returning `Result<T>` (and supporting the latter is important for `pipe`-based error handling), so there are some cases where the output type of a system can no longer be inferred. Inference still works when adding a system directly to a schedule, since the output type is then fixed to `()` (or `bool` for run conditions), and when `pipe`ing into a system with a typed input parameter.

I used a dedicated `RunSystemError` for the error type instead of a plain `BevyError` so that skipping a system does not box an error or capture a backtrace.
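The nesting problem can be sketched outside Bevy with plain closures. This is a hypothetical standalone model, not Bevy's actual API: `pipe` and `RunError` here are illustrative stand-ins for the real pipe combinator and `RunSystemError`. Because *every* "system" in the sketch returns `Result`, the pipe combinator can propagate a skip with `?` and chaining stays flat instead of stacking `Result<Result<Out>>`:

```rust
// Hypothetical sketch (not Bevy's real API): a "system" is modeled as a
// closure returning Result, where Err(Skipped) means its parameters were
// invalid and it produced no output.
#[derive(Debug, PartialEq)]
enum RunError {
    Skipped, // no boxed error, no backtrace captured
}

// If only the pipe combinator wrapped its output in Result, then
// `a.pipe(b)` would output `Result<BOut>` and `a.pipe(b).pipe(c)` would
// force `c` to return `Result<Result<COut>>`. Making every system return
// Result keeps the chained output type flat.
fn pipe<A, B, AOut, BOut>(a: A, b: B) -> impl Fn() -> Result<BOut, RunError>
where
    A: Fn() -> Result<AOut, RunError>,
    B: Fn(AOut) -> Result<BOut, RunError>,
{
    move || {
        let mid = a()?; // run the first system; propagate a skip
        b(mid)          // only now does the second system validate and run
    }
}

fn main() {
    let a = || Ok::<u32, RunError>(1);
    let double = |x: u32| Ok::<u32, RunError>(x * 2);
    let skip = |_x: u32| Err::<u32, RunError>(RunError::Skipped);

    // Chaining stays flat: the output is Result<u32, RunError>, not nested.
    assert_eq!(pipe(pipe(a, double), double)(), Ok(4));
    // A skip in the second system surfaces as a plain Err, with no output.
    assert_eq!(pipe(a, skip)(), Err(RunError::Skipped));
}
```

The sketch also shows the inference hazard mentioned above: a closure returning `Result<u32, RunError>` could equally be read as a fallible system producing `u32`, so a real implementation needs the surrounding context (schedule, run condition, or a typed pipe input) to pin the output type down.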
# Bevy Benchmarks

This is a crate with a collection of benchmarks for Bevy.

## Running benchmarks

Benchmarks can be run through Cargo:

```sh
# Run all benchmarks. (This will take a while!)
cargo bench -p benches

# Just compile the benchmarks, do not run them.
cargo bench -p benches --no-run

# Run the benchmarks for a specific crate. (See `Cargo.toml` for a complete list of crates
# tracked.)
cargo bench -p benches --bench ecs

# Filter which benchmarks are run based on the name. This will only run benchmarks whose name
# contains "name_fragment".
cargo bench -p benches -- name_fragment

# List all available benchmarks.
cargo bench -p benches -- --list

# Save a baseline to be compared against later.
cargo bench -p benches -- --save-baseline before

# Compare the current benchmarks against a baseline to find performance gains and regressions.
cargo bench -p benches -- --baseline before
```

## Criterion

Bevy's benchmarks use Criterion. If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the Criterion.rs documentation.