31 Commits

38c3423693
Event Split: Event, EntityEvent, and BufferedEvent (#19647)
# Objective Closes #19564. The current `Event` trait looks like this: ```rust pub trait Event: Send + Sync + 'static { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` The `Event` trait is used by both buffered events (`EventReader`/`EventWriter`) and observer events. If they are observer events, they can optionally be targeted at specific `Entity`s or `ComponentId`s, and can even be propagated to other entities. However, there has long been a desire to split the trait semantically for a variety of reasons, see #14843, #14272, and #16031 for discussion. Some reasons include: - It's very uncommon to use a single event type as both a buffered event and targeted observer event. They are used differently and tend to have distinct semantics. - A common footgun is using buffered events with observers or event readers with observer events, as there is no type-level error that prevents this kind of misuse. - #19440 made `Trigger::target` return an `Option<Entity>`. This *seriously* hurts ergonomics for the general case of entity observers, as you need to `.unwrap()` each time. If we could statically determine whether the event is expected to have an entity target, this would be unnecessary. There's really two main ways that we can categorize events: push vs. pull (i.e. "observer event" vs. "buffered event") and global vs. targeted: | | Push | Pull | | ------------ | --------------- | --------------------------- | | **Global** | Global observer | `EventReader`/`EventWriter` | | **Targeted** | Entity observer | - | There are many ways to approach this, each with their tradeoffs. Ultimately, we kind of want to split events both ways: - A type-level distinction between observer events and buffered events, to prevent people from using the wrong kind of event in APIs - A statically designated entity target for observer events to avoid accidentally using untargeted events for targeted APIs This PR achieves these goals by splitting event traits into `Event`, `EntityEvent`, and `BufferedEvent`, with `Event` being the shared trait implemented by all events. ## `Event`, `EntityEvent`, and `BufferedEvent` `Event` is now a very simple trait shared by all events. ```rust pub trait Event: Send + Sync + 'static { // Required for observer APIs fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` You can call `trigger` for *any* event, and use a global observer for listening to the event. ```rust #[derive(Event)] struct Speak { message: String, } // ... app.add_observer(|trigger: On<Speak>| { println!("{}", trigger.message); }); // ... commands.trigger(Speak { message: "Y'all like these reworked events?".to_string(), }); ``` To allow an event to be targeted at entities and even propagated further, you can additionally implement the `EntityEvent` trait: ```rust pub trait EntityEvent: Event { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; } ``` This lets you call `trigger_targets`, and to use targeted observer APIs like `EntityCommands::observe`: ```rust #[derive(Event, EntityEvent)] #[entity_event(traversal = &'static ChildOf, auto_propagate)] struct Damage { amount: f32, } // ... let enemy = commands.spawn((Enemy, Health(100.0))).id(); // Spawn some armor as a child of the enemy entity. // When the armor takes damage, it will bubble the event up to the enemy. 
let armor_piece = commands .spawn((ArmorPiece, Health(25.0), ChildOf(enemy))) .observe(|trigger: On<Damage>, mut query: Query<&mut Health>| { // Note: `On::target` only exists because this is an `EntityEvent`. let mut health = query.get(trigger.target()).unwrap(); health.0 -= trigger.amount(); }); commands.trigger_targets(Damage { amount: 10.0 }, armor_piece); ``` > [!NOTE] > You *can* still also trigger an `EntityEvent` without targets using `trigger`. We probably *could* make this an either-or thing, but I'm not sure that's actually desirable. To allow an event to be used with the buffered API, you can implement `BufferedEvent`: ```rust pub trait BufferedEvent: Event {} ``` The event can then be used with `EventReader`/`EventWriter`: ```rust #[derive(Event, BufferedEvent)] struct Message(String); fn write_hello(mut writer: EventWriter<Message>) { writer.write(Message("I hope these examples are alright".to_string())); } fn read_messages(mut reader: EventReader<Message>) { // Process all buffered events of type `Message`. for Message(message) in reader.read() { println!("{message}"); } } ``` In summary: - Need a basic event you can trigger and observe? Derive `Event`! - Need the event to be targeted at an entity? Derive `EntityEvent`! - Need the event to be buffered and support the `EventReader`/`EventWriter` API? Derive `BufferedEvent`! ## Alternatives I'll now cover some of the alternative approaches I have considered and briefly explored. I made this section collapsible since it ended up being quite long :P <details> <summary>Expand this to see alternatives</summary> ### 1. Unified `Event` Trait One option is not to have *three* separate traits (`Event`, `EntityEvent`, `BufferedEvent`), and to instead just use associated constants on `Event` to determine whether an event supports targeting and buffering or not: ```rust pub trait Event: Send + Sync + 'static { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; const TARGETED: bool = false; const BUFFERED: bool = false; fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` Methods can then use bounds like `where E: Event<TARGETED = true>` or `where E: Event<BUFFERED = true>` to limit APIs to specific kinds of events. This would keep everything under one `Event` trait, but I don't think it's necessarily a good idea. It makes APIs harder to read, and docs can't easily refer to specific types of events. You can also create weird invariants: what if you specify `TARGETED = false`, but have `Traversal` and/or `AUTO_PROPAGATE` enabled? ### 2. `Event` and `Trigger` Another option is to only split the traits between buffered events and observer events, since that is the main thing people have been asking for, and they have the largest API difference. If we did this, I think we would need to make the terms *clearly* separate. We can't really use `Event` and `BufferedEvent` as the names, since it would be strange that `BufferedEvent` doesn't implement `Event`. Something like `ObserverEvent` and `BufferedEvent` could work, but it'd be more verbose. For this approach, I would instead keep `Event` for the current `EventReader`/`EventWriter` API, and call the observer event a `Trigger`, since the "trigger" terminology is already used in the observer context within Bevy (both as a noun and a verb). 
This is also what a long [bikeshed on Discord](https://discord.com/channels/691052431525675048/749335865876021248/1298057661878898791) seemed to land on at the end of last year. ```rust // For `EventReader`/`EventWriter` pub trait Event: Send + Sync + 'static {} // For observers pub trait Trigger: Send + Sync + 'static { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; const TARGETED: bool = false; fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` The problem is that "event" is just a really good term for something that "happens". Observers are rapidly becoming the more prominent API, so it'd be weird to give them the `Trigger` name and leave the good `Event` name for the less common API. So, even though a split like this seems neat on the surface, I think it ultimately wouldn't really work. We want to keep the `Event` name for observer events, and there is no good alternative for the buffered variant. (`Message` was suggested, but saying stuff like "sends a collision message" is weird.) ### 3. `GlobalEvent` + `TargetedEvent` What if instead of focusing on the buffered vs. observed split, we *only* make a distinction between global and targeted events? ```rust // A shared event trait to allow global observers to work pub trait Event: Send + Sync + 'static { fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } // For buffered events and non-targeted observer events pub trait GlobalEvent: Event {} // For targeted observer events pub trait TargetedEvent: Event { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; } ``` This is actually the first approach I implemented, and it has the neat characteristic that you can only use non-targeted APIs like `trigger` with a `GlobalEvent` and targeted APIs like `trigger_targets` with a `TargetedEvent`. You have full control over whether the entity should or should not have a target, as they are fully distinct at the type-level. However, there's a few problems: - There is no type-level indication of whether a `GlobalEvent` supports buffered events or just non-targeted observer events - An `Event` on its own does literally nothing, it's just a shared trait required to make global observers accept both non-targeted and targeted events - If an event is both a `GlobalEvent` and `TargetedEvent`, global observers again have ambiguity on whether an event has a target or not, undermining some of the benefits - The names are not ideal ### 4. `Event` and `EntityEvent` We can fix some of the problems of Alternative 3 by accepting that targeted events can also be used in non-targeted contexts, and simply having the `Event` and `EntityEvent` traits: ```rust // For buffered events and non-targeted observer events pub trait Event: Send + Sync + 'static { fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } // For targeted observer events pub trait EntityEvent: Event { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; } ``` This is essentially identical to this PR, just without a dedicated `BufferedEvent`. The remaining major "problem" is that there is still zero type-level indication of whether an `Event` event *actually* supports the buffered API. This leads us to the solution proposed in this PR, using `Event`, `EntityEvent`, and `BufferedEvent`. 
</details> ## Conclusion The `Event` + `EntityEvent` + `BufferedEvent` split proposed in this PR aims to solve all the common problems with Bevy's current event model while keeping the "weirdness" factor minimal. It splits in terms of both the push vs. pull *and* global vs. targeted aspects, while maintaining a shared concept for an "event". ### Why I Like This - The term "event" remains as a single concept for all the different kinds of events in Bevy. - Despite all event types being "events", they use fundamentally different APIs. Instead of assuming that you can use an event type with any pattern (when only one is typically supported), you explicitly opt in to each one with dedicated traits. - Using separate traits for each type of event helps with documentation and clearer function signatures. - I can safely make assumptions on expected usage. - If I see that an event is an `EntityEvent`, I can assume that I can use `observe` on it and get targeted events. - If I see that an event is a `BufferedEvent`, I can assume that I can use `EventReader` to read events. - If I see both `EntityEvent` and `BufferedEvent`, I can assume that both APIs are supported. In summary: This allows for a unified concept for events, while limiting the different ways to use them with opt-in traits. No more guess-work involved when using APIs. ### Problems? - Because `BufferedEvent` implements `Event` (for more consistent semantics etc.), you can still use all buffered events for non-targeted observers. I think this is fine/good. The important part is that if you see that an event implements `BufferedEvent`, you know that the `EventReader`/`EventWriter` API should be supported. Whether it *also* supports other APIs is secondary. - I currently only support `trigger_targets` for an `EntityEvent`. However, you can technically target components too, without targeting any entities. I consider that such a niche and advanced use case that it's not a huge problem to only support it for `EntityEvent`s, but we could also split `trigger_targets` into `trigger_entities` and `trigger_components` if we wanted to (or implement components as entities :P). - You can still trigger an `EntityEvent` *without* targets. I consider this correct, since `Event` implements the non-targeted behavior, and it'd be weird if implementing another trait *removed* behavior. However, it does mean that global observers for entity events can technically return `Entity::PLACEHOLDER` again (since I got rid of the `Option<Entity>` added in #19440 for ergonomics). I think that's enough of an edge case that it's not a huge problem, but it is worth keeping in mind. - ~~Deriving both `EntityEvent` and `BufferedEvent` for the same type currently duplicates the `Event` implementation, so you instead need to manually implement one of them.~~ Changed to always requiring `Event` to be derived. ## Related Work There are plans to implement multi-event support for observers, especially for UI contexts. [Cart's example](https://github.com/bevyengine/bevy/issues/14649#issuecomment-2960402508) API looked like this: ```rust // Truncated for brevity trigger: Trigger<( OnAdd<Pressed>, OnRemove<Pressed>, OnAdd<InteractionDisabled>, OnRemove<InteractionDisabled>, OnInsert<Hovered>, )>, ``` I believe this shouldn't be in conflict with this PR. 
If anything, this PR might *help* achieve the multi-event pattern for entity observers with fewer footguns: by statically enforcing that all of these events are `EntityEvent`s in the context of `EntityCommands::observe`, we can avoid misuse or weird cases where *some* events inside the trigger are targeted while others are not. |
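
As a small addendum to the conclusion above, here is a sketch combining the observer and buffered APIs for a single event type, assuming the derive syntax shown earlier (the `Explosion` type and systems are made-up names):

```rust
use bevy::prelude::*;

// Opts into both the observer API (`Event`) and the buffered API
// (`BufferedEvent`); `EntityEvent` could be added as well if targeting is needed.
#[derive(Event, BufferedEvent)]
struct Explosion {
    strength: f32,
}

fn produce(mut commands: Commands, mut writer: EventWriter<Explosion>) {
    // Buffered path: read later with `EventReader<Explosion>`.
    writer.write(Explosion { strength: 1.0 });
    // Observer path: handled by any global observer for `Explosion`.
    commands.trigger(Explosion { strength: 2.0 });
}

fn consume(mut reader: EventReader<Explosion>) {
    for explosion in reader.read() {
        println!("boom: {}", explosion.strength);
    }
}
```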

e9a0ef49f9
Rename bevy_platform_support to bevy_platform (#18813)
# Objective The goal of `bevy_platform_support` is to provide a set of platform agnostic APIs, alongside platform-specific functionality. This is a high traffic crate (providing things like HashMap and Instant). Especially in light of https://github.com/bevyengine/bevy/discussions/18799, it deserves a friendlier / shorter name. Given that it hasn't had a full release yet, getting this change in before Bevy 0.16 makes sense. ## Solution - Rename `bevy_platform_support` to `bevy_platform`. |
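
A minimal sketch of what the rename means for imports (paths assumed from the old and new crate names):

```rust
// Before the rename:
// use bevy_platform_support::time::Instant;

// After the rename:
use bevy_platform::time::Instant;

fn time_it(f: impl FnOnce()) -> core::time::Duration {
    let start = Instant::now();
    f();
    start.elapsed()
}
```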

44ad3bf62b
Move Resource trait to its own file (#17469)
# Objective `bevy_ecs`'s `system` module is something of a grab bag, and *very* large. This is particularly true for the `system_param` module, which is more than 2k lines long! While it could be defensible to put `Res` and `ResMut` there (lol no they're in change_detection.rs, obviously), it doesn't make any sense to put the `Resource` trait there. This is confusing to navigate (and painful to work on and review). ## Solution - Create a root level `bevy_ecs/resource.rs` module to mirror `bevy_ecs/component.rs` - move the `Resource` trait to that module - move the `Resource` derive macro to that module as well (Rust really likes when you pun on the names of the derive macro and trait and put them in the same path) - fix all of the imports ## Notes to reviewers - We could probably move more stuff into here, but I wanted to keep this PR as small as possible given the absurd level of import changes. - This PR is ground work for my upcoming attempts to store resource data on components (resources-as-entities). Splitting this code out will make the work and review a bit easier, and is the sort of overdue refactor that's good to do as part of more meaningful work. ## Testing cargo build works! ## Migration Guide `bevy_ecs::system::Resource` has been moved to `bevy_ecs::resource::Resource`. |
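
A minimal sketch of the import change from the migration guide (the `FrameCount` resource is a made-up example):

```rust
// Old path:
// use bevy_ecs::system::Resource;

// New path:
use bevy_ecs::resource::Resource;

#[derive(Resource, Default)]
struct FrameCount(u64);
```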

a64446b77e
Create bevy_platform_support Crate (#17250)
# Objective - Contributes to #16877 ## Solution - Initial creation of `bevy_platform_support` crate. - Moved `bevy_utils::Instant` into new `bevy_platform_support` crate. - Moved `portable-atomic`, `portable-atomic-util`, and `critical-section` into new `bevy_platform_support` crate. ## Testing - CI --- ## Showcase Instead of needing code like this to import an `Arc`: ```rust #[cfg(feature = "portable-atomic")] use portable_atomic_util::Arc; #[cfg(not(feature = "portable-atomic"))] use alloc::sync::Arc; ``` We can now use: ```rust use bevy_platform_support::sync::Arc; ``` This applies to many other types, but the goal is overall the same: allowing crates to use `std`-like types without the boilerplate of conditional compilation and platform-dependencies. ## Migration Guide - Replace imports of `bevy_utils::Instant` with `bevy_platform_support::time::Instant` - Replace imports of `bevy::utils::Instant` with `bevy::platform_support::time::Instant` ## Notes - `bevy_platform_support` hasn't been reserved on `crates.io` - ~~`bevy_platform_support` is not re-exported from `bevy` at this time. It may be worthwhile exporting this crate, but I am unsure of a reasonable name to export it under (`platform_support` may be a bit wordy for user-facing).~~ - I've included an implementation of `Instant` which is suitable for `no_std` platforms that are not Wasm for the sake of eliminating feature gates around its use. It may be a controversial inclusion, so I'm happy to remove it if required. - There are many other items (`spin`, `bevy_utils::Sync(Unsafe)Cell`, etc.) which should be added to this crate. I have kept the initial scope small to demonstrate utility without making this too unwieldy. --------- Co-authored-by: TimJentzsch <TimJentzsch@users.noreply.github.com> Co-authored-by: Chris Russell <8494645+chescock@users.noreply.github.com> Co-authored-by: François Mockers <francois.mockers@vleue.com> |

3c829d7f68
Remove everything except Instant from bevy_utils::time (#17158)
# Objective - Contributes to #11478 - Contributes to #16877 ## Solution - Removed everything except `Instant` from `bevy_utils::time` ## Testing - CI --- ## Migration Guide If you relied on any of the following from `bevy_utils::time`: - `Duration` - `TryFromFloatSecsError` Import these directly from `core::time` regardless of platform target (WASM, mobile, etc.) If you relied on any of the following from `bevy_utils::time`: - `SystemTime` - `SystemTimeError` Instead import these directly from either `std::time` or `web_time` as appropriate for your target platform. ## Notes `Duration` and `TryFromFloatSecsError` are both re-exports from `core::time` regardless of whether they are used from `web_time` or `std::time`, so there is no value gained from re-exporting them from `bevy_utils::time` as well. As for `SystemTime` and `SystemTimeError`, no Bevy internal crates or examples rely on these types. Since Bevy doesn't have a `Time<Wall>` resource for interacting with wall-time (and likely shouldn't need one), I think removing these from `bevy_utils` entirely and waiting for a use-case to justify inclusion is a reasonable path forward. |
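
A short sketch of the resulting imports; the `web_time` fallback shown here is one reasonable choice for Wasm, per the guide above:

```rust
use core::time::Duration;

// `SystemTime` comes from `std::time` on native targets or `web_time` on Wasm:
#[cfg(not(target_arch = "wasm32"))]
use std::time::SystemTime;
#[cfg(target_arch = "wasm32")]
use web_time::SystemTime;

// `Duration` is identical on every target; `SystemTime` needs the
// platform-appropriate source picked above.
fn within_last_second(earlier: SystemTime) -> bool {
    earlier.elapsed().map_or(false, |d| d < Duration::from_secs(1))
}
```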

30d84519a2
Use en-us locale for typos (#16037)
# Objective Bevy seems to want to standardize on "American English" spellings. Not sure if this is laid out anywhere in writing, but see also #15947. While perusing the docs for `typos`, I noticed that it has a `locale` config option and tried it out. ## Solution Switch to `en-us` locale in the `typos` config and run `typos -w` ## Migration Guide The following methods or fields have been renamed from `*dependants*` to `*dependents*`. - `ProcessorAssetInfo::dependants` - `ProcessorAssetInfos::add_dependant` - `ProcessorAssetInfos::non_existent_dependants` - `AssetInfo::dependants_waiting_on_load` - `AssetInfo::dependants_waiting_on_recursive_dep_load` - `AssetInfos::loader_dependants` - `AssetInfos::remove_dependants_and_labels` |

7482a0d26d
aligning public apis of Time,Timer and Stopwatch (#15962)
Fixes #15834

## Migration Guide

The APIs of `Time`, `Timer` and `Stopwatch` have been cleaned up for consistency with each other and the standard library's `Duration` type. The following methods have been renamed:

- `Stopwatch::paused` -> `Stopwatch::is_paused`
- `Time::elapsed_seconds` -> `Time::elapsed_secs` (including the `_f64` and `_wrapped` variants)
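
A tiny sketch of the renamed calls (the system and its `Local<Stopwatch>` are made up for illustration):

```rust
use bevy::prelude::*;
use bevy::time::Stopwatch;

fn report(time: Res<Time>, stopwatch: Local<Stopwatch>) {
    // Renamed from `Time::elapsed_seconds`:
    let secs = time.elapsed_secs();
    // Renamed from `Stopwatch::paused`:
    let paused = stopwatch.is_paused();
    println!("{secs:.2}s elapsed, stopwatch paused: {paused}");
}
```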

d70595b667
Add core and alloc over std Lints (#15281)
# Objective - Fixes #6370 - Closes #6581 ## Solution - Added the following lints to the workspace: - `std_instead_of_core` - `std_instead_of_alloc` - `alloc_instead_of_core` - Used `cargo +nightly fmt` with [item level use formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Item%5C%3A) to split all `use` statements into single items. - Used `cargo clippy --workspace --all-targets --all-features --fix --allow-dirty` to _attempt_ to resolve the new linting issues, and intervened where the lint was unable to resolve the issue automatically (usually due to needing an `extern crate alloc;` statement in a crate root). - Manually removed certain uses of `std` where negative feature gating prevented `--all-features` from finding the offending uses. - Used `cargo +nightly fmt` with [crate level use formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Crate%5C%3A) to re-merge all `use` statements matching Bevy's previous styling. - Manually fixed cases where the `fmt` tool could not re-merge `use` statements due to conditional compilation attributes. ## Testing - Ran CI locally ## Migration Guide The MSRV is now 1.81. Please update to this version or higher. ## Notes - This is a _massive_ change to try and push through, which is why I've outlined the semi-automatic steps I used to create this PR, in case this fails and someone else tries again in the future. - Making this change has no impact on user code, but does mean Bevy contributors will be warned to use `core` and `alloc` instead of `std` where possible. - This lint is a critical first step towards investigating `no_std` options for Bevy. --------- Co-authored-by: François Mockers <francois.mockers@vleue.com> |
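
A sketch of the kind of change these lints push for in crate code (illustrative only; the helper function is made up):

```rust
// `alloc` must be declared explicitly in a `no_std`-leaning crate root:
extern crate alloc;

// Preferred under the new lints:
use alloc::{string::String, vec::Vec};
use core::time::Duration;

// `std_instead_of_core` / `std_instead_of_alloc` would flag the equivalents:
// use std::string::String;
// use std::time::Duration;

fn tick_labels(step: Duration, count: usize) -> Vec<String> {
    (1..=count).map(|i| format!("{:?}", step * i as u32)).collect()
}
```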

de7ff295e1
Make bevy_time optionally depend on bevy_reflect (#13263)
# Objective Fixes #13246. |

8316166622
Fix uses of "it's" vs "its". (#13033)
Grammar changes only. |

fe28e0ec32
Add First/Pre/Post/Last schedules to the Fixed timestep (#10977)
Fixes https://github.com/bevyengine/bevy/issues/10974

# Objective

Duplicate the ordering logic of the `Main` schedule into the `FixedMain` schedule.

---

## Changelog

- `FixedUpdate` is no longer the main schedule run in `RunFixedUpdateLoop`; `FixedMain` has replaced it and has a similar structure to `Main`.

## Migration Guide

- Usage of `RunFixedUpdateLoop` should be renamed to `RunFixedMainLoop`.
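
Since `FixedMain` mirrors `Main`, fixed-timestep work can now be spread across the matching sub-schedules. A short sketch (schedule names taken from the `Main` analogy in the title; the systems are made up):

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(FixedPreUpdate, read_queued_input)
        .add_systems(FixedUpdate, step_simulation)
        .add_systems(FixedPostUpdate, write_back_results)
        .run();
}

fn read_queued_input() {}
fn step_simulation() {}
fn write_back_results() {}
```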

32a5c7db35
docs: use read instead of deprecated iter (#10376)
https://docs.rs/bevy/latest/bevy/ecs/event/struct.EventReader.html#method.iter # Objective - Remove doc example using deprecated `EventReader.iter` method ## Solution - Update docs to use `read` instead of `iter` |
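
The updated pattern, as a minimal sketch:

```rust
use bevy::prelude::*;

#[derive(Event)]
struct Message(String);

fn print_messages(mut reader: EventReader<Message>) {
    // `read()` replaces the deprecated `iter()`:
    for Message(text) in reader.read() {
        println!("{text}");
    }
}
```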

3d79dc4cdc
Unify FixedTime and Time while fixing several problems (#8964)
# Objective Current `FixedTime` and `Time` have several problems. This pull aims to fix many of them at once. - If there is a longer pause between app updates, time will jump forward a lot at once and fixed time will iterate on `FixedUpdate` for a large number of steps. If the pause is merely seconds, then this will just mean jerkiness and possible unexpected behaviour in gameplay. If the pause is hours/days as with OS suspend, the game will appear to freeze until it has caught up with real time. - If calculating a fixed step takes longer than specified fixed step period, the game will enter a death spiral where rendering each frame takes longer and longer due to more and more fixed step updates being run per frame and the game appears to freeze. - There is no way to see current fixed step elapsed time inside fixed steps. In order to track this, the game designer needs to add a custom system inside `FixedUpdate` that calculates elapsed or step count in a resource. - Access to delta time inside fixed step is `FixedStep::period` rather than `Time::delta`. This, coupled with the issue that `Time::elapsed` isn't available at all for fixed steps, makes it that time requiring systems are either implemented to be run in `FixedUpdate` or `Update`, but rarely work in both. - Fixes #8800 - Fixes #8543 - Fixes #7439 - Fixes #5692 ## Solution - Create a generic `Time<T>` clock that has no processing logic but which can be instantiated for multiple usages. This is also exposed for users to add custom clocks. - Create three standard clocks, `Time<Real>`, `Time<Virtual>` and `Time<Fixed>`, all of which contain their individual logic. - Create one "default" clock, which is just `Time` (or `Time<()>`), which will be overwritten from `Time<Virtual>` on each update, and `Time<Fixed>` inside `FixedUpdate` schedule. This way systems that do not care specifically which time they track can work both in `Update` and `FixedUpdate` without changes and the behaviour is intuitive. - Add `max_delta` to virtual time update, which limits how much can be added to virtual time by a single update. This fixes both the behaviour after a long freeze, and also the death spiral by limiting how many fixed timestep iterations there can be per update. Possible future work could be adding `max_accumulator` to add a sort of "leaky bucket" time processing to possibly smooth out jumps in time while keeping frame rate stable. - Many minor tweaks and clarifications to the time functions and their documentation. ## Changelog - `Time::raw_delta()`, `Time::raw_elapsed()` and related methods are moved to `Time<Real>::delta()` and `Time<Real>::elapsed()` and now match `Time` API - `FixedTime` is now `Time<Fixed>` and matches `Time` API. - `Time<Fixed>` default timestep is now 64 Hz, or 15625 microseconds. - `Time` inside `FixedUpdate` now reflects fixed timestep time, making systems portable between `Update ` and `FixedUpdate`. - `Time::pause()`, `Time::set_relative_speed()` and related methods must now be called as `Time<Virtual>::pause()` etc. - There is a new `max_delta` setting in `Time<Virtual>` that limits how much the clock can jump by a single update. The default value is 0.25 seconds. - Removed `on_fixed_timer()` condition as `on_timer()` does the right thing inside `FixedUpdate` now. ## Migration Guide - Change all `Res<Time>` instances that access `raw_delta()`, `raw_elapsed()` and related methods to `Res<Time<Real>>` and `delta()`, `elapsed()`, etc. 
- Change access to `period` from `Res<FixedTime>` to `Res<Time<Fixed>>` and use `delta()`. - The default timestep has been changed from 60 Hz to 64 Hz. If you wish to restore the old behaviour, use `app.insert_resource(Time::<Fixed>::from_hz(60.0))`. - Change `app.insert_resource(FixedTime::new(duration))` to `app.insert_resource(Time::<Fixed>::from_duration(duration))` - Change `app.insert_resource(FixedTime::new_from_secs(secs))` to `app.insert_resource(Time::<Fixed>::from_seconds(secs))` - Change `system.on_fixed_timer(duration)` to `system.on_timer(duration)`. Timers in systems placed in `FixedUpdate` schedule automatically use the fixed time clock. - Change `ResMut<Time>` calls to `pause()`, `is_paused()`, `set_relative_speed()` and related methods to `ResMut<Time<Virtual>>` calls. The API is the same, with the exception that `relative_speed()` will return the actual last ste relative speed, while `effective_relative_speed()` returns 0.0 if the time is paused and corresponds to the speed that was set when the update for the current frame started. ## Todo - [x] Update pull name and description - [x] Top level documentation on usage - [x] Fix examples - [x] Decide on default `max_delta` value - [x] Decide naming of the three clocks: is `Real`, `Virtual`, `Fixed` good? - [x] Decide if the three clock inner structures should be in prelude - [x] Decide on best way to configure values at startup: is manually inserting a new clock instance okay, or should there be config struct separately? - [x] Fix links in docs - [x] Decide what should be public and what not - [x] Decide how `wrap_period` should be handled when it is changed - [x] ~~Add toggles to disable setting the clock as default?~~ No, separate pull if needed. - [x] Add tests - [x] Reformat, ensure adheres to conventions etc. - [x] Build documentation and see that it looks correct ## Contributors Huge thanks to @alice-i-cecile and @maniwani while building this pull. It was a shared effort! --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Cameron <51241057+maniwani@users.noreply.github.com> Co-authored-by: Jerome Humbert <djeedai@gmail.com> |
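
A condensed sketch of the three-clock API described in the migration guide (method names as listed above; the systems are made up):

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Restore the old 60 Hz fixed timestep if desired:
        .insert_resource(Time::<Fixed>::from_hz(60.0))
        .add_systems(Update, slow_motion)
        .add_systems(FixedUpdate, fixed_step)
        .run();
}

// Pausing and scaling now go through the virtual clock.
fn slow_motion(mut time: ResMut<Time<Virtual>>) {
    time.set_relative_speed(0.5);
}

// Inside `FixedUpdate`, plain `Time` reflects the fixed clock.
fn fixed_step(time: Res<Time>) {
    println!("fixed delta: {:?}", time.delta());
}
```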

8eb6ccdd87
Remove useless single tuples and trailing commas (#9720)
# Objective Title |

33fdc5f5db
Move schedule name into Schedule (#9600)
# Objective - Move schedule name into `Schedule` to allow the schedule name to be used for errors and tracing in Schedule methods - Fixes #9510 ## Solution - Move label onto `Schedule` and adjust api's on `World` and `Schedule` to not pass explicit label where it makes sense to. - add name to errors and tracing. - `Schedule::new` now takes a label so either add the label or use `Schedule::default` which uses a default label. `default` is mostly used in doc examples and tests. --- ## Changelog - move label onto `Schedule` to improve error message and logging for schedules. ## Migration Guide `Schedule::new` and `App::add_schedule` ```rust // old let schedule = Schedule::new(); app.add_schedule(MyLabel, schedule); // new let schedule = Schedule::new(MyLabel); app.add_schedule(schedule); ``` if you aren't using a label and are using the schedule struct directly you can use the default constructor. ```rust // old let schedule = Schedule::new(); schedule.run(world); // new let schedule = Schedule::default(); schedule.run(world); ``` `Schedules:insert` ```rust // old let schedule = Schedule::new(); schedules.insert(MyLabel, schedule); // new let schedule = Schedule::new(MyLabel); schedules.insert(schedule); ``` `World::add_schedule` ```rust // old let schedule = Schedule::new(); world.add_schedule(MyLabel, schedule); // new let schedule = Schedule::new(MyLabel); world.add_schedule(schedule); ``` |

f99dcadf8a
Add missing documentation to bevy_time (#9428)
# Objective * Add documentation to undocumented items in `bevy_time`. * Add a warning for undocumented items. --------- Co-authored-by: Sélène Amanita <134181069+Selene-Amanita@users.noreply.github.com> |

aeeb20ec4c
bevy_reflect: FromReflect Ergonomics Implementation (#6056)
# Objective **This implementation is based on https://github.com/bevyengine/rfcs/pull/59.** --- Resolves #4597 Full details and motivation can be found in the RFC, but here's a brief summary. `FromReflect` is a very powerful and important trait within the reflection API. It allows Dynamic types (e.g., `DynamicList`, etc.) to be formed into Real ones (e.g., `Vec<i32>`, etc.). This mainly comes into play concerning deserialization, where the reflection deserializers both return a `Box<dyn Reflect>` that almost always contain one of these Dynamic representations of a Real type. To convert this to our Real type, we need to use `FromReflect`. It also sneaks up in other ways. For example, it's a required bound for `T` in `Vec<T>` so that `Vec<T>` as a whole can be made `FromReflect`. It's also required by all fields of an enum as it's used as part of the `Reflect::apply` implementation. So in other words, much like `GetTypeRegistration` and `Typed`, it is very much a core reflection trait. The problem is that it is not currently treated like a core trait and is not automatically derived alongside `Reflect`. This makes using it a bit cumbersome and easy to forget. ## Solution Automatically derive `FromReflect` when deriving `Reflect`. Users can then choose to opt-out if needed using the `#[reflect(from_reflect = false)]` attribute. ```rust #[derive(Reflect)] struct Foo; #[derive(Reflect)] #[reflect(from_reflect = false)] struct Bar; fn test<T: FromReflect>(value: T) {} test(Foo); // <-- OK test(Bar); // <-- Panic! Bar does not implement trait `FromReflect` ``` #### `ReflectFromReflect` This PR also automatically adds the `ReflectFromReflect` (introduced in #6245) registration to the derived `GetTypeRegistration` impl— if the type hasn't opted out of `FromReflect` of course. <details> <summary><h4>Improved Deserialization</h4></summary> > **Warning** > This section includes changes that have since been descoped from this PR. They will likely be implemented again in a followup PR. I am mainly leaving these details in for archival purposes, as well as for reference when implementing this logic again. And since we can do all the above, we might as well improve deserialization. We can now choose to deserialize into a Dynamic type or automatically convert it using `FromReflect` under the hood. `[Un]TypedReflectDeserializer::new` will now perform the conversion and return the `Box`'d Real type. `[Un]TypedReflectDeserializer::new_dynamic` will work like what we have now and simply return the `Box`'d Dynamic type. 
```rust // Returns the Real type let reflect_deserializer = UntypedReflectDeserializer::new(®istry); let mut deserializer = ron:🇩🇪:Deserializer::from_str(input)?; let output: SomeStruct = reflect_deserializer.deserialize(&mut deserializer)?.take()?; // Returns the Dynamic type let reflect_deserializer = UntypedReflectDeserializer::new_dynamic(®istry); let mut deserializer = ron:🇩🇪:Deserializer::from_str(input)?; let output: DynamicStruct = reflect_deserializer.deserialize(&mut deserializer)?.take()?; ``` </details> --- ## Changelog * `FromReflect` is now automatically derived within the `Reflect` derive macro * This includes auto-registering `ReflectFromReflect` in the derived `GetTypeRegistration` impl * ~~Renamed `TypedReflectDeserializer::new` and `UntypedReflectDeserializer::new` to `TypedReflectDeserializer::new_dynamic` and `UntypedReflectDeserializer::new_dynamic`, respectively~~ **Descoped** * ~~Changed `TypedReflectDeserializer::new` and `UntypedReflectDeserializer::new` to automatically convert the deserialized output using `FromReflect`~~ **Descoped** ## Migration Guide * `FromReflect` is now automatically derived within the `Reflect` derive macro. Items with both derives will need to remove the `FromReflect` one. ```rust // OLD #[derive(Reflect, FromReflect)] struct Foo; // NEW #[derive(Reflect)] struct Foo; ``` If using a manual implementation of `FromReflect` and the `Reflect` derive, users will need to opt-out of the automatic implementation. ```rust // OLD #[derive(Reflect)] struct Foo; impl FromReflect for Foo {/* ... */} // NEW #[derive(Reflect)] #[reflect(from_reflect = false)] struct Foo; impl FromReflect for Foo {/* ... */} ``` <details> <summary><h4>Removed Migrations</h4></summary> > **Warning** > This section includes changes that have since been descoped from this PR. They will likely be implemented again in a followup PR. I am mainly leaving these details in for archival purposes, as well as for reference when implementing this logic again. * The reflect deserializers now perform a `FromReflect` conversion internally. The expected output of `TypedReflectDeserializer::new` and `UntypedReflectDeserializer::new` is no longer a Dynamic (e.g., `DynamicList`), but its Real counterpart (e.g., `Vec<i32>`). ```rust let reflect_deserializer = UntypedReflectDeserializer::new_dynamic(®istry); let mut deserializer = ron:🇩🇪:Deserializer::from_str(input)?; // OLD let output: DynamicStruct = reflect_deserializer.deserialize(&mut deserializer)?.take()?; // NEW let output: SomeStruct = reflect_deserializer.deserialize(&mut deserializer)?.take()?; ``` Alternatively, if this behavior isn't desired, use the `TypedReflectDeserializer::new_dynamic` and `UntypedReflectDeserializer::new_dynamic` methods instead: ```rust // OLD let reflect_deserializer = UntypedReflectDeserializer::new(®istry); // NEW let reflect_deserializer = UntypedReflectDeserializer::new_dynamic(®istry); ``` </details> --------- Co-authored-by: Carter Anderson <mcanders1@gmail.com> |

aefe1f0739
Schedule-First: the new and improved add_systems (#8079)
Co-authored-by: Mike <mike.hsu@gmail.com> |
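
The PR body here is only a co-author credit; for context, a minimal sketch of the schedule-first style it introduces (system names made up):

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // The schedule comes first, and tuples add several systems at once:
        .add_systems(Startup, setup)
        .add_systems(Update, (movement, collisions))
        .run();
}

fn setup() {}
fn movement() {}
fn collisions() {}
```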

d1a1f90902
fix Time::pause docs (missing "not") (#7807)
# Objective Time pausing does *not* affect `raw_*`. ## Solution - add missing word "not" |

104c74c3bc
Allow relative speed of -0.0 (#7740)
# Objective -0.0 should typically be treated exactly the same as 0.0, but this assertion was rejecting it while allowing 0.0. The `Duration` API handles this correctly (on recent Rust versions) and returns a zero `Duration` for both -0.0 and 0.0. ## Solution Check for `ratio >= 0.0`, which is true if `ratio` is `-0.0`. --- ## Changelog Allow relative speed of -0.0 in the `Time::set_relative_speed_fXX` methods. |
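
A tiny demonstration of the comparison the fix relies on:

```rust
fn main() {
    let ratio: f32 = -0.0;
    // IEEE 754 negative zero compares equal to positive zero,
    // so a `ratio >= 0.0` check accepts -0.0 just like 0.0:
    assert!(ratio >= 0.0);
    assert!(ratio == 0.0);
}
```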

206c7ce219
Migrate engine to Schedule v3 (#7267)
Huge thanks to @maniwani, @devil-ira, @hymm, @cart, @superdump and @jakobhellermann for the help with this PR. # Objective - Followup #6587. - Minimal integration for the Stageless Scheduling RFC: https://github.com/bevyengine/rfcs/pull/45 ## Solution - [x] Remove old scheduling module - [x] Migrate new methods to no longer use extension methods - [x] Fix compiler errors - [x] Fix benchmarks - [x] Fix examples - [x] Fix docs - [x] Fix tests ## Changelog ### Added - a large number of methods on `App` to work with schedules ergonomically - the `CoreSchedule` enum - `App::add_extract_system` via the `RenderingAppExtension` trait extension method - the private `prepare_view_uniforms` system now has a public system set for scheduling purposes, called `ViewSet::PrepareUniforms` ### Removed - stages, and all code that mentions stages - states have been dramatically simplified, and no longer use a stack - `RunCriteriaLabel` - `AsSystemLabel` trait - `on_hierarchy_reports_enabled` run criteria (now just uses an ad hoc resource checking run condition) - systems in `RenderSet/Stage::Extract` no longer warn when they do not read data from the main world - `RunCriteriaLabel` - `transform_propagate_system_set`: this was a nonstandard pattern that didn't actually provide enough control. The systems are already `pub`: the docs have been updated to ensure that the third-party usage is clear. ### Changed - `System::default_labels` is now `System::default_system_sets`. - `App::add_default_labels` is now `App::add_default_sets` - `CoreStage` and `StartupStage` enums are now `CoreSet` and `StartupSet` - `App::add_system_set` was renamed to `App::add_systems` - The `StartupSchedule` label is now defined as part of the `CoreSchedules` enum - `.label(SystemLabel)` is now referred to as `.in_set(SystemSet)` - `SystemLabel` trait was replaced by `SystemSet` - `SystemTypeIdLabel<T>` was replaced by `SystemSetType<T>` - The `ReportHierarchyIssue` resource now has a public constructor (`new`), and implements `PartialEq` - Fixed time steps now use a schedule (`CoreSchedule::FixedTimeStep`) rather than a run criteria. - Adding rendering extraction systems now panics rather than silently failing if no subapp with the `RenderApp` label is found. - the `calculate_bounds` system, with the `CalculateBounds` label, is now in `CoreSet::Update`, rather than in `CoreSet::PostUpdate` before commands are applied. - `SceneSpawnerSystem` now runs under `CoreSet::Update`, rather than `CoreStage::PreUpdate.at_end()`. - `bevy_pbr::add_clusters` is no longer an exclusive system - the top level `bevy_ecs::schedule` module was replaced with `bevy_ecs::scheduling` - `tick_global_task_pools_on_main_thread` is no longer run as an exclusive system. Instead, it has been replaced by `tick_global_task_pools`, which uses a `NonSend` resource to force running on the main thread. ## Migration Guide - Calls to `.label(MyLabel)` should be replaced with `.in_set(MySet)` - Stages have been removed. Replace these with system sets, and then add command flushes using the `apply_system_buffers` exclusive system where needed. - The `CoreStage`, `StartupStage, `RenderStage` and `AssetStage` enums have been replaced with `CoreSet`, `StartupSet, `RenderSet` and `AssetSet`. The same scheduling guarantees have been preserved. - Systems are no longer added to `CoreSet::Update` by default. 
Add systems manually if this behavior is needed, although you should consider adding your game logic systems to `CoreSchedule::FixedTimestep` instead for more reliable framerate-independent behavior. - Similarly, startup systems are no longer part of `StartupSet::Startup` by default. In most cases, this won't matter to you. - For example, `add_system_to_stage(CoreStage::PostUpdate, my_system)` should be replaced with - `add_system(my_system.in_set(CoreSet::PostUpdate)` - When testing systems or otherwise running them in a headless fashion, simply construct and run a schedule using `Schedule::new()` and `World::run_schedule` rather than constructing stages - Run criteria have been renamed to run conditions. These can now be combined with each other and with states. - Looping run criteria and state stacks have been removed. Use an exclusive system that runs a schedule if you need this level of control over system control flow. - For app-level control flow over which schedules get run when (such as for rollback networking), create your own schedule and insert it under the `CoreSchedule::Outer` label. - Fixed timesteps are now evaluated in a schedule, rather than controlled via run criteria. The `run_fixed_timestep` system runs this schedule between `CoreSet::First` and `CoreSet::PreUpdate` by default. - Command flush points introduced by `AssetStage` have been removed. If you were relying on these, add them back manually. - Adding extract systems is now typically done directly on the main app. Make sure the `RenderingAppExtension` trait is in scope, then call `app.add_extract_system(my_system)`. - the `calculate_bounds` system, with the `CalculateBounds` label, is now in `CoreSet::Update`, rather than in `CoreSet::PostUpdate` before commands are applied. You may need to order your movement systems to occur before this system in order to avoid system order ambiguities in culling behavior. - the `RenderLabel` `AppLabel` was renamed to `RenderApp` for clarity - `App::add_state` now takes 0 arguments: the starting state is set based on the `Default` impl. - Instead of creating `SystemSet` containers for systems that run in stages, simply use `.on_enter::<State::Variant>()` or its `on_exit` or `on_update` siblings. - `SystemLabel` derives should be replaced with `SystemSet`. You will also need to add the `Debug`, `PartialEq`, `Eq`, and `Hash` traits to satisfy the new trait bounds. - `with_run_criteria` has been renamed to `run_if`. Run criteria have been renamed to run conditions for clarity, and should now simply return a bool. - States have been dramatically simplified: there is no longer a "state stack". 
To queue a transition to the next state, call `NextState::set` ## TODO - [x] remove dead methods on App and World - [x] add `App::add_system_to_schedule` and `App::add_systems_to_schedule` - [x] avoid adding the default system set at inappropriate times - [x] remove any accidental cycles in the default plugins schedule - [x] migrate benchmarks - [x] expose explicit labels for the built-in command flush points - [x] migrate engine code - [x] remove all mentions of stages from the docs - [x] verify docs for States - [x] fix uses of exclusive systems that use .end / .at_start / .before_commands - [x] migrate RenderStage and AssetStage - [x] migrate examples - [x] ensure that transform propagation is exported in a sufficiently public way (the systems are already pub) - [x] ensure that on_enter schedules are run at least once before the main app - [x] re-enable opt-in to execution order ambiguities - [x] revert change to `update_bounds` to ensure it runs in `PostUpdate` - [x] test all examples - [x] unbreak directional lights - [x] unbreak shadows (see 3d_scene, 3d_shape, lighting, transparaency_3d examples) - [x] game menu example shows loading screen and menu simultaneously - [x] display settings menu is a blank screen - [x] `without_winit` example panics - [x] ensure all tests pass - [x] SubApp doc test fails - [x] runs_spawn_local tasks fails - [x] [Fix panic_when_hierachy_cycle test hanging](https://github.com/alice-i-cecile/bevy/pull/120) ## Points of Difficulty and Controversy **Reviewers, please give feedback on these and look closely** 1. Default sets, from the RFC, have been removed. These added a tremendous amount of implicit complexity and result in hard to debug scheduling errors. They're going to be tackled in the form of "base sets" by @cart in a followup. 2. The outer schedule controls which schedule is run when `App::update` is called. 3. I implemented `Label for `Box<dyn Label>` for our label types. This enables us to store schedule labels in concrete form, and then later run them. I ran into the same set of problems when working with one-shot systems. We've previously investigated this pattern in depth, and it does not appear to lead to extra indirection with nested boxes. 4. `SubApp::update` simply runs the default schedule once. This sucks, but this whole API is incomplete and this was the minimal changeset. 5. `time_system` and `tick_global_task_pools_on_main_thread` no longer use exclusive systems to attempt to force scheduling order 6. Implemetnation strategy for fixed timesteps 7. `AssetStage` was migrated to `AssetSet` without reintroducing command flush points. These did not appear to be used, and it's nice to remove these bottlenecks. 8. Migration of `bevy_render/lib.rs` and pipelined rendering. The logic here is unusually tricky, as we have complex scheduling requirements. ## Future Work (ideally before 0.10) - Rename schedule_v3 module to schedule or scheduling - Add a derive macro to states, and likely a `EnumIter` trait of some form - Figure out what exactly to do with the "systems added should basically work by default" problem - Improve ergonomics for working with fixed timesteps and states - Polish FixedTime API to match Time - Rebase and merge #7415 - Resolve all internal ambiguities (blocked on better tools, especially #7442) - Add "base sets" to replace the removed default sets. |

bfafa781c1
re-enable tests on apple silicon (#7400)
# Objective - Fixes #6687 ## Solution - The future is now! Rust 1.67 was released with the fix |

54a1e51623
TaskPool Panic Handling (#6443)
# Objective
Right now, the `TaskPool` implementation allows panics to permanently kill worker threads upon panicking. This is currently non-recoverable without using a `std::panic::catch_unwind` in every scheduled task. This is poor ergonomics and even poorer developer experience. This is exacerbated by #2250 as these threads are global and cannot be replaced after initialization.
Removes the need for temporary fixes like #4998. Fixes #4996. Fixes #6081. Fixes #5285. Fixes #5054. Supersedes #2307.
## Solution
The current solution is to wrap `Executor::run` in `TaskPool` with a `catch_unwind` and discard the potential panic. This was taken straight from [smol](
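
A rough, self-contained sketch of the pattern (not the actual Bevy or smol code; the names and shutdown signal are made up):

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// Simplified stand-in for a worker thread's body: keep driving the executor,
// and if a scheduled task panics, catch the unwind here so the worker thread
// survives instead of dying permanently.
fn worker_loop(mut tick_executor: impl FnMut() -> bool) {
    loop {
        match catch_unwind(AssertUnwindSafe(&mut tick_executor)) {
            Ok(false) => break,  // executor asked us to shut down
            Ok(true) => {}       // keep going
            Err(_panic) => {}    // a task panicked; discard it and continue
        }
    }
}

fn main() {
    let mut remaining = 3;
    worker_loop(|| {
        remaining -= 1;
        if remaining == 1 {
            panic!("task panicked"); // printed by the panic hook, then caught
        }
        remaining > 0
    });
    println!("worker loop exited cleanly");
}
```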

a083882cb2
ignore nanosec precision tests on apple m1 (#6377)
# Objective

- Some tests are very flaky on an M1
- M1 chips currently have a 41 ns timer precision

## Solution

- Do not run tests that compare a `Duration` or an `f64` on an M1 (or M2)

1d22634cfb
better wording for time scaling docs (#6340)
Quick follow-up to #5752. I think this is a slightly better wording. |

7989cb2650
Add global time scaling (#5752)
# Objective

- Make `Time` API more consistent.
- Support time accel/decel/pause.

## Solution

This is just the `Time` half of #3002. I was told that part isn't controversial.

- Give the "delta time" and "total elapsed time" methods `f32`, `f64`, and `Duration` variants with consistent naming.
- Implement accelerating / decelerating the passage of time.
- Implement stopping time.

---

## Changelog

- Changed `time_since_startup` to `elapsed` because `time.time_*` is just silly.
- Added `relative_speed` and `set_relative_speed` methods.
- Added `is_paused`, `pause`, and `unpause` methods. (I'd prefer `resume`, but `unpause` matches the `Timer` API.)
- Added `raw_*` variants of the "delta time" and "total elapsed time" methods.
- Added `first_update` method because there's a non-zero duration between startup and the first update.

## Migration Guide

- `time.time_since_startup()` -> `time.elapsed()`
- `time.seconds_since_startup()` -> `time.elapsed_seconds_f64()`
- `time.seconds_since_startup_wrapped_f32()` -> `time.elapsed_seconds_wrapped()`

If you aren't sure which to use, most systems should continue to use "scaled" time (e.g. `time.delta_seconds()`). The realtime "unscaled" time measurements (e.g. `time.raw_delta_seconds()`) are mostly for debugging and profiling.
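
A short sketch using the method names from the changelog (note this predates the later `Time<Virtual>` split further up this log, where these calls moved):

```rust
use bevy::prelude::*;

fn bullet_time(mut time: ResMut<Time>) {
    if !time.is_paused() {
        // Scaled readings such as `delta_seconds()` reflect this;
        // the `raw_*` variants keep reporting wall-clock time.
        time.set_relative_speed(0.5);
    }
}
```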

c4d1ae0a47
add time wrapping to Time (#5982)
# Objective

- Sometimes, like when using shaders, you can only use a time value as an `f32`. Unfortunately this suffers from floating-point precision issues pretty quickly. The standard approach to this problem is to wrap the time after a given period.
- This is necessary for https://github.com/bevyengine/bevy/pull/5409

## Solution

- Add a `seconds_since_last_wrapping_period` method on `Time` that returns an `f32` that is the `seconds_since_startup` modulo the `max_wrapping_period`

---

## Changelog

Added `seconds_since_last_wrapping_period` to `Time`

## Additional info

I'm very open to hearing better names. I don't really like the current naming, I just went with something descriptive.

Co-authored-by: Charles <IceSentry@users.noreply.github.com>
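
The underlying idea, as a standalone sketch (the function and parameter names here are made up for illustration):

```rust
// Keep full precision in f64 and hand shaders an f32 that wraps every
// `period` seconds, so the f32 never grows large enough to lose precision.
fn wrapped_seconds(seconds_since_startup: f64, period: f64) -> f32 {
    (seconds_since_startup % period) as f32
}

fn main() {
    let elapsed = 1_000_000.123_f64; // ~11.6 days of runtime
    println!("{}", wrapped_seconds(elapsed, 3600.0)); // stays within 0..3600
}
```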

aed3232e38
bevy_reflect: Relax bounds on Option<T> (#5658)
# Objective The reflection impls on `Option<T>` have the bound `T: Reflect + Clone`. This means that using `FromReflect` requires `Clone` even though we can normally get away with just `FromReflect`. ## Solution Update the bounds on `Option<T>` to match that of `Vec<T>`, where `T: FromReflect`. This helps remove a `Clone` implementation that may be undesired but added for the sole purpose of getting the code to compile. --- ## Changelog * Reflection on `Option<T>` now has `T` bound by `FromReflect` rather than `Reflect + Clone` * Added a `FromReflect` impl for `Instant` ## Migration Guide If using `Option<T>` with Bevy's reflection API, `T` now needs to implement `FromReflect` rather than just `Clone`. This can be achieved easily by simply deriving `FromReflect`: ```rust // OLD #[derive(Reflect, Clone)] struct Foo; let reflected: Box<dyn Reflect> = Box::new(Some(Foo)); // NEW #[derive(Reflect, FromReflect)] struct Foo; let reflected: Box<dyn Reflect> = Box::new(Some(Foo)); ``` > Note: You can still derive `Clone`, but it's not required in order to compile. |

992681b59b
Make Resource trait opt-in, requiring #[derive(Resource)] V2 (#5577)
*This PR description is an edited copy of #5007, written by @alice-i-cecile.* # Objective Follow-up to https://github.com/bevyengine/bevy/pull/2254. The `Resource` trait currently has a blanket implementation for all types that meet its bounds. While ergonomic, this results in several drawbacks: * it is possible to make confusing, silent mistakes such as inserting a function pointer (Foo) rather than a value (Foo::Bar) as a resource * it is challenging to discover if a type is intended to be used as a resource * we cannot later add customization options (see the [RFC](https://github.com/bevyengine/rfcs/blob/main/rfcs/27-derive-component.md) for the equivalent choice for Component). * dependencies can use the same Rust type as a resource in invisibly conflicting ways * raw Rust types used as resources cannot preserve privacy appropriately, as anyone able to access that type can read and write to internal values * we cannot capture a definitive list of possible resources to display to users in an editor ## Notes to reviewers * Review this commit-by-commit; there's effectively no back-tracking and there's a lot of churn in some of these commits. *ira: My commits are not as well organized :')* * I've relaxed the bound on Local to Send + Sync + 'static: I don't think these concerns apply there, so this can keep things simple. Storing e.g. a u32 in a Local is fine, because there's a variable name attached explaining what it does. * I think this is a bad place for the Resource trait to live, but I've left it in place to make reviewing easier. IMO that's best tackled with https://github.com/bevyengine/bevy/issues/4981. ## Changelog `Resource` is no longer automatically implemented for all matching types. Instead, use the new `#[derive(Resource)]` macro. ## Migration Guide Add `#[derive(Resource)]` to all types you are using as a resource. If you are using a third party type as a resource, wrap it in a tuple struct to bypass orphan rules. Consider deriving `Deref` and `DerefMut` to improve ergonomics. `ClearColor` no longer implements `Component`. Using `ClearColor` as a component in 0.8 did nothing. Use the `ClearColorConfig` in the `Camera3d` and `Camera2d` components instead. Co-authored-by: Alice <alice.i.cecile@gmail.com> Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: devil-ira <justthecooldude@gmail.com> Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
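
A minimal sketch of the migration guide's advice (the types are made up):

```rust
use bevy::prelude::*;

// Resources now opt in explicitly:
#[derive(Resource, Default)]
struct Score(u32);

// A third-party or std type gets a newtype wrapper to satisfy orphan rules;
// `Deref`/`DerefMut` keep access ergonomic.
#[derive(Resource, Deref, DerefMut)]
struct AssetRoot(std::path::PathBuf);

fn setup(mut commands: Commands) {
    commands.insert_resource(Score(0));
    commands.insert_resource(AssetRoot(std::path::PathBuf::from("assets")));
}
```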

4b191d968d
remove blanket Serialize + Deserialize requirement for Reflect on generic types (#5197)
# Objective Some generic types like `Option<T>`, `Vec<T>` and `HashMap<K, V>` implement `Reflect` when where their generic types `T`/`K`/`V` implement `Serialize + for<'de> Deserialize<'de>`. This is so that in their `GetTypeRegistration` impl they can insert the `ReflectSerialize` and `ReflectDeserialize` type data structs. This has the annoying side effect that if your struct contains a `Option<NonSerdeStruct>` you won't be able to derive reflect (https://github.com/bevyengine/bevy/issues/4054). ## Solution - remove the `Serialize + Deserialize` bounds on wrapper types - this means that `ReflectSerialize` and `ReflectDeserialize` will no longer be inserted even for `.register::<Option<DoesImplSerde>>()` - add `register_type_data<T, D>` shorthand for `registry.get_mut(T).insert(D::from_type<T>())` - require users to register their specific generic types **and the serde types** separately like ```rust .register_type::<Option<String>>() .register_type_data::<Option<String>, ReflectSerialize>() .register_type_data::<Option<String>, ReflectDeserialize>() ``` I believe this is the best we can do for extensibility and convenience without specialization. ## Changelog - `.register_type` for generic types like `Option<T>`, `Vec<T>`, `HashMap<K, V>` will no longer insert `ReflectSerialize` and `ReflectDeserialize` type data. Instead you need to register it separately for concrete generic types like so: ```rust .register_type::<Option<String>>() .register_type_data::<Option<String>, ReflectSerialize>() .register_type_data::<Option<String>, ReflectDeserialize>() ``` TODO: more docs and tweaks to the scene example to demonstrate registering generic types. |

644bd5dbc6
Split time functionality into bevy_time (#4187)
# Objective Reduce the catch-all grab-bag of functionality in bevy_core by minimally splitting off time functionality into bevy_time. Functionality like that provided by #3002 would increase the complexity of bevy_time, so this is a good candidate for pulling into its own unit. A step in addressing #2931 and splitting bevy_core into more specific locations. ## Solution Pull the time module of bevy_core into a new crate, bevy_time. # Migration guide - Time related types (e.g. `Time`, `Timer`, `Stopwatch`, `FixedTimestep`, etc.) should be imported from `bevy::time::*` rather than `bevy::core::*`. - If you were adding `CorePlugin` manually, you'll also want to add `TimePlugin` from `bevy::time`. - The `bevy::core::CorePlugin::Time` system label is replaced with `bevy::time::TimeSystem`. Co-authored-by: Carter Anderson <mcanders1@gmail.com> |