0e46b04560
4 Commits
992681b59b
Make Resource trait opt-in, requiring #[derive(Resource)] V2 (#5577)
*This PR description is an edited copy of #5007, written by @alice-i-cecile.*

# Objective

Follow-up to https://github.com/bevyengine/bevy/pull/2254. The `Resource` trait currently has a blanket implementation for all types that meet its bounds. While ergonomic, this results in several drawbacks:

* it is possible to make confusing, silent mistakes such as inserting a function pointer (Foo) rather than a value (Foo::Bar) as a resource
* it is challenging to discover if a type is intended to be used as a resource
* we cannot later add customization options (see the [RFC](https://github.com/bevyengine/rfcs/blob/main/rfcs/27-derive-component.md) for the equivalent choice for Component)
* dependencies can use the same Rust type as a resource in invisibly conflicting ways
* raw Rust types used as resources cannot preserve privacy appropriately, as anyone able to access that type can read and write to internal values
* we cannot capture a definitive list of possible resources to display to users in an editor

## Notes to reviewers

* Review this commit-by-commit; there's effectively no back-tracking and there's a lot of churn in some of these commits. *ira: My commits are not as well organized :')*
* I've relaxed the bound on Local to Send + Sync + 'static: I don't think these concerns apply there, so this can keep things simple. Storing e.g. a u32 in a Local is fine, because there's a variable name attached explaining what it does.
* I think this is a bad place for the Resource trait to live, but I've left it in place to make reviewing easier. IMO that's best tackled with https://github.com/bevyengine/bevy/issues/4981.

## Changelog

`Resource` is no longer automatically implemented for all matching types. Instead, use the new `#[derive(Resource)]` macro.

## Migration Guide

Add `#[derive(Resource)]` to all types you are using as a resource. If you are using a third party type as a resource, wrap it in a tuple struct to bypass orphan rules. Consider deriving `Deref` and `DerefMut` to improve ergonomics, as sketched below.

`ClearColor` no longer implements `Component`. Using `ClearColor` as a component in 0.8 did nothing. Use the `ClearColorConfig` in the `Camera3d` and `Camera2d` components instead.

Co-authored-by: Alice <alice.i.cecile@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: devil-ira <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
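A minimal sketch of the migration described above, against the post-derive API. `Score` and `StartTime` are hypothetical names, and `std::time::Instant` merely stands in for a third-party type you do not own:

```rust
use bevy::prelude::*;
use std::time::Instant;

// A resource type owned by this crate: opt in with the derive.
#[derive(Resource)]
struct Score(u32);

// A foreign type used as a resource: wrap it in a tuple struct to satisfy the
// orphan rule, and derive Deref/DerefMut so call sites stay ergonomic.
#[derive(Resource, Deref, DerefMut)]
struct StartTime(Instant);

fn report(score: Res<Score>, start: Res<StartTime>) {
    // Deref lets us call Instant::elapsed directly on the wrapper.
    info!("score {} after {:?}", score.0, start.elapsed());
}

fn main() {
    App::new()
        .insert_resource(Score(0))
        .insert_resource(StartTime(Instant::now()))
        .add_system(report)
        .run();
}
```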
40d4992401
Visibility Inheritance, universal ComputedVisibility and RenderLayers support (#5310)
# Objective

Fixes #4907. Fixes #838. Fixes #5089. Supersedes #5146. Supersedes #2087. Supersedes #865. Supersedes #5114.

Visibility is currently entirely local. Set a parent entity to be invisible, and the children are still visible. This makes it hard for users to hide entire hierarchies of entities.

Additionally, the semantics of `Visibility` vs `ComputedVisibility` are inconsistent across entity types. 3D meshes use `ComputedVisibility` as the "definitive" visibility component, with `Visibility` being just one data source. Sprites just use `Visibility`, which means they can't feed off of `ComputedVisibility` data, such as culling information, RenderLayers, and (added in this PR) visibility inheritance information.

## Solution

Splits `ComputedVisibility::is_visible` into `ComputedVisibility::is_visible_in_view` and `ComputedVisibility::is_visible_in_hierarchy`. For each visible entity, `is_visible_in_hierarchy` is computed by propagating visibility down the hierarchy. The `ComputedVisibility::is_visible()` function combines these two booleans for the canonical "is this entity visible" check.

Additionally, all entities that have `Visibility` now also have `ComputedVisibility`. Sprites, Lights, and UI entities now use `ComputedVisibility` when appropriate. This means that in addition to visibility inheritance, everything using `Visibility` now also supports `RenderLayers`. Notably, Sprites (and other 2D objects) now support `RenderLayers` and work properly across multiple views.

Also note that this does increase the amount of work done per sprite. Bevymark with 100,000 sprites on `main` runs in `0.017612` seconds and this runs in `0.01902`. That is certainly a gap, but I believe the API consistency and extra functionality this buys us is worth it. See [this thread](https://github.com/bevyengine/bevy/pull/5146#issuecomment-1182783452) for more info. Note that #5146 in combination with #5114 _are_ a viable alternative to this PR and _would_ perform better, but that comes at the cost of API inconsistencies and doing visibility calculations in the "wrong" place. The current visibility system does have potential for performance improvements. I would prefer to evolve that one system as a whole rather than doing custom hacks / different behaviors for each feature slice.

Here is a "split screen" example where the left camera uses RenderLayers to filter out the blue sprite.

Note that this builds directly on #5146 and that @james7132 deserves the credit for the baseline visibility inheritance work. This PR moves the inherited visibility field into `ComputedVisibility`, then does the additional work of porting everything to `ComputedVisibility`. See my [comments here](https://github.com/bevyengine/bevy/pull/5146#issuecomment-1182783452) for rationale.

## Follow up work

* Now that lights use `ComputedVisibility`, `VisibleEntities` now includes "visible lights" in the entity list. Functionally not a problem, as we use queries to filter the list down in the desired context. But we should consider splitting this out into a separate `VisibleLights` collection for both clarity and performance reasons. And _maybe_ even consider scoping `VisibleEntities` down to `VisibleMeshes`?
* Investigate alternative sprite rendering impls (in combination with visibility system tweaks) that avoid re-generating a per-view fixedbitset of visible entities every frame, then checking each ExtractedEntity. This is where most of the performance overhead lives. Ex: we could generate ExtractedEntities per-view using the VisibleEntities list, avoiding the need for the bitset.
* Should `ComputedVisibility` use bitflags under the hood? This would cut down on the size of the component, potentially speed up the `is_visible()` function, and allow us to cheaply expand `ComputedVisibility` with more data (ex: split out local visibility and parent visibility, add more culling classes, etc).

---

## Changelog

* `ComputedVisibility` now takes hierarchy visibility into account.
* 2D, UI and Light entities now use the `ComputedVisibility` component.

## Migration Guide

If you were previously reading `Visibility::is_visible` as the "actual visibility" for sprites or lights, use `ComputedVisibility::is_visible()` instead:

```rust
// before (0.7)
fn system(query: Query<&Visibility>) {
    for visibility in query.iter() {
        if visibility.is_visible {
            info!("found visible entity");
        }
    }
}

// after (0.8)
fn system(query: Query<&ComputedVisibility>) {
    for visibility in query.iter() {
        if visibility.is_visible() {
            info!("found visible entity");
        }
    }
}
```

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
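To make the new semantics concrete, here is a minimal sketch against the 0.8-era API used in the migration example above. The `setup` and `report` systems are illustrative names, not part of the PR:

```rust
use bevy::prelude::*;

// Hide a parent sprite; with visibility inheritance its children are now
// hidden too, via ComputedVisibility.
fn setup(mut commands: Commands) {
    commands.spawn_bundle(Camera2dBundle::default());

    commands
        .spawn_bundle(SpriteBundle {
            visibility: Visibility { is_visible: false }, // hide the parent...
            ..default()
        })
        .with_children(|parent| {
            // ...and this child is culled as well, even though its own
            // Visibility component still says "visible".
            parent.spawn_bundle(SpriteBundle::default());
        });
}

// Read the split flags alongside the combined result.
fn report(query: Query<(Entity, &ComputedVisibility)>) {
    for (entity, computed) in query.iter() {
        info!(
            "{:?}: in_hierarchy={} in_view={} visible={}",
            entity,
            computed.is_visible_in_hierarchy(),
            computed.is_visible_in_view(),
            computed.is_visible(),
        );
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(setup)
        .add_system(report)
        .run();
}
```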
7b2cf98896
Make RenderStage::Extract run on the render world (#4402)
# Objective

- Currently, the `Extract` `RenderStage` is executed on the main world, with the render world available as a resource.
- However, when needing access to resources in the render world (e.g. to mutate them), the only way to do so was to get exclusive access to the whole `RenderWorld` resource.
- This meant that effectively only one extract system that wrote to resources could run at a time.
- We also did not previously make writing to the world during `Extract` the non-happy path, even though we want to discourage it.

## Solution

- Move the extract stage to run on the render world.
- Add the main world as a `MainWorld` resource.
- Add an `Extract` `SystemParam` as a convenience to access a (read only) `SystemParam` in the main world during `Extract`.

## Future work

It should be possible to avoid needing to use `get_or_spawn` for the render commands, since now the `Commands`' `Entities` matches up with the world being executed on. We need to determine how this interacts with https://github.com/bevyengine/bevy/pull/3519.

It's theoretically possible to remove the need for the `value` method on `Extract`. However, that requires slightly changing the `SystemParam` interface, which would make it more complicated. That would probably mess up the `SystemState` api too.

## Todo

I still need to add doc comments to `Extract`.

---

## Changelog

### Changed

- The `Extract` `RenderStage` now runs on the render world (instead of the main world as before). You must use the `Extract` `SystemParam` to access the main world during the extract phase. Resources on the render world can now be accessed using `ResMut` during extract.

### Removed

- `Commands::spawn_and_forget`. Use `Commands::get_or_spawn(e).insert_bundle(bundle)` instead.

## Migration Guide

The `Extract` `RenderStage` now runs on the render world (instead of the main world as before). You must use the `Extract` `SystemParam` to access the main world during the extract phase. `Extract` takes a single type parameter, which is any system parameter (such as `Res`, `Query` etc.). It will extract this from the main world, and return the result of this extraction when `value` is called on it.

For example, if previously your extract system looked like:

```rust
fn extract_clouds(mut commands: Commands, clouds: Query<Entity, With<Cloud>>) {
    for cloud in clouds.iter() {
        commands.get_or_spawn(cloud).insert(Cloud);
    }
}
```

the new version would be:

```rust
fn extract_clouds(mut commands: Commands, mut clouds: Extract<Query<Entity, With<Cloud>>>) {
    for cloud in clouds.value().iter() {
        commands.get_or_spawn(cloud).insert(Cloud);
    }
}
```

The diff is:

```diff
--- a/src/clouds.rs
+++ b/src/clouds.rs
@@ -1,5 +1,5 @@
-fn extract_clouds(mut commands: Commands, clouds: Query<Entity, With<Cloud>>) {
-    for cloud in clouds.iter() {
+fn extract_clouds(mut commands: Commands, mut clouds: Extract<Query<Entity, With<Cloud>>>) {
+    for cloud in clouds.value().iter() {
         commands.get_or_spawn(cloud).insert(Cloud);
     }
 }
```

You can now also access resources from the render world using the normal system parameters during `Extract`:

```rust
fn extract_assets(mut render_assets: ResMut<MyAssets>, source_assets: Extract<Res<MyAssets>>) {
    *render_assets = source_assets.clone();
}
```

Please note that all existing extract systems need to be updated to match this new style; even if they currently compile they will not run as expected. A warning will be emitted on a best-effort basis if this is not met.

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
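To show how the pieces fit together, here is a hedged wiring sketch assuming the 0.8-era `RenderApp`/`RenderStage` API and the `value()` accessor this migration guide documents. `CloudSettings`, `CloudPlugin`, and `extract_cloud_settings` are hypothetical names:

```rust
use bevy::prelude::*;
use bevy::render::{Extract, RenderApp, RenderStage};

// Hypothetical settings resource that lives in the main world and is mirrored
// into the render world every frame.
#[derive(Clone, Default)]
struct CloudSettings {
    density: f32,
}

// Runs on the render world during RenderStage::Extract: the main-world value
// is read through the Extract SystemParam, while the render-world copy is
// written directly with ResMut.
fn extract_cloud_settings(
    mut render_settings: ResMut<CloudSettings>,
    main_settings: Extract<Res<CloudSettings>>,
) {
    render_settings.density = main_settings.value().density;
}

struct CloudPlugin;

impl Plugin for CloudPlugin {
    fn build(&self, app: &mut App) {
        app.init_resource::<CloudSettings>();

        // Extract systems are registered on the render sub-app's Extract stage.
        if let Ok(render_app) = app.get_sub_app_mut(RenderApp) {
            render_app
                .init_resource::<CloudSettings>()
                .add_system_to_stage(RenderStage::Extract, extract_cloud_settings);
        }
    }
}
```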
a0a3d8798b
ExtractResourcePlugin (#3745)
# Objective

- Add an `ExtractResourcePlugin` for convenience and consistency.

## Solution

- Add an `ExtractResourcePlugin` similar to `ExtractComponentPlugin`, but for ECS `Resource`s. The system that is executed simply clones the main world resource into a render world resource, if and only if the main world resource was either added or changed since the last execution of the system.
- Add an `ExtractResource` trait with a `fn extract_resource(res: &Self) -> Self` function. This is used by the `ExtractResourcePlugin` to extract the resource.
- Add a derive macro for `ExtractResource` on a `Resource` with the `Clone` trait, which simply returns `res.clone()`, as sketched below.
- Use `ExtractResourcePlugin` wherever it is both possible and appropriate.
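A minimal usage sketch, assuming the 0.8-era `bevy::render::extract_resource` paths; `FogSettings` is a hypothetical resource. The derive requires `Clone` and the plugin copies the value into the render world whenever the main-world resource is added or changed:

```rust
use bevy::prelude::*;
use bevy::render::extract_resource::{ExtractResource, ExtractResourcePlugin};

// Hypothetical main-world resource to mirror into the render world.
// The derive simply clones the value across.
#[derive(Clone, Default, ExtractResource)]
struct FogSettings {
    density: f32,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .init_resource::<FogSettings>()
        // Registers the extract system that clones FogSettings into the
        // render world when it is added or changed.
        .add_plugin(ExtractResourcePlugin::<FogSettings>::default())
        .run();
}
```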