37aae00120
14 Commits
**38c3423693** — Event Split: `Event`, `EntityEvent`, and `BufferedEvent` (#19647)
# Objective

Closes #19564. The current `Event` trait looks like this:

```rust
pub trait Event: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

The `Event` trait is used by both buffered events (`EventReader`/`EventWriter`) and observer events. If they are observer events, they can optionally be targeted at specific `Entity`s or `ComponentId`s, and can even be propagated to other entities.

However, there has long been a desire to split the trait semantically for a variety of reasons; see #14843, #14272, and #16031 for discussion. Some reasons include:

- It's very uncommon to use a single event type as both a buffered event and a targeted observer event. They are used differently and tend to have distinct semantics.
- A common footgun is using buffered events with observers or event readers with observer events, as there is no type-level error that prevents this kind of misuse.
- #19440 made `Trigger::target` return an `Option<Entity>`. This *seriously* hurts ergonomics for the general case of entity observers, as you need to `.unwrap()` each time. If we could statically determine whether the event is expected to have an entity target, this would be unnecessary.

There are really two main ways that we can categorize events: push vs. pull (i.e. "observer event" vs. "buffered event") and global vs. targeted:

|              | Push            | Pull                        |
| ------------ | --------------- | --------------------------- |
| **Global**   | Global observer | `EventReader`/`EventWriter` |
| **Targeted** | Entity observer | -                           |

There are many ways to approach this, each with their tradeoffs. Ultimately, we kind of want to split events both ways:

- A type-level distinction between observer events and buffered events, to prevent people from using the wrong kind of event in APIs
- A statically designated entity target for observer events to avoid accidentally using untargeted events for targeted APIs

This PR achieves these goals by splitting event traits into `Event`, `EntityEvent`, and `BufferedEvent`, with `Event` being the shared trait implemented by all events.

## `Event`, `EntityEvent`, and `BufferedEvent`

`Event` is now a very simple trait shared by all events.

```rust
pub trait Event: Send + Sync + 'static {
    // Required for observer APIs
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

You can call `trigger` for *any* event, and use a global observer for listening to the event.

```rust
#[derive(Event)]
struct Speak {
    message: String,
}

// ...

app.add_observer(|trigger: On<Speak>| {
    println!("{}", trigger.message);
});

// ...

commands.trigger(Speak {
    message: "Y'all like these reworked events?".to_string(),
});
```

To allow an event to be targeted at entities and even propagated further, you can additionally implement the `EntityEvent` trait:

```rust
pub trait EntityEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This lets you call `trigger_targets`, and use targeted observer APIs like `EntityCommands::observe`:

```rust
#[derive(Event, EntityEvent)]
#[entity_event(traversal = &'static ChildOf, auto_propagate)]
struct Damage {
    amount: f32,
}

// ...

let enemy = commands.spawn((Enemy, Health(100.0))).id();

// Spawn some armor as a child of the enemy entity.
// When the armor takes damage, it will bubble the event up to the enemy.
let armor_piece = commands
    .spawn((ArmorPiece, Health(25.0), ChildOf(enemy)))
    .observe(|trigger: On<Damage>, mut query: Query<&mut Health>| {
        // Note: `On::target` only exists because this is an `EntityEvent`.
        let mut health = query.get(trigger.target()).unwrap();
        health.0 -= trigger.amount();
    });

commands.trigger_targets(Damage { amount: 10.0 }, armor_piece);
```

> [!NOTE]
> You *can* still also trigger an `EntityEvent` without targets using `trigger`. We probably *could* make this an either-or thing, but I'm not sure that's actually desirable.

To allow an event to be used with the buffered API, you can implement `BufferedEvent`:

```rust
pub trait BufferedEvent: Event {}
```

The event can then be used with `EventReader`/`EventWriter`:

```rust
#[derive(Event, BufferedEvent)]
struct Message(String);

fn write_hello(mut writer: EventWriter<Message>) {
    writer.write(Message("I hope these examples are alright".to_string()));
}

fn read_messages(mut reader: EventReader<Message>) {
    // Process all buffered events of type `Message`.
    for Message(message) in reader.read() {
        println!("{message}");
    }
}
```

In summary:

- Need a basic event you can trigger and observe? Derive `Event`!
- Need the event to be targeted at an entity? Derive `EntityEvent`!
- Need the event to be buffered and support the `EventReader`/`EventWriter` API? Derive `BufferedEvent`!

## Alternatives

I'll now cover some of the alternative approaches I have considered and briefly explored. I made this section collapsible since it ended up being quite long :P

<details>
<summary>Expand this to see alternatives</summary>

### 1. Unified `Event` Trait

One option is not to have *three* separate traits (`Event`, `EntityEvent`, `BufferedEvent`), and to instead just use associated constants on `Event` to determine whether an event supports targeting and buffering or not:

```rust
pub trait Event: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
    const TARGETED: bool = false;
    const BUFFERED: bool = false;
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

Methods can then use bounds like `where E: Event<TARGETED = true>` or `where E: Event<BUFFERED = true>` to limit APIs to specific kinds of events.

This would keep everything under one `Event` trait, but I don't think it's necessarily a good idea. It makes APIs harder to read, and docs can't easily refer to specific types of events. You can also create weird invariants: what if you specify `TARGETED = false`, but have `Traversal` and/or `AUTO_PROPAGATE` enabled?

### 2. `Event` and `Trigger`

Another option is to only split the traits between buffered events and observer events, since that is the main thing people have been asking for, and they have the largest API difference.

If we did this, I think we would need to make the terms *clearly* separate. We can't really use `Event` and `BufferedEvent` as the names, since it would be strange that `BufferedEvent` doesn't implement `Event`. Something like `ObserverEvent` and `BufferedEvent` could work, but it'd be more verbose.

For this approach, I would instead keep `Event` for the current `EventReader`/`EventWriter` API, and call the observer event a `Trigger`, since the "trigger" terminology is already used in the observer context within Bevy (both as a noun and a verb). This is also what a long [bikeshed on Discord](https://discord.com/channels/691052431525675048/749335865876021248/1298057661878898791) seemed to land on at the end of last year.

```rust
// For `EventReader`/`EventWriter`
pub trait Event: Send + Sync + 'static {}

// For observers
pub trait Trigger: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
    const TARGETED: bool = false;
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

The problem is that "event" is just a really good term for something that "happens". Observers are rapidly becoming the more prominent API, so it'd be weird to give them the `Trigger` name and leave the good `Event` name for the less common API. So, even though a split like this seems neat on the surface, I think it ultimately wouldn't really work. We want to keep the `Event` name for observer events, and there is no good alternative for the buffered variant. (`Message` was suggested, but saying stuff like "sends a collision message" is weird.)

### 3. `GlobalEvent` + `TargetedEvent`

What if instead of focusing on the buffered vs. observed split, we *only* make a distinction between global and targeted events?

```rust
// A shared event trait to allow global observers to work
pub trait Event: Send + Sync + 'static {
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}

// For buffered events and non-targeted observer events
pub trait GlobalEvent: Event {}

// For targeted observer events
pub trait TargetedEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This is actually the first approach I implemented, and it has the neat characteristic that you can only use non-targeted APIs like `trigger` with a `GlobalEvent` and targeted APIs like `trigger_targets` with a `TargetedEvent`. You have full control over whether the entity should or should not have a target, as they are fully distinct at the type level.

However, there are a few problems:

- There is no type-level indication of whether a `GlobalEvent` supports buffered events or just non-targeted observer events.
- An `Event` on its own does literally nothing; it's just a shared trait required to make global observers accept both non-targeted and targeted events.
- If an event is both a `GlobalEvent` and a `TargetedEvent`, global observers again have ambiguity on whether an event has a target or not, undermining some of the benefits.
- The names are not ideal.

### 4. `Event` and `EntityEvent`

We can fix some of the problems of Alternative 3 by accepting that targeted events can also be used in non-targeted contexts, and simply having the `Event` and `EntityEvent` traits:

```rust
// For buffered events and non-targeted observer events
pub trait Event: Send + Sync + 'static {
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}

// For targeted observer events
pub trait EntityEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This is essentially identical to this PR, just without a dedicated `BufferedEvent`. The remaining major "problem" is that there is still zero type-level indication of whether an `Event` event *actually* supports the buffered API. This leads us to the solution proposed in this PR, using `Event`, `EntityEvent`, and `BufferedEvent`.

</details>

## Conclusion

The `Event` + `EntityEvent` + `BufferedEvent` split proposed in this PR aims to solve all the common problems with Bevy's current event model while keeping the "weirdness" factor minimal. It splits in terms of both the push vs. pull *and* global vs. targeted aspects, while maintaining a shared concept for an "event".

### Why I Like This

- The term "event" remains as a single concept for all the different kinds of events in Bevy.
- Despite all event types being "events", they use fundamentally different APIs. Instead of assuming that you can use an event type with any pattern (when only one is typically supported), you explicitly opt in to each one with dedicated traits.
- Using separate traits for each type of event helps with documentation and clearer function signatures.
- I can safely make assumptions on expected usage:
  - If I see that an event is an `EntityEvent`, I can assume that I can use `observe` on it and get targeted events.
  - If I see that an event is a `BufferedEvent`, I can assume that I can use `EventReader` to read events.
  - If I see both `EntityEvent` and `BufferedEvent`, I can assume that both APIs are supported.

In summary: this allows for a unified concept for events, while limiting the different ways to use them with opt-in traits. No more guesswork involved when using APIs.

### Problems?

- Because `BufferedEvent` implements `Event` (for more consistent semantics etc.), you can still use all buffered events for non-targeted observers. I think this is fine/good. The important part is that if you see that an event implements `BufferedEvent`, you know that the `EventReader`/`EventWriter` API should be supported. Whether it *also* supports other APIs is secondary.
- I currently only support `trigger_targets` for an `EntityEvent`. However, you can technically target components too, without targeting any entities. I consider that such a niche and advanced use case that it's not a huge problem to only support it for `EntityEvent`s, but we could also split `trigger_targets` into `trigger_entities` and `trigger_components` if we wanted to (or implement components as entities :P).
- You can still trigger an `EntityEvent` *without* targets. I consider this correct, since `Event` implements the non-targeted behavior, and it'd be weird if implementing another trait *removed* behavior. However, it does mean that global observers for entity events can technically return `Entity::PLACEHOLDER` again (since I got rid of the `Option<Entity>` added in #19440 for ergonomics). I think that's enough of an edge case that it's not a huge problem, but it is worth keeping in mind.
- ~~Deriving both `EntityEvent` and `BufferedEvent` for the same type currently duplicates the `Event` implementation, so you instead need to manually implement one of them.~~ Changed to always requiring `Event` to be derived.

## Related Work

There are plans to implement multi-event support for observers, especially for UI contexts. [Cart's example](https://github.com/bevyengine/bevy/issues/14649#issuecomment-2960402508) API looked like this:

```rust
// Truncated for brevity
trigger: Trigger<(
    OnAdd<Pressed>,
    OnRemove<Pressed>,
    OnAdd<InteractionDisabled>,
    OnRemove<InteractionDisabled>,
    OnInsert<Hovered>,
)>,
```

I believe this shouldn't be in conflict with this PR. If anything, this PR might *help* achieve the multi-event pattern for entity observers with fewer footguns: by statically enforcing that all of these events are `EntityEvent`s in the context of `EntityCommands::observe`, we can avoid misuse or weird cases where *some* events inside the trigger are targeted while others are not.
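Putting the pieces together: since the last "Problems?" bullet above says `Event` must always be derived explicitly, a type that wants both the targeted and buffered APIs derives all three traits. A minimal sketch under that assumption (the `Explosion` type is hypothetical, and I'm assuming the derives are all reachable from the prelude):

```rust
use bevy::prelude::*;

// Hypothetical event that opts into *both* APIs: it can be triggered at an
// entity target (`EntityEvent`) and sent through `EventWriter`/`EventReader`
// (`BufferedEvent`). `Event` itself must still be derived explicitly.
#[derive(Event, EntityEvent, BufferedEvent)]
struct Explosion {
    radius: f32,
}
```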

---

**7b1c9f192e** — Adopt consistent `FooSystems` naming convention for system sets (#18900)
# Objective

Fixes a part of #14274.

Bevy has an incredibly inconsistent naming convention for its system sets, both internally and across the ecosystem.

<img alt="System sets in Bevy" src="https://github.com/user-attachments/assets/d16e2027-793f-4ba4-9cc9-e780b14a5a1b" width="450" />

*Names of public system set types in Bevy*

Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are also a few `FooSystems` and `FooSet` types. In ecosystem crates on the other hand, `FooSet` is perhaps the most commonly used name in general. Conventions being so wildly inconsistent can make it harder for users to pick names for their own types, to search for system sets on docs.rs, or to even discern which types *are* system sets.

To rein in the inconsistency a bit and help unify the ecosystem, it would be good to establish a common recommended naming convention for system sets in Bevy itself, similar to how plugins are commonly suffixed with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming convention in first-party Bevy, we can softly nudge ecosystem crates to follow suit (for types where it makes sense to do so).

Choosing a naming convention is also relevant now, as the [`bevy_cli` recently adopted lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce naming for plugins and system sets, and the recommended naming used for system sets is still a bit open.

## Which Name To Use?

Now the contentious part: what naming convention should we actually adopt? This was discussed on the Bevy Discord at the end of last year, starting [here](https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789). `FooSet` and `FooSystems` were the clear favorites, with `FooSet` very narrowly winning an unofficial poll. However, it seems to me like the consensus was broadly moving towards `FooSystems` at the end and after the poll, with Cart ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708)) and later Alice ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533)) and also me being in favor of it.

Let's do a quick pros and cons list! Of course these are just what I thought of, so take it with a grain of salt.

`FooSet`:

- Pro: Nice and short!
- Pro: Used by many ecosystem crates.
- Pro: The `Set` suffix comes directly from the trait name `SystemSet`.
- Pro: Pairs nicely with existing APIs like `in_set` and `configure_sets`.
- Con: `Set` by itself doesn't actually indicate that it's related to systems *at all*, apart from the implemented trait. A set of what?
- Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex: `ContactSet`, `MeshSet`, `EnemySet`...

`FooSystems`:

- Pro: Very clearly indicates that the type represents a collection of systems. The actual core concept, system(s), is in the name.
- Pro: Parallels nicely with `FooPlugins` for plugin groups.
- Pro: Low risk of conflicts with other names or misunderstandings about what the type is.
- Pro: In most cases, reads *very* nicely and clearly. Ex: `PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and `AnimationSet`.
- Pro: Easy to search for on docs.rs.
- Con: Usually results in longer names.
- Con: Not yet as widely used.

Really the big problem with `FooSet` is that it doesn't actually describe what it is. It describes what *kind of thing* it is (a set of something), but not *what it is a set of*, unless you know the type or check its docs or implemented traits. `FooSystems` on the other hand is much more self-descriptive in this regard, at the cost of being a bit longer to type.

Ultimately, in some ways it comes down to preference and how you think of system sets. Personally, I was originally in favor of `FooSet`, but have been increasingly on the side of `FooSystems`, especially after seeing what the new names would actually look like in Avian and now Bevy. I prefer it because it usually reads better, is much more clearly related to groups of systems than `FooSet`, and overall *feels* more correct and natural to me in the long term.

For these reasons, and because Alice and Cart also seemed to share a preference for it when it was previously being discussed, I propose that we adopt a `FooSystems` naming convention where applicable.

## Solution

Rename Bevy's system set types to use a consistent `FooSystems` naming where applicable:

- `AccessibilitySystem` → `AccessibilitySystems`
- `GizmoRenderSystem` → `GizmoRenderSystems`
- `PickSet` → `PickingSystems`
- `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems`
- `TransformSystem` → `TransformSystems`
- `RemoteSet` → `RemoteSystems`
- `RenderSet` → `RenderSystems`
- `SpriteSystem` → `SpriteSystems`
- `StateTransitionSteps` → `StateTransitionSystems`
- `RenderUiSystem` → `RenderUiSystems`
- `UiSystem` → `UiSystems`
- `Animation` → `AnimationSystems`
- `AssetEvents` → `AssetEventSystems`
- `TrackAssets` → `AssetTrackingSystems`
- `UpdateGizmoMeshes` → `GizmoMeshSystems`
- `InputSystem` → `InputSystems`
- `InputFocusSet` → `InputFocusSystems`
- `ExtractMaterialsSet` → `MaterialExtractionSystems`
- `ExtractMeshesSet` → `MeshExtractionSystems`
- `RumbleSystem` → `RumbleSystems`
- `CameraUpdateSystem` → `CameraUpdateSystems`
- `ExtractAssetsSet` → `AssetExtractionSystems`
- `Update2dText` → `Text2dUpdateSystems`
- `TimeSystem` → `TimeSystems`
- `AudioPlaySet` → `AudioPlaybackSystems`
- `SendEvents` → `EventSenderSystems`
- `EventUpdates` → `EventUpdateSystems`

A lot of the names got slightly longer, but they are also a lot more consistent, and in my opinion the majority of them read much better. For a few of the names I took the liberty of rewording things a bit; definitely open to any further naming improvements.

There are still also cases where the `FooSystems` naming doesn't really make sense, and those I left alone. This primarily includes system sets like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`, or `TransitionSchedules<S>`, where the type has some special purpose and semantics.

## Todo

- [x] Should I keep all the old names as deprecated type aliases? I can do this, but to avoid wasting work I'd prefer to first reach consensus on whether these renames are even desired.
- [x] Migration guide
- [x] Release notes
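To illustrate the convention in user code, here is a minimal sketch of declaring and configuring a system set under the `FooSystems` naming, using stable `SystemSet` APIs (the `PhysicsSystems` set and `apply_gravity` system are hypothetical names for illustration):

```rust
use bevy::prelude::*;

// A user-defined system set following the `FooSystems` convention.
#[derive(SystemSet, Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct PhysicsSystems;

fn apply_gravity() { /* ... */ }

fn main() {
    App::new()
        // `configure_sets` and `in_set` pair with the set type as usual.
        .configure_sets(Update, PhysicsSystems)
        .add_systems(Update, apply_gravity.in_set(PhysicsSystems))
        .run();
}
```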

---

**e9a0ef49f9** — Rename bevy_platform_support to bevy_platform (#18813)
# Objective

The goal of `bevy_platform_support` is to provide a set of platform agnostic APIs, alongside platform-specific functionality. This is a high traffic crate (providing things like `HashMap` and `Instant`). Especially in light of https://github.com/bevyengine/bevy/discussions/18799, it deserves a friendlier / shorter name. Given that it hasn't had a full release yet, getting this change in before Bevy 0.16 makes sense.

## Solution

- Rename `bevy_platform_support` to `bevy_platform`.
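The migration is a mechanical path rename. A sketch of what a user import looks like before and after, using `HashMap` (one of the high-traffic items mentioned above; the `collections` module path comes from the `bevy_utils` migration further down this page):

```rust
// Before the rename:
// use bevy_platform_support::collections::HashMap;

// After the rename:
use bevy_platform::collections::HashMap;

fn main() {
    let mut scores: HashMap<&str, u32> = HashMap::default();
    scores.insert("player", 10);
    assert_eq!(scores.get("player"), Some(&10));
}
```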

---

**ecccd57417** — Generic system config (#17962)
# Objective

Prevents duplicate implementation between `IntoSystemConfigs` and `IntoSystemSetConfigs` using a generic, and adds a `NodeType` trait for more config flexibility (opening the door to implementing https://github.com/bevyengine/bevy/issues/14195?).

## Solution

Followed the writeup by @ItsDoot: https://hackmd.io/@doot/rJeefFHc1x

Removes `IntoSystemConfigs` and `IntoSystemSetConfigs`, instead using `IntoNodeConfigs` with generics.

## Testing

Pending

## Showcase

N/A

## Migration Guide

- `SystemSetConfigs` -> `NodeConfigs<InternedSystemSet>`
- `SystemConfigs` -> `NodeConfigs<ScheduleSystem>`
- `IntoSystemSetConfigs` -> `IntoNodeConfigs<InternedSystemSet, M>`
- `IntoSystemConfigs` -> `IntoNodeConfigs<ScheduleSystem, M>`

Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
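Per the migration guide above, code that was generic over `IntoSystemConfigs<M>` becomes generic over `IntoNodeConfigs<ScheduleSystem, M>`. A rough sketch of what that looks like at a call site; the module path and the helper function are assumptions on my part, only the trait and type names come from the migration guide:

```rust
use bevy::ecs::schedule::{IntoNodeConfigs, ScheduleSystem}; // path assumed
use bevy::prelude::*;

// Previously:
// fn add_gameplay_systems<M>(app: &mut App, systems: impl IntoSystemConfigs<M>)
fn add_gameplay_systems<M>(app: &mut App, systems: impl IntoNodeConfigs<ScheduleSystem, M>) {
    app.add_systems(Update, systems);
}
```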

---

**669d139c13** — Upgrade to wgpu v24 (#17542)
Didn't remove `WgpuWrapper`. Not sure if it's still needed or not.

## Testing

- Did you test these changes? If so, how? Example runner.
- Are there any parts that need more testing? Web (portable atomics thingy?), DXC.

## Migration Guide

- Bevy has upgraded to [wgpu v24](https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v2400-2025-01-15).
- When using the DirectX 12 rendering backend, the new priority system for choosing a shader compiler is as follows (see the sketch after this entry):
  - If the `WGPU_DX12_COMPILER` environment variable is set at runtime, it is used.
  - Else if the new `statically-linked-dxc` feature is enabled, a custom version of DXC will be statically linked into your app at compile time.
  - Else Bevy will look in the app's working directory for `dxcompiler.dll` and `dxil.dll` at runtime.
  - Else if they are missing, Bevy will fall back to FXC (not recommended).

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
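A sketch of exercising the first rule in the priority list above from inside the app itself. My assumptions here: wgpu accepts values like `dxc`/`fxc` for this variable and reads it during device initialization, so it must be set before the renderer starts:

```rust
fn main() {
    // Assumed: set before the renderer initializes (e.g. before adding
    // DefaultPlugins) so wgpu sees it when choosing the DX12 shader compiler.
    std::env::set_var("WGPU_DX12_COMPILER", "dxc");

    // App::new().add_plugins(DefaultPlugins).run();
}
```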

---

**9bc0ae33c3** — Move hashbrown and foldhash out of bevy_utils (#17460)
# Objective

- Contributes to #16877

## Solution

- Moved `hashbrown`, `foldhash`, and related types out of `bevy_utils` and into `bevy_platform_support`.
- Refactored the above to match the layout of these types in `std`.
- Updated crates as required.

## Testing

- CI

## Migration Guide

- The following items were moved out of `bevy_utils` and into `bevy_platform_support::hash`:
  - `FixedState`
  - `DefaultHasher`
  - `RandomState`
  - `FixedHasher`
  - `Hashed`
  - `PassHash`
  - `PassHasher`
  - `NoOpHash`
- The following items were moved out of `bevy_utils` and into `bevy_platform_support::collections`:
  - `HashMap`
  - `HashSet`
- `bevy_utils::hashbrown` has been removed. Instead, import from `bevy_platform_support::collections` _or_ take a dependency on `hashbrown` directly.
- `bevy_utils::Entry` has been removed. Instead, import from `bevy_platform_support::collections::hash_map` or `bevy_platform_support::collections::hash_set` as appropriate.
- All of the above equally apply to `bevy::utils` and `bevy::platform_support`.

## Notes

- I left `PreHashMap`, `PreHashMapExt`, and `TypeIdMap` in `bevy_utils` as they might be candidates for micro-crating. They can always be moved into `bevy_platform_support` at a later date if desired.
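In practice the migration is an import swap. A minimal sketch based directly on the list above (the map contents are illustrative):

```rust
// Before (removed):
// use bevy_utils::{Entry, HashMap, HashSet};

// After:
use bevy_platform_support::collections::hash_map::Entry;
use bevy_platform_support::collections::{HashMap, HashSet};

fn main() {
    let mut map: HashMap<String, u32> = HashMap::default();
    // `Entry` now comes from the `hash_map` module, mirroring `std`'s layout.
    if let Entry::Vacant(slot) = map.entry("hits".into()) {
        slot.insert(1);
    }
    let _set: HashSet<u32> = HashSet::default();
}
```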

---

**68c19defb6** — Readback: Add support for texture depth/array layers (#17479)
# Objective

Fixes #16963

## Solution

I am - no pun intended - somewhat out of my depth here, but this worked in my testing. The validation error is gone and the data read from the GPU looks sensible. I'd greatly appreciate it if somebody more familiar with the matter could double-check this.

## References

Relevant documentation in [WebGPU](https://gpuweb.github.io/gpuweb/#gputexelcopybufferlayout) and [wgpu](https://github.com/gfx-rs/wgpu/blob/v23/wgpu-types/src/lib.rs#L6350).

## Testing

<details><summary>Example code for testing</summary>
<p>

```rust
use bevy::{
    image::{self as bevy_image, TextureFormatPixelInfo},
    prelude::*,
    render::{
        render_asset::RenderAssetUsages,
        render_resource::{
            Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
        },
    },
};

fn main() {
    let mut app = App::new();
    app.add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, readback_system);
    app.run();
}

#[derive(Resource)]
struct ImageResource(Handle<Image>);

const TEXTURE_HEIGHT: u32 = 64;
const TEXTURE_WIDTH: u32 = 32;
const TEXTURE_LAYERS: u32 = 4;
const FORMAT: TextureFormat = TextureFormat::Rgba8Uint;

fn setup(mut commands: Commands, mut images: ResMut<Assets<Image>>) {
    let layer_pixel_count = (TEXTURE_WIDTH * TEXTURE_HEIGHT) as usize;
    let layer_size = layer_pixel_count * FORMAT.pixel_size();
    // Fill each layer with its own layer index so layers are distinguishable.
    let data: Vec<u8> = (0..TEXTURE_LAYERS as u8)
        .flat_map(|layer| (0..layer_size).map(move |_| layer))
        .collect();
    let image_size = data.len();
    println!("{image_size}");
    let image = Image {
        data,
        texture_descriptor: TextureDescriptor {
            label: Some("image"),
            size: Extent3d {
                width: TEXTURE_WIDTH,
                height: TEXTURE_HEIGHT,
                depth_or_array_layers: TEXTURE_LAYERS,
            },
            mip_level_count: 1,
            sample_count: 1,
            dimension: TextureDimension::D2,
            format: FORMAT,
            usage: TextureUsages::COPY_DST | TextureUsages::COPY_SRC,
            view_formats: &[],
        },
        sampler: bevy_image::ImageSampler::Default,
        texture_view_descriptor: None,
        asset_usage: RenderAssetUsages::RENDER_WORLD,
    };
    commands.insert_resource(ImageResource(images.add(image)));
}

fn readback_system(
    mut commands: Commands,
    keys: Res<ButtonInput<KeyCode>>,
    image: Res<ImageResource>,
) {
    if !keys.just_pressed(KeyCode::KeyR) {
        return;
    }
    commands
        .spawn(bevy::render::gpu_readback::Readback::Texture(
            image.0.clone(),
        ))
        .observe(
            |trigger: Trigger<bevy::render::gpu_readback::ReadbackComplete>,
             mut commands: Commands| {
                info!("readback complete");
                println!("{:#?}", &trigger.0);
                commands.entity(trigger.observer()).despawn();
            },
        );
}
```

</p>
</details>

---

**a371ee3019** — Remove tracing re-export from bevy_utils (#17161)
# Objective

- Contributes to #11478

## Solution

- Made `bevy_utils::tracing` `doc(hidden)`.
- Re-exported `tracing` from `bevy_log` for end-users.
- Added `tracing` directly to crates that need it.

## Testing

- CI

## Migration Guide

If you were importing `tracing` via `bevy::utils::tracing`, instead use `bevy::log::tracing`. Note that many items within `tracing` are also directly re-exported from `bevy::log` as well, so you may only need `bevy::log` for the most common items (e.g., `warn!`, `trace!`, etc.). This also applies to the `log_once!` family of macros.

## Notes

- While this doesn't reduce the line-count in `bevy_utils`, it further decouples the internal crates from `bevy_utils`, making its eventual removal more feasible in the future.
- I have just imported `tracing` as we do for all dependencies. However, a workspace dependency may be more appropriate for version management.
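A minimal sketch of the import change described in the migration guide above (the `noisy_system` function is illustrative):

```rust
// Before (now doc(hidden)):
// use bevy::utils::tracing::{trace, warn};

// After, via the re-export from bevy_log:
use bevy::log::tracing::{trace, warn};

fn noisy_system() {
    trace!("fine-grained detail");
    warn!("something looks off");
}
```

For the common macros, `use bevy::log::{trace, warn};` should also work, per the note about direct re-exports from `bevy::log`.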

---

**54a3fd7754** — Don't overalign aligned values in gpu_readback::align_byte_size (#17007)
# Objective

Fix alignment calculations in our rendering code. Fixes #16992.

The `gpu_readback::align_byte_size` function incorrectly rounds already-aligned values up to the next multiple of the alignment. If we assume the alignment to be 256 (because that's what wgpu says it is), the function would align 0 to 256, 256 to 512, etc.

## Solution

Forward `gpu_readback::align_byte_size` to `RenderDevice::align_copy_bytes_per_row` so we don't implement the same method twice. Simplify `RenderDevice::align_copy_bytes_per_row`.

## Testing

Ran the code provided in #16992 to see if the issue has been solved, and added a test to check that `align_copy_bytes_per_row` returns the correct values.
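For reference, a minimal sketch of the intended round-up-to-alignment behavior, with the cases from the objective above as assertions. This is an illustration, not the PR's exact code; the constant mirrors wgpu's 256-byte `COPY_BYTES_PER_ROW_ALIGNMENT`:

```rust
const ALIGN: u32 = 256; // wgpu's COPY_BYTES_PER_ROW_ALIGNMENT

/// Rounds `value` up to the next multiple of `ALIGN`, leaving
/// already-aligned values (including 0) unchanged.
fn align_byte_size(value: u32) -> u32 {
    value.div_ceil(ALIGN) * ALIGN
}

fn main() {
    assert_eq!(align_byte_size(0), 0); // the buggy version returned 256 here
    assert_eq!(align_byte_size(1), 256);
    assert_eq!(align_byte_size(256), 256); // the buggy version returned 512 here
    assert_eq!(align_byte_size(257), 512);
}
```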

---

**64efd08e13** — Prefer Display over Debug (#16112)
# Objective

Fixes #16104

## Solution

I removed all instances of `:?` and put them back one by one where it caused an error. I removed some `bevy_utils` helper functions that were only used in 2 places and don't add value. See: #11478

## Testing

CI should catch the mistakes.

## Migration Guide

`bevy::utils::{dbg,info,warn,error}` were removed. Use `bevy::utils::tracing::{debug,info,warn,error}` instead.

Co-authored-by: SpecificProtagonist <vincentjunge@posteo.net>
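A sketch of the helper migration described in the guide above, together with the Display-over-Debug preference the PR title refers to (the `report` function and its field are illustrative):

```rust
// Before (removed helpers):
// use bevy::utils::{info, warn};

// After:
use bevy::utils::tracing::{info, warn};

fn report(frame_time_ms: f64) {
    // Prefer `Display` formatting (`{}`) over `Debug` (`{:?}`) where a
    // user-facing representation exists.
    info!("frame time: {frame_time_ms} ms");
    if frame_time_ms > 16.0 {
        warn!("frame time exceeded budget: {frame_time_ms} ms");
    }
}
```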

---

**73d68d60bb** — Change GpuImage::size from UVec2 to Extent3d (#16815)
# Objective

When preparing `GpuImage`s, we currently discard the `depth_or_array_layers` of the `Image`'s size by converting it into a `UVec2`. Fixes #16715.

## Solution

Change `GpuImage::size` to `Extent3d`, and just pass that through when creating `GpuImage`s. Also copy the `aspect_ratio` and `size` (now `size_2d` for disambiguation from the field) functions from `Image` to `GpuImage` for ease of use with 2D textures. I originally copied all size-related functions (like `width` and `height`), but I think they are unnecessary considering how visible the `size` field on `GpuImage` is compared to `Image`.

## Testing

Tested via `cargo r -p ci` for everything except docs; when generating docs it keeps spitting out a ton of

```
error[E0554]: `#![feature]` may not be used on the stable release channel
 --> crates/bevy_dylib/src/lib.rs:1:21
  |
1 | #![cfg_attr(docsrs, feature(doc_auto_cfg))]
  |
```

Not sure why this is happening, but it also happens without my changes, so it's almost certainly some strange issue specific to my machine.

## Migration Guide

- `GpuImage::size` is now an `Extent3d`. To easily get the 2D size, use `size_2d()`.
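A sketch of what the migration looks like at a call site. Assumptions on my part: the `GpuImage` module path, and that `size_2d()` returns a `UVec2` like `Image::size` does:

```rust
use bevy::math::UVec2;
use bevy::render::render_resource::Extent3d;
use bevy::render::texture::GpuImage; // path assumed

fn describe(gpu_image: &GpuImage) {
    // `size` is now the full Extent3d, layers included.
    let size: Extent3d = gpu_image.size;
    println!("layers: {}", size.depth_or_array_layers);

    // For 2D-only code, the new helper mirrors `Image::size`.
    let size_2d: UVec2 = gpu_image.size_2d();
    println!("{}x{}", size_2d.x, size_2d.y);
}
```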

---

**40640fdf42** — Don't reëxport bevy_image from bevy_render (#16163)
# Objective

Fixes #15940

## Solution

Remove the `pub use` and fix the compile errors. Make `bevy_image` available as `bevy::image`.

## Testing

Feature Frenzy would be good here! Maybe I'll learn how to use it if I have some time this weekend, or maybe a reviewer can use it.

## Migration Guide

Use `bevy_image` instead of `bevy_render::texture` items.

Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
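A sketch of the import migration, using the `bevy::image` path mentioned in the solution above:

```rust
// Before (re-export removed):
// use bevy::render::texture::Image;

// After:
use bevy::image::Image;

fn blank() -> Image {
    Image::default()
}
```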

---

**2d1b4939d2** — Synchronize removed components with the render world (#15582)
# Objective

Fixes #15560. Fixes (most of) #15570.

Currently a lot of examples (and presumably some user code) depend on toggling certain render features by adding/removing a single component to an entity, e.g. `SpotLight` to toggle a light. Because of the retained render world this no longer works: Extract will add any new components, but when one is removed the entity persists unchanged in the render world.

## Solution

Add `SyncComponentPlugin<C: Component>` that registers `SyncToRenderWorld` as a required component for `C`, and adds a component hook that will clear all components from the render world entity when `C` is removed. We add this plugin to `ExtractComponentPlugin`, which fixes most instances of the problem. For custom extraction logic we can manually add `SyncComponentPlugin` for that component (see the sketch after this entry).

We also rename `WorldSyncPlugin` to `SyncWorldPlugin`, so we start a naming convention like all the `Extract` plugins.

In this PR I also fixed a bunch of breakage related to the retained render world, stemming from old code that assumed that `Entity` would be the same in both worlds. I found that using the `RenderEntity` wrapper instead of `Entity` in data structures when referring to render world entities makes intent much clearer, so I propose we make this an official pattern.

## Testing

Run examples like

```
cargo run --features pbr_multi_layer_material_textures --example clearcoat
cargo run --example volumetric_fog
```

and see that they work, and that the toggles work correctly. But really we should test every single example, as we might not even have caught all the breakage yet.

## Migration Guide

The retained render world notes should be updated to explain this edge case and `SyncComponentPlugin`.

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Trashtalk217 <trashtalk217@gmail.com>
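For the custom-extraction case mentioned above, registering the plugin might look like this sketch. Assumptions on my part: the module path, the `Default` impl on the plugin, and the hypothetical `CustomFog` component:

```rust
use bevy::prelude::*;
use bevy::render::sync_component::SyncComponentPlugin; // path assumed

// Hypothetical component with hand-written extraction logic.
#[derive(Component)]
struct CustomFog;

struct CustomFogPlugin;

impl Plugin for CustomFogPlugin {
    fn build(&self, app: &mut App) {
        // Ensures that removing `CustomFog` from a main-world entity also
        // clears the components of the corresponding render-world entity.
        app.add_plugins(SyncComponentPlugin::<CustomFog>::default());
    }
}
```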

---

**40c26f80aa** — Gpu readback (#15419)
Gpu readback (#15419)
# Objective Adds a new `Readback` component to request for readback of a `Handle<Image>` or `Handle<ShaderStorageBuffer>` to the CPU in a future frame. ## Solution We track the `Readback` component and allocate a target buffer to write the gpu resource into and map it back asynchronously, which then fires a trigger on the entity in the main world. This proccess is asynchronous, and generally takes a few frames. ## Showcase ```rust let mut buffer = ShaderStorageBuffer::from(vec![0u32; 16]); buffer.buffer_description.usage |= BufferUsages::COPY_SRC; let buffer = buffers.add(buffer); commands .spawn(Readback::buffer(buffer.clone())) .observe(|trigger: Trigger<ReadbackComplete>| { info!("Buffer data from previous frame {:?}", trigger.event()); }); ``` --------- Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com> Co-authored-by: IceSentry <IceSentry@users.noreply.github.com> |