b6bd205e8a
510 Commits
f7e112a3c9
Let query items borrow from query state to avoid needing to clone (#15396)
# Objective Improve the performance of `FilteredEntity(Ref|Mut)` and `Entity(Ref|Mut)Except`. `FilteredEntityRef` needs an `Access<ComponentId>` to determine what components it can access. There is one stored in the query state, but query items cannot borrow from the state, so it has to `clone()` the access for each row. Cloning the access involves memory allocations and can be expensive. ## Solution Let query items borrow from their query state. Add an `'s` lifetime to `WorldQuery::Item` and `WorldQuery::Fetch`, similar to the one in `SystemParam`, and provide `&'s Self::State` to the fetch so that it can borrow from the state. Unfortunately, there are a few cases where we currently return query items from temporary query states: the sorted iteration methods create a temporary state to query the sort keys, and the `EntityRef::components<Q>()` methods create a temporary state for their query. To allow these to continue to work with most `QueryData` implementations, introduce a new subtrait `ReleaseStateQueryData` that converts a `QueryItem<'w, 's>` to `QueryItem<'w, 'static>`, and is implemented for everything except `FilteredEntity(Ref|Mut)` and `Entity(Ref|Mut)Except`. `#[derive(QueryData)]` will generate `ReleaseStateQueryData` implementations that apply when all of the subqueries implement `ReleaseStateQueryData`. This PR does not actually change the implementation of `FilteredEntity(Ref|Mut)` or `Entity(Ref|Mut)Except`! That will be done as a follow-up PR so that the changes are easier to review. I have pushed the changes as chescock/bevy#5. ## Testing I ran performance traces of many_foxes, both against main and against chescock/bevy#5, both including #15282. These changes do appear to make generalized animation a bit faster: (Red is main, yellow is chescock/bevy#5)  ## Migration Guide The `WorldQuery::Item` and `WorldQuery::Fetch` associated types and the `QueryItem` and `ROQueryItem` type aliases now have an additional lifetime parameter corresponding to the `'s` lifetime in `Query`. Manual implementations of `WorldQuery` will need to update the method signatures to include the new lifetimes. Other uses of the types will need to be updated to include a lifetime parameter, although it can usually be passed as `'_`. In particular, `ROQueryItem` is used when implementing `RenderCommand`. Before: ```rust fn render<'w>( item: &P, view: ROQueryItem<'w, Self::ViewQuery>, entity: Option<ROQueryItem<'w, Self::ItemQuery>>, param: SystemParamItem<'w, '_, Self::Param>, pass: &mut TrackedRenderPass<'w>, ) -> RenderCommandResult; ``` After: ```rust fn render<'w>( item: &P, view: ROQueryItem<'w, '_, Self::ViewQuery>, entity: Option<ROQueryItem<'w, '_, Self::ItemQuery>>, param: SystemParamItem<'w, '_, Self::Param>, pass: &mut TrackedRenderPass<'w>, ) -> RenderCommandResult; ``` --- Methods on `QueryState` that take `&mut self` may now result in conflicting borrows if the query items capture the lifetime of the mutable reference. This affects `get()`, `iter()`, and others. To fix the errors, first call `QueryState::update_archetypes()`, and then replace a call `state.foo(world, param)` with `state.query_manual(world).foo_inner(param)`. Alternately, you may be able to restructure the code to call `state.query(world)` once and then make multiple calls using the `Query`. 
Before: ```rust let mut state: QueryState<_, _> = ...; let d1 = state.get(world, e1); let d2 = state.get(world, e2); // Error: cannot borrow `state` as mutable more than once at a time println!("{d1:?}"); println!("{d2:?}"); ``` After: ```rust let mut state: QueryState<_, _> = ...; state.update_archetypes(world); let d1 = state.get_manual(world, e1); let d2 = state.get_manual(world, e2); // OR state.update_archetypes(world); let d1 = state.query(world).get_inner(e1); let d2 = state.query(world).get_inner(e2); // OR let query = state.query(world); let d1 = query.get_inner(e1); let d2 = query.get_inner(e2); println!("{d1:?}"); println!("{d2:?}"); ```
38c3423693
Event Split: Event, EntityEvent, and BufferedEvent (#19647)
# Objective Closes #19564. The current `Event` trait looks like this: ```rust pub trait Event: Send + Sync + 'static { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` The `Event` trait is used by both buffered events (`EventReader`/`EventWriter`) and observer events. If they are observer events, they can optionally be targeted at specific `Entity`s or `ComponentId`s, and can even be propagated to other entities. However, there has long been a desire to split the trait semantically for a variety of reasons, see #14843, #14272, and #16031 for discussion. Some reasons include: - It's very uncommon to use a single event type as both a buffered event and targeted observer event. They are used differently and tend to have distinct semantics. - A common footgun is using buffered events with observers or event readers with observer events, as there is no type-level error that prevents this kind of misuse. - #19440 made `Trigger::target` return an `Option<Entity>`. This *seriously* hurts ergonomics for the general case of entity observers, as you need to `.unwrap()` each time. If we could statically determine whether the event is expected to have an entity target, this would be unnecessary. There's really two main ways that we can categorize events: push vs. pull (i.e. "observer event" vs. "buffered event") and global vs. targeted: | | Push | Pull | | ------------ | --------------- | --------------------------- | | **Global** | Global observer | `EventReader`/`EventWriter` | | **Targeted** | Entity observer | - | There are many ways to approach this, each with their tradeoffs. Ultimately, we kind of want to split events both ways: - A type-level distinction between observer events and buffered events, to prevent people from using the wrong kind of event in APIs - A statically designated entity target for observer events to avoid accidentally using untargeted events for targeted APIs This PR achieves these goals by splitting event traits into `Event`, `EntityEvent`, and `BufferedEvent`, with `Event` being the shared trait implemented by all events. ## `Event`, `EntityEvent`, and `BufferedEvent` `Event` is now a very simple trait shared by all events. ```rust pub trait Event: Send + Sync + 'static { // Required for observer APIs fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` You can call `trigger` for *any* event, and use a global observer for listening to the event. ```rust #[derive(Event)] struct Speak { message: String, } // ... app.add_observer(|trigger: On<Speak>| { println!("{}", trigger.message); }); // ... commands.trigger(Speak { message: "Y'all like these reworked events?".to_string(), }); ``` To allow an event to be targeted at entities and even propagated further, you can additionally implement the `EntityEvent` trait: ```rust pub trait EntityEvent: Event { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; } ``` This lets you call `trigger_targets`, and to use targeted observer APIs like `EntityCommands::observe`: ```rust #[derive(Event, EntityEvent)] #[entity_event(traversal = &'static ChildOf, auto_propagate)] struct Damage { amount: f32, } // ... let enemy = commands.spawn((Enemy, Health(100.0))).id(); // Spawn some armor as a child of the enemy entity. // When the armor takes damage, it will bubble the event up to the enemy. 
let armor_piece = commands .spawn((ArmorPiece, Health(25.0), ChildOf(enemy))) .observe(|trigger: On<Damage>, mut query: Query<&mut Health>| { // Note: `On::target` only exists because this is an `EntityEvent`. let mut health = query.get(trigger.target()).unwrap(); health.0 -= trigger.amount(); }); commands.trigger_targets(Damage { amount: 10.0 }, armor_piece); ``` > [!NOTE] > You *can* still also trigger an `EntityEvent` without targets using `trigger`. We probably *could* make this an either-or thing, but I'm not sure that's actually desirable. To allow an event to be used with the buffered API, you can implement `BufferedEvent`: ```rust pub trait BufferedEvent: Event {} ``` The event can then be used with `EventReader`/`EventWriter`: ```rust #[derive(Event, BufferedEvent)] struct Message(String); fn write_hello(mut writer: EventWriter<Message>) { writer.write(Message("I hope these examples are alright".to_string())); } fn read_messages(mut reader: EventReader<Message>) { // Process all buffered events of type `Message`. for Message(message) in reader.read() { println!("{message}"); } } ``` In summary: - Need a basic event you can trigger and observe? Derive `Event`! - Need the event to be targeted at an entity? Derive `EntityEvent`! - Need the event to be buffered and support the `EventReader`/`EventWriter` API? Derive `BufferedEvent`! ## Alternatives I'll now cover some of the alternative approaches I have considered and briefly explored. I made this section collapsible since it ended up being quite long :P <details> <summary>Expand this to see alternatives</summary> ### 1. Unified `Event` Trait One option is not to have *three* separate traits (`Event`, `EntityEvent`, `BufferedEvent`), and to instead just use associated constants on `Event` to determine whether an event supports targeting and buffering or not: ```rust pub trait Event: Send + Sync + 'static { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; const TARGETED: bool = false; const BUFFERED: bool = false; fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` Methods can then use bounds like `where E: Event<TARGETED = true>` or `where E: Event<BUFFERED = true>` to limit APIs to specific kinds of events. This would keep everything under one `Event` trait, but I don't think it's necessarily a good idea. It makes APIs harder to read, and docs can't easily refer to specific types of events. You can also create weird invariants: what if you specify `TARGETED = false`, but have `Traversal` and/or `AUTO_PROPAGATE` enabled? ### 2. `Event` and `Trigger` Another option is to only split the traits between buffered events and observer events, since that is the main thing people have been asking for, and they have the largest API difference. If we did this, I think we would need to make the terms *clearly* separate. We can't really use `Event` and `BufferedEvent` as the names, since it would be strange that `BufferedEvent` doesn't implement `Event`. Something like `ObserverEvent` and `BufferedEvent` could work, but it'd be more verbose. For this approach, I would instead keep `Event` for the current `EventReader`/`EventWriter` API, and call the observer event a `Trigger`, since the "trigger" terminology is already used in the observer context within Bevy (both as a noun and a verb). 
This is also what a long [bikeshed on Discord](https://discord.com/channels/691052431525675048/749335865876021248/1298057661878898791) seemed to land on at the end of last year. ```rust // For `EventReader`/`EventWriter` pub trait Event: Send + Sync + 'static {} // For observers pub trait Trigger: Send + Sync + 'static { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; const TARGETED: bool = false; fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } ``` The problem is that "event" is just a really good term for something that "happens". Observers are rapidly becoming the more prominent API, so it'd be weird to give them the `Trigger` name and leave the good `Event` name for the less common API. So, even though a split like this seems neat on the surface, I think it ultimately wouldn't really work. We want to keep the `Event` name for observer events, and there is no good alternative for the buffered variant. (`Message` was suggested, but saying stuff like "sends a collision message" is weird.) ### 3. `GlobalEvent` + `TargetedEvent` What if instead of focusing on the buffered vs. observed split, we *only* make a distinction between global and targeted events? ```rust // A shared event trait to allow global observers to work pub trait Event: Send + Sync + 'static { fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } // For buffered events and non-targeted observer events pub trait GlobalEvent: Event {} // For targeted observer events pub trait TargetedEvent: Event { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; } ``` This is actually the first approach I implemented, and it has the neat characteristic that you can only use non-targeted APIs like `trigger` with a `GlobalEvent` and targeted APIs like `trigger_targets` with a `TargetedEvent`. You have full control over whether the entity should or should not have a target, as they are fully distinct at the type-level. However, there's a few problems: - There is no type-level indication of whether a `GlobalEvent` supports buffered events or just non-targeted observer events - An `Event` on its own does literally nothing, it's just a shared trait required to make global observers accept both non-targeted and targeted events - If an event is both a `GlobalEvent` and `TargetedEvent`, global observers again have ambiguity on whether an event has a target or not, undermining some of the benefits - The names are not ideal ### 4. `Event` and `EntityEvent` We can fix some of the problems of Alternative 3 by accepting that targeted events can also be used in non-targeted contexts, and simply having the `Event` and `EntityEvent` traits: ```rust // For buffered events and non-targeted observer events pub trait Event: Send + Sync + 'static { fn register_component_id(world: &mut World) -> ComponentId { ... } fn component_id(world: &World) -> Option<ComponentId> { ... } } // For targeted observer events pub trait EntityEvent: Event { type Traversal: Traversal<Self>; const AUTO_PROPAGATE: bool = false; } ``` This is essentially identical to this PR, just without a dedicated `BufferedEvent`. The remaining major "problem" is that there is still zero type-level indication of whether an `Event` event *actually* supports the buffered API. This leads us to the solution proposed in this PR, using `Event`, `EntityEvent`, and `BufferedEvent`. 
</details> ## Conclusion The `Event` + `EntityEvent` + `BufferedEvent` split proposed in this PR aims to solve all the common problems with Bevy's current event model while keeping the "weirdness" factor minimal. It splits in terms of both the push vs. pull *and* global vs. targeted aspects, while maintaining a shared concept for an "event". ### Why I Like This - The term "event" remains as a single concept for all the different kinds of events in Bevy. - Despite all event types being "events", they use fundamentally different APIs. Instead of assuming that you can use an event type with any pattern (when only one is typically supported), you explicitly opt in to each one with dedicated traits. - Using separate traits for each type of event helps with documentation and clearer function signatures. - I can safely make assumptions on expected usage. - If I see that an event is an `EntityEvent`, I can assume that I can use `observe` on it and get targeted events. - If I see that an event is a `BufferedEvent`, I can assume that I can use `EventReader` to read events. - If I see both `EntityEvent` and `BufferedEvent`, I can assume that both APIs are supported. In summary: This allows for a unified concept for events, while limiting the different ways to use them with opt-in traits. No more guess-work involved when using APIs. ### Problems? - Because `BufferedEvent` implements `Event` (for more consistent semantics etc.), you can still use all buffered events for non-targeted observers. I think this is fine/good. The important part is that if you see that an event implements `BufferedEvent`, you know that the `EventReader`/`EventWriter` API should be supported. Whether it *also* supports other APIs is secondary. - I currently only support `trigger_targets` for an `EntityEvent`. However, you can technically target components too, without targeting any entities. I consider that such a niche and advanced use case that it's not a huge problem to only support it for `EntityEvent`s, but we could also split `trigger_targets` into `trigger_entities` and `trigger_components` if we wanted to (or implement components as entities :P). - You can still trigger an `EntityEvent` *without* targets. I consider this correct, since `Event` implements the non-targeted behavior, and it'd be weird if implementing another trait *removed* behavior. However, it does mean that global observers for entity events can technically return `Entity::PLACEHOLDER` again (since I got rid of the `Option<Entity>` added in #19440 for ergonomics). I think that's enough of an edge case that it's not a huge problem, but it is worth keeping in mind. - ~~Deriving both `EntityEvent` and `BufferedEvent` for the same type currently duplicates the `Event` implementation, so you instead need to manually implement one of them.~~ Changed to always requiring `Event` to be derived. ## Related Work There are plans to implement multi-event support for observers, especially for UI contexts. [Cart's example](https://github.com/bevyengine/bevy/issues/14649#issuecomment-2960402508) API looked like this: ```rust // Truncated for brevity trigger: Trigger<( OnAdd<Pressed>, OnRemove<Pressed>, OnAdd<InteractionDisabled>, OnRemove<InteractionDisabled>, OnInsert<Hovered>, )>, ``` I believe this shouldn't be in conflict with this PR. 
If anything, this PR might *help* achieve the multi-event pattern for entity observers with fewer footguns: by statically enforcing that all of these events are `EntityEvent`s in the context of `EntityCommands::observe`, we can avoid misuse or weird cases where *some* events inside the trigger are targeted while others are not. |
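A hedged sketch of the combined case described above (the `Collision` event, its field, and the `Player` component are hypothetical, and the derives are assumed to be in scope via `bevy::prelude`): deriving `Event` plus both opt-in traits lets one type be used with targeted observers and with the buffered `EventReader`/`EventWriter` API.

```rust
use bevy::prelude::*;

// `Event` is always derived explicitly; `EntityEvent` and `BufferedEvent`
// opt this type in to the targeted-observer and buffered APIs respectively.
#[derive(Event, EntityEvent, BufferedEvent)]
struct Collision {
    impulse: f32,
}

#[derive(Component)]
struct Player;

// Buffered API, enabled by `BufferedEvent`.
fn write_collisions(mut writer: EventWriter<Collision>) {
    writer.write(Collision { impulse: 3.0 });
}

// Targeted observer API, enabled by `EntityEvent`.
fn trigger_on_player(mut commands: Commands, player: Query<Entity, With<Player>>) {
    if let Ok(player) = player.single() {
        commands.trigger_targets(Collision { impulse: 3.0 }, player);
    }
}
```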
e5dc177b4b
Rename Trigger to On (#19596)
# Objective Currently, the observer API looks like this: ```rust app.add_observer(|trigger: Trigger<Explode>| { info!("Entity {} exploded!", trigger.target()); }); ``` Future plans for observers also include "multi-event observers" with a trigger that looks like this (see [Cart's example](https://github.com/bevyengine/bevy/issues/14649#issuecomment-2960402508)): ```rust trigger: Trigger<( OnAdd<Pressed>, OnRemove<Pressed>, OnAdd<InteractionDisabled>, OnRemove<InteractionDisabled>, OnInsert<Hovered>, )>, ``` In scenarios like this, there is a lot of repetition of `On`. These are expected to be very high-traffic APIs especially in UI contexts, so ergonomics and readability are critical. By renaming `Trigger` to `On`, we can make these APIs read more cleanly and get rid of the repetition: ```rust app.add_observer(|trigger: On<Explode>| { info!("Entity {} exploded!", trigger.target()); }); ``` ```rust trigger: On<( Add<Pressed>, Remove<Pressed>, Add<InteractionDisabled>, Remove<InteractionDisabled>, Insert<Hovered>, )>, ``` Names like `On<Add<Pressed>>` emphasize the actual event listener nature more than `Trigger<OnAdd<Pressed>>`, and look cleaner. This *also* frees up the `Trigger` name if we want to use it for the observer event type, splitting them out from buffered events (bikeshedding this is out of scope for this PR though). For prior art: [`bevy_eventlistener`](https://github.com/aevyrie/bevy_eventlistener) used [`On`](https://docs.rs/bevy_eventlistener/latest/bevy_eventlistener/event_listener/struct.On.html) for its event listener type. Though in our case, the observer is the event listener, and `On` is just a type containing information about the triggered event. ## Solution Steal from `bevy_event_listener` by @aevyrie and use `On`. - Rename `Trigger` to `On` - Rename `OnAdd` to `Add` - Rename `OnInsert` to `Insert` - Rename `OnReplace` to `Replace` - Rename `OnRemove` to `Remove` - Rename `OnDespawn` to `Despawn` ## Discussion ### Naming Conflicts?? Using a name like `Add` might initially feel like a very bad idea, since it risks conflict with `core::ops::Add`. However, I don't expect this to be a big problem in practice. - You rarely need to actually implement the `Add` trait, especially in modules that would use the Bevy ECS. - In the rare cases where you *do* get a conflict, it is very easy to fix by just disambiguating, for example using `ops::Add`. - The `Add` event is a struct while the `Add` trait is a trait (duh), so the compiler error should be very obvious. For the record, renaming `OnAdd` to `Add`, I got exactly *zero* errors or conflicts within Bevy itself. But this is of course not entirely representative of actual projects *using* Bevy. You might then wonder, why not use `Added`? This would conflict with the `Added` query filter, so it wouldn't work. Additionally, the current naming convention for observer events does not use past tense. ### Documentation This does make documentation slightly more awkward when referring to `On` or its methods. Previous docs often referred to `Trigger::target` or "sends a `Trigger`" (which is... a bit strange anyway), which would now be `On::target` and "sends an observer `Event`". You can see the diff in this PR to see some of the effects. I think it should be fine though, we may just need to reword more documentation to read better. |
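To make the disambiguation point above concrete, a minimal sketch using a hypothetical `Meters` newtype: the observer event is now named `Add<T>`, so code that also wants the arithmetic trait simply names it through `ops`.

```rust
use core::ops;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Meters(f32);

// Spelling the trait as `ops::Add` avoids any clash with the `Add` event.
impl ops::Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}
```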
064e5e48b4
Remove entity placeholder from observers (#19440)
# Objective `Entity::PLACEHOLDER` acts as a magic number that will *probably* never really exist, but it certainly could. And, `Entity` has a niche, so the only reason to use `PLACEHOLDER` is as an alternative to `MaybeUninit` that trades safety risks for logic risks. As a result, Bevy has generally advised against using `PLACEHOLDER`, but we still use it a lot internally. This PR starts removing internal uses of it, starting from observers. ## Solution Change all trigger-target-related types from `Entity` to `Option<Entity>`. A small migration guide is to come. ## Testing CI ## Future Work This turned a lot of code from ```rust trigger.target() ``` to ```rust trigger.target().unwrap() ``` The extra panic is no worse than before; it's just earlier than panicking after passing the placeholder to something else. But this is kinda annoying. I would like to add a `TriggerMode` or something to `Event` that would restrict what kinds of targets can be used for that event. Many events, like `Removed` etc., are always triggered with a target. We can make those have a way to assume `Some`, etc. But I wanted to save that for a future PR.
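A small, hedged illustration of the ergonomics described above (the `Bomb` component and `Explode` event are hypothetical): with trigger targets now `Option<Entity>`, an entity-scoped observer unwraps the target it knows must exist.

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Bomb;

#[derive(Event)]
struct Explode;

fn setup(mut commands: Commands) {
    commands
        .spawn(Bomb)
        .observe(|trigger: Trigger<Explode>, mut commands: Commands| {
            // `target()` now returns `Option<Entity>`; this observer is
            // attached to an entity, so a target is always expected here.
            let entity = trigger.target().expect("Explode is always targeted");
            commands.entity(entity).despawn();
        });
}
```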
c617fc49ae
fix distinct directional lights per view (#19147)
# Objective after #15156 it seems like using distinct directional lights on different views is broken (and will probably break spotlights too). fix them ## Solution the reason is a bit hairy so with an example: - camera 0 on layer 0 - camera 1 on layer 1 - dir light 0 on layer 0 (2 cascades) - dir light 1 on layer 1 (2 cascades) in render/lights.rs: - outside of any view loop, - we count the total number of shadow casting directional light cascades (4) and assign an incrementing `depth_texture_base_index` for each (0-1 for one light, 2-3 for the other, depending on iteration order) (line 1034) - allocate a texture array for the total number of cascades plus spotlight maps (4) (line 1106) - in the view loop, for directional lights we - skip lights that don't intersect on renderlayers (line 1440) - assign an incrementing texture layer to each light/cascade starting from 0 (resets to 0 per view) (assigning 0 and 1 each time for the 2 cascades of the intersecting light) (line 1509, init at 1421) then in the rendergraph: - camera 0 renders the shadow map for light 0 to texture indices 0 and 1 - camera 0 renders using shadows from the `depth_texture_base_index` (maybe 0-1, maybe 2-3 depending on the iteration order) - camera 1 renders the shadow map for light 1 to texture indices 0 and 1 - camera 0 renders using shadows from the `depth_texture_base_index` (maybe 0-1, maybe 2-3 depending on the iteration order) issues: - one of the views uses empty shadow maps (bug) - we allocated a texture layer per cascade per light, even though not all lights are used on all views (just inefficient) - I think we're allocating texture layers even for lights with `shadows_enabled: false` (just inefficient) solution: - calculate upfront the view with the largest number of directional cascades - allocate this many layers (plus layers for spotlights) in the texture array - keep using texture layers 0..n in the per-view loop, but build GpuLights.gpu_directional_lights within the loop too so it refers to the same layers we render to nice side effects: - we can now use `max_texture_array_layers / MAX_CASCADES_PER_LIGHT` shadow-casting directional lights per view, rather than overall. - we can remove the `GpuDirectionalLight::skip` field, since the gpu lights struct is constructed per view a simpler approach would be to keep everything the same, and just increment the texture layer index in the view loop even for non-intersecting lights. this pr reduces the total shadowmap vram used as well and isn't *much* extra complexity. but if we want something less risky/intrusive for 16.1 that would be the way. ## Testing i edited the split screen example to put separate lights on layer 1 and layer 2, and put the plane and fox on both layers (using lots of unrelated code for render layer propagation from #17575). without the fix the directional shadows will only render on one of the top 2 views even though there are directional lights on both layers. ```rs //! Renders two cameras to the same window to accomplish "split screen". 
use std::f32::consts::PI; use bevy::{ pbr::CascadeShadowConfigBuilder, prelude::*, render:📷:Viewport, window::WindowResized, }; use bevy_render::view::RenderLayers; fn main() { App::new() .add_plugins(DefaultPlugins) .add_plugins(HierarchyPropagatePlugin::<RenderLayers>::default()) .add_systems(Startup, setup) .add_systems(Update, (set_camera_viewports, button_system)) .run(); } /// set up a simple 3D scene fn setup( mut commands: Commands, asset_server: Res<AssetServer>, mut meshes: ResMut<Assets<Mesh>>, mut materials: ResMut<Assets<StandardMaterial>>, ) { let all_layers = RenderLayers::layer(1).with(2).with(3).with(4); // plane commands.spawn(( Mesh3d(meshes.add(Plane3d::default().mesh().size(100.0, 100.0))), MeshMaterial3d(materials.add(Color::srgb(0.3, 0.5, 0.3))), all_layers.clone() )); commands.spawn(( SceneRoot( asset_server.load(GltfAssetLabel::Scene(0).from_asset("models/animated/Fox.glb")), ), Propagate(all_layers.clone()), )); // Light commands.spawn(( Transform::from_rotation(Quat::from_euler(EulerRot::ZYX, 0.0, 1.0, -PI / 4.)), DirectionalLight { shadows_enabled: true, ..default() }, CascadeShadowConfigBuilder { num_cascades: if cfg!(all( feature = "webgl2", target_arch = "wasm32", not(feature = "webgpu") )) { // Limited to 1 cascade in WebGL 1 } else { 2 }, first_cascade_far_bound: 200.0, maximum_distance: 280.0, ..default() } .build(), RenderLayers::layer(1), )); commands.spawn(( Transform::from_rotation(Quat::from_euler(EulerRot::ZYX, 0.0, 1.0, -PI / 4.)), DirectionalLight { shadows_enabled: true, ..default() }, CascadeShadowConfigBuilder { num_cascades: if cfg!(all( feature = "webgl2", target_arch = "wasm32", not(feature = "webgpu") )) { // Limited to 1 cascade in WebGL 1 } else { 2 }, first_cascade_far_bound: 200.0, maximum_distance: 280.0, ..default() } .build(), RenderLayers::layer(2), )); // Cameras and their dedicated UI for (index, (camera_name, camera_pos)) in [ ("Player 1", Vec3::new(0.0, 200.0, -150.0)), ("Player 2", Vec3::new(150.0, 150., 50.0)), ("Player 3", Vec3::new(100.0, 150., -150.0)), ("Player 4", Vec3::new(-100.0, 80., 150.0)), ] .iter() .enumerate() { let camera = commands .spawn(( Camera3d::default(), Transform::from_translation(*camera_pos).looking_at(Vec3::ZERO, Vec3::Y), Camera { // Renders cameras with different priorities to prevent ambiguities order: index as isize, ..default() }, CameraPosition { pos: UVec2::new((index % 2) as u32, (index / 2) as u32), }, RenderLayers::layer(index+1) )) .id(); // Set up UI commands .spawn(( UiTargetCamera(camera), Node { width: Val::Percent(100.), height: Val::Percent(100.), ..default() }, )) .with_children(|parent| { parent.spawn(( Text::new(*camera_name), Node { position_type: PositionType::Absolute, top: Val::Px(12.), left: Val::Px(12.), ..default() }, )); buttons_panel(parent); }); } fn buttons_panel(parent: &mut ChildSpawnerCommands) { parent .spawn(Node { position_type: PositionType::Absolute, width: Val::Percent(100.), height: Val::Percent(100.), display: Display::Flex, flex_direction: FlexDirection::Row, justify_content: JustifyContent::SpaceBetween, align_items: AlignItems::Center, padding: UiRect::all(Val::Px(20.)), ..default() }) .with_children(|parent| { rotate_button(parent, "<", Direction::Left); rotate_button(parent, ">", Direction::Right); }); } fn rotate_button(parent: &mut ChildSpawnerCommands, caption: &str, direction: Direction) { parent .spawn(( RotateCamera(direction), Button, Node { width: Val::Px(40.), height: Val::Px(40.), border: UiRect::all(Val::Px(2.)), justify_content: 
JustifyContent::Center, align_items: AlignItems::Center, ..default() }, BorderColor(Color::WHITE), BackgroundColor(Color::srgb(0.25, 0.25, 0.25)), )) .with_children(|parent| { parent.spawn(Text::new(caption)); }); } } #[derive(Component)] struct CameraPosition { pos: UVec2, } #[derive(Component)] struct RotateCamera(Direction); enum Direction { Left, Right, } fn set_camera_viewports( windows: Query<&Window>, mut resize_events: EventReader<WindowResized>, mut query: Query<(&CameraPosition, &mut Camera)>, ) { // We need to dynamically resize the camera's viewports whenever the window size changes // so then each camera always takes up half the screen. // A resize_event is sent when the window is first created, allowing us to reuse this system for initial setup. for resize_event in resize_events.read() { let window = windows.get(resize_event.window).unwrap(); let size = window.physical_size() / 2; for (camera_position, mut camera) in &mut query { camera.viewport = Some(Viewport { physical_position: camera_position.pos * size, physical_size: size, ..default() }); } } } fn button_system( interaction_query: Query< (&Interaction, &ComputedNodeTarget, &RotateCamera), (Changed<Interaction>, With<Button>), >, mut camera_query: Query<&mut Transform, With<Camera>>, ) { for (interaction, computed_target, RotateCamera(direction)) in &interaction_query { if let Interaction::Pressed = *interaction { // Since TargetCamera propagates to the children, we can use it to find // which side of the screen the button is on. if let Some(mut camera_transform) = computed_target .camera() .and_then(|camera| camera_query.get_mut(camera).ok()) { let angle = match direction { Direction::Left => -0.1, Direction::Right => 0.1, }; camera_transform.rotate_around(Vec3::ZERO, Quat::from_axis_angle(Vec3::Y, angle)); } } } } use std::marker::PhantomData; use bevy::{ app::{App, Plugin, Update}, ecs::query::QueryFilter, prelude::{ Changed, Children, Commands, Component, Entity, Local, Query, RemovedComponents, SystemSet, With, Without, }, }; /// Causes the inner component to be added to this entity and all children. /// A child with a Propagate<C> component of it's own will override propagation from /// that point in the tree #[derive(Component, Clone, PartialEq)] pub struct Propagate<C: Component + Clone + PartialEq>(pub C); /// Internal struct for managing propagation #[derive(Component, Clone, PartialEq)] pub struct Inherited<C: Component + Clone + PartialEq>(pub C); /// Stops the output component being added to this entity. /// Children will still inherit the component from this entity or its parents #[derive(Component, Default)] pub struct PropagateOver<C: Component + Clone + PartialEq>(PhantomData<fn() -> C>); /// Stops the propagation at this entity. Children will not inherit the component. 
#[derive(Component, Default)] pub struct PropagateStop<C: Component + Clone + PartialEq>(PhantomData<fn() -> C>); pub struct HierarchyPropagatePlugin<C: Component + Clone + PartialEq, F: QueryFilter = ()> { _p: PhantomData<fn() -> (C, F)>, } impl<C: Component + Clone + PartialEq, F: QueryFilter> Default for HierarchyPropagatePlugin<C, F> { fn default() -> Self { Self { _p: Default::default(), } } } #[derive(SystemSet, Clone, PartialEq, PartialOrd, Ord)] pub struct PropagateSet<C: Component + Clone + PartialEq> { _p: PhantomData<fn() -> C>, } impl<C: Component + Clone + PartialEq> std::fmt::Debug for PropagateSet<C> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("PropagateSet") .field("_p", &self._p) .finish() } } impl<C: Component + Clone + PartialEq> Eq for PropagateSet<C> {} impl<C: Component + Clone + PartialEq> std:#️⃣:Hash for PropagateSet<C> { fn hash<H: std:#️⃣:Hasher>(&self, state: &mut H) { self._p.hash(state); } } impl<C: Component + Clone + PartialEq> Default for PropagateSet<C> { fn default() -> Self { Self { _p: Default::default(), } } } impl<C: Component + Clone + PartialEq, F: QueryFilter + 'static> Plugin for HierarchyPropagatePlugin<C, F> { fn build(&self, app: &mut App) { app.add_systems( Update, ( update_source::<C, F>, update_stopped::<C, F>, update_reparented::<C, F>, propagate_inherited::<C, F>, propagate_output::<C, F>, ) .chain() .in_set(PropagateSet::<C>::default()), ); } } pub fn update_source<C: Component + Clone + PartialEq, F: QueryFilter>( mut commands: Commands, changed: Query<(Entity, &Propagate<C>), (Changed<Propagate<C>>, Without<PropagateStop<C>>)>, mut removed: RemovedComponents<Propagate<C>>, ) { for (entity, source) in &changed { commands .entity(entity) .try_insert(Inherited(source.0.clone())); } for removed in removed.read() { if let Ok(mut commands) = commands.get_entity(removed) { commands.remove::<(Inherited<C>, C)>(); } } } pub fn update_stopped<C: Component + Clone + PartialEq, F: QueryFilter>( mut commands: Commands, q: Query<Entity, (With<Inherited<C>>, F, With<PropagateStop<C>>)>, ) { for entity in q.iter() { let mut cmds = commands.entity(entity); cmds.remove::<Inherited<C>>(); } } pub fn update_reparented<C: Component + Clone + PartialEq, F: QueryFilter>( mut commands: Commands, moved: Query< (Entity, &ChildOf, Option<&Inherited<C>>), ( Changed<ChildOf>, Without<Propagate<C>>, Without<PropagateStop<C>>, F, ), >, parents: Query<&Inherited<C>>, ) { for (entity, parent, maybe_inherited) in &moved { if let Ok(inherited) = parents.get(parent.parent()) { commands.entity(entity).try_insert(inherited.clone()); } else if maybe_inherited.is_some() { commands.entity(entity).remove::<(Inherited<C>, C)>(); } } } pub fn propagate_inherited<C: Component + Clone + PartialEq, F: QueryFilter>( mut commands: Commands, changed: Query< (&Inherited<C>, &Children), (Changed<Inherited<C>>, Without<PropagateStop<C>>, F), >, recurse: Query< (Option<&Children>, Option<&Inherited<C>>), (Without<Propagate<C>>, Without<PropagateStop<C>>, F), >, mut to_process: Local<Vec<(Entity, Option<Inherited<C>>)>>, mut removed: RemovedComponents<Inherited<C>>, ) { // gather changed for (inherited, children) in &changed { to_process.extend( children .iter() .map(|child| (child, Some(inherited.clone()))), ); } // and removed for entity in removed.read() { if let Ok((Some(children), _)) = recurse.get(entity) { to_process.extend(children.iter().map(|child| (child, None))) } } // propagate while let Some((entity, maybe_inherited)) = 
(*to_process).pop() { let Ok((maybe_children, maybe_current)) = recurse.get(entity) else { continue; }; if maybe_current == maybe_inherited.as_ref() { continue; } if let Some(children) = maybe_children { to_process.extend( children .iter() .map(|child| (child, maybe_inherited.clone())), ); } if let Some(inherited) = maybe_inherited { commands.entity(entity).try_insert(inherited.clone()); } else { commands.entity(entity).remove::<(Inherited<C>, C)>(); } } } pub fn propagate_output<C: Component + Clone + PartialEq, F: QueryFilter>( mut commands: Commands, changed: Query< (Entity, &Inherited<C>, Option<&C>), (Changed<Inherited<C>>, Without<PropagateOver<C>>, F), >, ) { for (entity, inherited, maybe_current) in &changed { if maybe_current.is_some_and(|c| &inherited.0 == c) { continue; } commands.entity(entity).try_insert(inherited.0.clone()); } } ``` |
b866bb4254
Remove Shader weak_handles from bevy_pbr (excluding meshlets). (#19365)
# Objective - Related to #19024 ## Solution - Use the new `load_shader_library` macro for the shader libraries and `embedded_asset`/`load_embedded_asset` for the "shader binaries" in `bevy_pbr` (excluding meshlets). ## Testing - `atmosphere` example still works - `fog` example still works - `decal` example still works P.S. I don't think this needs a migration guide. Technically users could be using the `pub` weak handles, but there's no actual good use for them, so omitting it seems fine. Alternatively, we could mix this in with the migration guide notes for #19137. |
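For context, a rough sketch of the pattern being adopted (the plugin, shader file name, and exact macro invocations here are assumptions, not code from this PR): shaders are registered as embedded assets and loaded on demand instead of being exposed as `pub` weak handles.

```rust
use bevy::asset::{embedded_asset, load_embedded_asset};
use bevy::prelude::*;
use bevy::render::render_resource::Shader;

struct MyMaterialPlugin;

impl Plugin for MyMaterialPlugin {
    fn build(&self, app: &mut App) {
        // Register the shader source file as an embedded asset rather than
        // exposing a weak `Handle<Shader>` constant.
        embedded_asset!(app, "my_material.wgsl");
    }
}

// Wherever the pipeline is built, the handle is loaded on demand.
fn shader_handle(asset_server: &AssetServer) -> Handle<Shader> {
    load_embedded_asset!(asset_server, "my_material.wgsl")
}
```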
17914943a3
Fix spot light shadow glitches (#19273)
# Objective Spot light shadows are still broken after fixing point lights in #19265 ## Solution Fix spot lights in the same way, just using the spot light specific visible entities component. I also changed the query to be directly in the render world instead of being extracted to be more accurate. ## Testing Tested with the same code but changing `PointLight` to `SpotLight`. |
4d1b045855
Fix point light shadow glitches (#19265)
# Objective Fixes #18945 ## Solution Entities that are not visible in any view (camera or light) get their render meshes removed. When they become visible somewhere again, the meshes get recreated and assigned possibly different ids. Point/spot light visible entities weren't cleared when the lights themselves went out of view, which caused them to try to queue these fake visible entities for rendering every frame. The shadow phase cache usually flushes non-visible entities, but because of this bug it never flushed them and continued to queue meshes with outdated ids. The simple solution is to clear, every frame, the visible entities of all point/spot lights, whether or not those lights are visible. The visible entities get repopulated directly afterwards. I also renamed `global_point_lights` to `global_visible_clusterable` to make it clear that it includes only visible things. ## Testing - Tested with the code from the issue.
139515278c
Use embedded_asset to load PBR shaders (#19137)
# Objective - Get in-engine shader hot reloading working ## Solution - Adopt #12009 - Cut back on everything possible to land an MVP: we only hot-reload PBR in deferred shading mode. This is to minimize the diff and avoid merge hell. The rest shall come in followups. ## Testing - `cargo run --example pbr --features="embedded_watcher"` and edit some pbr shader code |
673e70c72e
Fix specular cutoff on lights with radius overlapping with mesh (#19157)
# Objective - Fixes #13318 ## Solution - Clamp a dot product to be positive to avoid choosing a `centerToRay` which is not on the ray but behind it. ## Testing - Repro in #13318 Main: <img width="963" alt="{DA2A2B99-27C7-4A76-83B6-CCB70FB57CAD}" src="https://github.com/user-attachments/assets/afae8001-48ee-4762-9522-e247bbe3577a" /> This PR: <img width="963" alt="{2C4BC3E7-C6A6-4736-A916-0366FBB618DA}" src="https://github.com/user-attachments/assets/5bea4162-0b58-4df0-bf22-09fcb27dc167" /> Eevee reference:  |
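A hedged sketch of the clamp in plain Rust rather than the actual WGSL (the vector names approximate the representative-point terms, they are not copied from the shader): clamping the dot product keeps the chosen closest point on the reflection ray instead of behind its origin.

```rust
// `l`: vector from the shading point to the light center.
// `r`: reflection ray direction (assumed normalized).
fn center_to_ray(l: [f32; 3], r: [f32; 3]) -> [f32; 3] {
    let dot = |a: [f32; 3], b: [f32; 3]| a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    // Clamping to >= 0 is the fix: a negative projection would pick a point
    // behind the ray origin and produce the hard specular cutoff.
    let t = dot(l, r).max(0.0);
    [r[0] * t - l[0], r[1] * t - l[1], r[2] * t - l[2]]
}
```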
eb0f4f76ba
Fix: Provide CPU mesh processing with MaterialBindingId (#19083)
# Objective Fixes #19027 ## Solution Query for the material binding id if using fallback CPU processing. ## Testing I've honestly no clue how to test for this, and I imagine that this isn't entirely failsafe :( but would highly appreciate a suggestion! To verify this works, please run the texture.rs example using WebGL 2. Additionally, I'm extremely naive about the nuances of PBR. This PR is essentially to kinda *get the ball rolling* of sorts. Thanks :) --------- Co-authored-by: Gilles Henaux <ghx_github_priv@fastmail.com> Co-authored-by: charlotte <charlotte.c.mcelwain@gmail.com>
7b1c9f192e
Adopt consistent FooSystems naming convention for system sets (#18900)
# Objective Fixes a part of #14274. Bevy has an incredibly inconsistent naming convention for its system sets, both internally and across the ecosystem. <img alt="System sets in Bevy" src="https://github.com/user-attachments/assets/d16e2027-793f-4ba4-9cc9-e780b14a5a1b" width="450" /> *Names of public system set types in Bevy* Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are also a few `FooSystems` and `FooSet` types. In ecosystem crates on the other hand, `FooSet` is perhaps the most commonly used name in general. Conventions being so wildly inconsistent can make it harder for users to pick names for their own types, to search for system sets on docs.rs, or to even discern which types *are* system sets. To reign in the inconsistency a bit and help unify the ecosystem, it would be good to establish a common recommended naming convention for system sets in Bevy itself, similar to how plugins are commonly suffixed with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming convention in first-party Bevy, we can softly nudge ecosystem crates to follow suit (for types where it makes sense to do so). Choosing a naming convention is also relevant now, as the [`bevy_cli` recently adopted lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce naming for plugins and system sets, and the recommended naming used for system sets is still a bit open. ## Which Name To Use? Now the contentious part: what naming convention should we actually adopt? This was discussed on the Bevy Discord at the end of last year, starting [here](<https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789>). `FooSet` and `FooSystems` were the clear favorites, with `FooSet` very narrowly winning an unofficial poll. However, it seems to me like the consensus was broadly moving towards `FooSystems` at the end and after the poll, with Cart ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708)) and later Alice ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533)) and also me being in favor of it. Let's do a quick pros and cons list! Of course these are just what I thought of, so take it with a grain of salt. `FooSet`: - Pro: Nice and short! - Pro: Used by many ecosystem crates. - Pro: The `Set` suffix comes directly from the trait name `SystemSet`. - Pro: Pairs nicely with existing APIs like `in_set` and `configure_sets`. - Con: `Set` by itself doesn't actually indicate that it's related to systems *at all*, apart from the implemented trait. A set of what? - Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex: `ContactSet`, `MeshSet`, `EnemySet`... `FooSystems`: - Pro: Very clearly indicates that the type represents a collection of systems. The actual core concept, system(s), is in the name. - Pro: Parallels nicely with `FooPlugins` for plugin groups. - Pro: Low risk of conflicts with other names or misunderstandings about what the type is. - Pro: In most cases, reads *very* nicely and clearly. Ex: `PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and `AnimationSet`. - Pro: Easy to search for on docs.rs. - Con: Usually results in longer names. - Con: Not yet as widely used. Really the big problem with `FooSet` is that it doesn't actually describe what it is. It describes what *kind of thing* it is (a set of something), but not *what it is a set of*, unless you know the type or check its docs or implemented traits. 
`FooSystems` on the other hand is much more self-descriptive in this regard, at the cost of being a bit longer to type. Ultimately, in some ways it comes down to preference and how you think of system sets. Personally, I was originally in favor of `FooSet`, but have been increasingly on the side of `FooSystems`, especially after seeing what the new names would actually look like in Avian and now Bevy. I prefer it because it usually reads better, is much more clearly related to groups of systems than `FooSet`, and overall *feels* more correct and natural to me in the long term. For these reasons, and because Alice and Cart also seemed to share a preference for it when it was previously being discussed, I propose that we adopt a `FooSystems` naming convention where applicable. ## Solution Rename Bevy's system set types to use a consistent `FooSystems` naming where applicable. - `AccessibilitySystem` → `AccessibilitySystems` - `GizmoRenderSystem` → `GizmoRenderSystems` - `PickSet` → `PickingSystems` - `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems` - `TransformSystem` → `TransformSystems` - `RemoteSet` → `RemoteSystems` - `RenderSet` → `RenderSystems` - `SpriteSystem` → `SpriteSystems` - `StateTransitionSteps` → `StateTransitionSystems` - `RenderUiSystem` → `RenderUiSystems` - `UiSystem` → `UiSystems` - `Animation` → `AnimationSystems` - `AssetEvents` → `AssetEventSystems` - `TrackAssets` → `AssetTrackingSystems` - `UpdateGizmoMeshes` → `GizmoMeshSystems` - `InputSystem` → `InputSystems` - `InputFocusSet` → `InputFocusSystems` - `ExtractMaterialsSet` → `MaterialExtractionSystems` - `ExtractMeshesSet` → `MeshExtractionSystems` - `RumbleSystem` → `RumbleSystems` - `CameraUpdateSystem` → `CameraUpdateSystems` - `ExtractAssetsSet` → `AssetExtractionSystems` - `Update2dText` → `Text2dUpdateSystems` - `TimeSystem` → `TimeSystems` - `AudioPlaySet` → `AudioPlaybackSystems` - `SendEvents` → `EventSenderSystems` - `EventUpdates` → `EventUpdateSystems` A lot of the names got slightly longer, but they are also a lot more consistent, and in my opinion the majority of them read much better. For a few of the names I took the liberty of rewording things a bit; definitely open to any further naming improvements. There are still also cases where the `FooSystems` naming doesn't really make sense, and those I left alone. This primarily includes system sets like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`, or `TransitionSchedules<S>`, where the type has some special purpose and semantics. ## Todo - [x] Should I keep all the old names as deprecated type aliases? I can do this, but to avoid wasting work I'd prefer to first reach consensus on whether these renames are even desired. - [x] Migration guide - [x] Release notes
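A hedged before/after sketch of what the rename looks like in downstream code (the `position_tooltips` system and the import path are illustrative assumptions); the scheduling APIs themselves do not change.

```rust
use bevy::prelude::*;
use bevy::ui::UiSystems; // previously `bevy::ui::UiSystem`

fn position_tooltips() { /* hypothetical user system */ }

fn build(app: &mut App) {
    // Before: position_tooltips.after(UiSystem::Layout)
    app.add_systems(PostUpdate, position_tooltips.after(UiSystems::Layout));
}
```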
56784de769
Fix the ordering of the systems introduced in #18734. (#18825)
There's still a race resulting in blank materials whenever a material of type A is added on the same frame that a material of type B is removed. PR #18734 improved the situation, but ultimately didn't fix the race because of two issues: 1. The `late_sweep_material_instances` system was never scheduled. This PR fixes the problem by scheduling that system. 2. `early_sweep_material_instances` needs to be called after *every* material type has been extracted, not just when the material of *that* type has been extracted. The `chain()` added during the review process in PR #18734 broke this logic. This PR reverts that and fixes the ordering by introducing a new `SystemSet` that contains all material extraction systems. I also took the opportunity to switch a manual reference to `AssetId::<StandardMaterial>::invalid()` to the new `DUMMY_MESH_MATERIAL` constant for clarity. Because this is a bug that can affect any application that switches material types in a single frame, I think this should be uplifted to Bevy 0.16. |
6f3ea06060
Make sure the mesh actually exists before we try to specialize. (#18836)
Fixes #18809 Fixes #18823 Meshes despawned in `Last` can still be in visible entities if they were visible as of `PostUpdate`. Sanity-check that the mesh actually exists before we specialize. We still want to unconditionally assume that the entity is in `EntitySpecializationTicks`, as its absence from that cache would likely suggest another bug.
e9a0ef49f9
Rename bevy_platform_support to bevy_platform (#18813)
# Objective The goal of `bevy_platform_support` is to provide a set of platform agnostic APIs, alongside platform-specific functionality. This is a high traffic crate (providing things like HashMap and Instant). Especially in light of https://github.com/bevyengine/bevy/discussions/18799, it deserves a friendlier / shorter name. Given that it hasn't had a full release yet, getting this change in before Bevy 0.16 makes sense. ## Solution - Rename `bevy_platform_support` to `bevy_platform`. |
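The rename is purely a crate-path change; a minimal sketch assuming the commonly used `HashMap` re-export:

```rust
// Before:
// use bevy_platform_support::collections::HashMap;

// After:
use bevy_platform::collections::HashMap;

fn main() {
    let mut scores: HashMap<&str, u32> = HashMap::default();
    scores.insert("fox", 3);
    assert_eq!(scores.get("fox"), Some(&3));
}
```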
e799625ea5
Add binned 2d/3d Wireframe render phase (#18587)
# Objective Fixes #16896 Fixes #17737 ## Solution Adds a new render phase, including all the new cold specialization patterns, for wireframes. There's a *lot* of regrettable duplication here between 3d/2d. ## Testing All the examples. ## Migration Guide - `WireframePlugin` must now be created with `WireframePlugin::default()`. |
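The migration is mechanical; a minimal sketch of the new construction:

```rust
use bevy::pbr::wireframe::WireframePlugin;
use bevy::prelude::*;

fn main() {
    App::new()
        // Before: .add_plugins((DefaultPlugins, WireframePlugin))
        .add_plugins((DefaultPlugins, WireframePlugin::default()))
        .run();
}
```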
6c619397d5
Unify RenderMaterialInstances and RenderMeshMaterialIds, and fix an associated race condition. (#18734)
Currently, `RenderMaterialInstances` and `RenderMeshMaterialIds` are very similar render-world resources: the former maps main world meshes to typed material asset IDs, and the latter maps main world meshes to untyped material asset IDs. This is needlessly-complex and wasteful, so this patch unifies the two in favor of a single untyped `RenderMaterialInstances` resource. This patch also fixes a subtle issue that could cause mesh materials to be incorrect if a `MeshMaterial3d<A>` was removed and replaced with a `MeshMaterial3d<B>` material in the same frame. The problematic pattern looks like: 1. `extract_mesh_materials<B>` runs and, seeing the `Changed<MeshMaterial3d<B>>` condition, adds an entry mapping the mesh to the new material to the untyped `RenderMeshMaterialIds`. 2. `extract_mesh_materials<A>` runs and, seeing that the entity is present in `RemovedComponents<MeshMaterial3d<A>>`, removes the entry from `RenderMeshMaterialIds`. 3. The material slot is now empty, and the mesh will show up as whatever material happens to be in slot 0 in the material data slab. This commit fixes the issue by splitting out `extract_mesh_materials` into *three* phases: *extraction*, *early sweeping*, and *late sweeping*, which run in that order: 1. The *extraction* system, which runs for each material, updates `RenderMaterialInstances` records whenever `MeshMaterial3d` components change, and updates a change tick so that the following system will know not to remove it. 2. The *early sweeping* system, which runs for each material, processes entities present in `RemovedComponents<MeshMaterial3d>` and removes each such entity's record from `RenderMeshInstances` only if the extraction system didn't update it this frame. This system runs after *all* extraction systems have completed, fixing the race condition. 3. The *late sweeping* system, which runs only once regardless of the number of materials in the scene, processes entities present in `RemovedComponents<ViewVisibility>` and, as in the early sweeping phase, removes each such entity's record from `RenderMeshInstances` only if the extraction system didn't update it this frame. At the end, the late sweeping system updates the change tick. Because this pattern happens relatively frequently, I think this PR should land for 0.16. |
f75078676b
Initialize pre-processing pipelines only when culling is enabled. (#18759)
Better fix for #18463 that still allows enabling mesh preprocessing on webgpu. Fixes #18463 |
065a95e0a1
Make the StandardMaterial bindless index table have a fixed size regardless of the features that are enabled. (#18771)
Due to the preprocessor usage in the shader, different combinations of features could cause the fields of `StandardMaterialBindings` to shift around. In certain cases, this could cause them to not line up with the bindings specified in `StandardMaterial`. This resulted in #18104. This commit fixes the issue by making `StandardMaterialBindings` have a fixed size. On the CPU side, it uses the `#[bindless(index_table(range(M..N)))]` feature I added to `AsBindGroup` in #18025 to do so. Thus this patch has a dependency on #18025. Closes #18104. --------- Co-authored-by: Robert Swain <robert.swain@gmail.com> |
2f80d081bc
Fix many_foxes + motion blur = crash on WebGL (#18715)
## Objective Fix #18714. ## Solution Make sure `SkinUniforms::prev_buffer` is resized at the same time as `current_buffer`. There will be a one frame visual glitch when the buffers are resized, since `prev_buffer` is incorrectly initialised with the current joint transforms. Note that #18074 includes the same fix. I'm assuming this smaller PR will land first. ## Testing See repro instructions in #18714. Tested on `animated_mesh`, `many_foxes`, `custom_skinned_mesh`, Win10/Nvidia with Vulkan, WebGL/Chrome, WebGPU/Chrome. |
5da64ddbee
Fix motion blur on skinned meshes (#18712)
## Objective Fix motion blur not working on skinned meshes. ## Solution `set_mesh_motion_vector_flags` can set `RenderMeshInstanceFlags::HAS_PREVIOUS_SKIN` after specialization has already cached the material. This can lead to `MeshPipelineKey::HAS_PREVIOUS_SKIN` never getting set, disabling motion blur. The fix is to make sure `set_mesh_motion_vector_flags` happens before specialization. Note that the bug is fixed in a different way by #18074, which includes other fixes but is a much larger change. ## Testing Open the `animated_mesh` example and add these components to the `Camera3d` entity: ```rust MotionBlur { shutter_angle: 5.0, samples: 2, #[cfg(all(feature = "webgl2", target_arch = "wasm32", not(feature = "webgpu")))] _webgl2_padding: Default::default(), }, #[cfg(all(feature = "webgl2", target_arch = "wasm32", not(feature = "webgpu")))] Msaa::Off, ``` Tested on `animated_mesh`, `many_foxes`, `custom_skinned_mesh`, Win10/Nvidia with Vulkan, WebGL/Chrome, WebGPU/Chrome. Note that testing `many_foxes` WebGL requires #18715. |
ec70a0f4f5
Delete unused weak handle and remove duplicate loads. (#18635)
# Objective - Cleanup ## Solution - Remove completely unused weak_handle (`MESH_PREPROCESS_TYPES_SHADER_HANDLE`). This value is not used directly, and is never populated. - Delete multiple loads of `BUILD_INDIRECT_PARAMS_SHADER_HANDLE`. We load it three times right after one another. This looks to be a copy-paste error. ## Testing - None. |
6734abe3f5
Expose symbols needed to replicate SetMeshBindGroup in ecosystem crates. (#18613)
# Objective My ecosystem crate, bevy_mod_outline, currently uses `SetMeshBindGroup` as part of its custom rendering pipeline. I would like to allow for the possibility that, due to changes in 0.16, I need to customise the behaviour of `SetMeshBindGroup` in order to make it work. However, not all of the symbols needed to implement this render command are public outside of Bevy. ## Solution - Include `MorphIndices` in the re-export list. I feel this is morally equivalent to `SkinUniforms` already being exported. - Change the `MorphIndex::index` field to be public. I feel this is morally equivalent to the `SkinByteOffset::byte_offset` field already being public. - Change `RenderMeshInstances::mesh_asset_id()` to be public (although since all the fields of `RenderMeshInstances` are public, it's possible to work around this one by reimplementing). These changes exclude: - Making any change to the `RenderLightmaps` type, as I don't need to bind the light-maps for my use-case and I wanted to keep these changes minimal. It has a private field which would need to be public or have access methods. - The changes already included in #18612. ## Testing Confirmed that a copy of `SetMeshBindGroup` can be compiled outside of Bevy with these changes, provided that the light-map code is removed. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
17e3efac12
Fix mesh extraction for meshes without associated material. (#18631)
# Objective Fixes #17986 Fixes #18608 ## Solution Guard against situations where an extracted mesh does not have an associated material. The way that mesh is dependent on the material api (although decoupled) here is a bit unfortunate and we might consider ways in the future to support these material features without this indirect dependency. |
||
![]() |
f57c7a43c4
|
reexport entity set collections in entity module (#18413)
# Objective Unlike their helper types, the import paths for `unique_array::UniqueEntityArray`, `unique_slice::UniqueEntitySlice`, `unique_vec::UniqueEntityVec`, `hash_set::EntityHashSet`, `hash_map::EntityHashMap`, `index_set::EntityIndexSet`, `index_map::EntityIndexMap` are quite redundant. When looking at the structure of `hashbrown`, we can also see that while both `HashSet` and `HashMap` have their own modules, the main types themselves are re-exported at the crate level. ## Solution Re-export the types in their shared `entity` parent module, and simplify the imports where they're used. |
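A hedged before/after of the import simplification this enables; the exact old paths are assumed from the module names listed above:

```rust
// Before: each type had to be named through its own submodule, e.g.
// use bevy_ecs::entity::hash_map::EntityHashMap;
// use bevy_ecs::entity::index_set::EntityIndexSet;

// After: the re-exports let you import from the shared `entity` parent module.
use bevy_ecs::entity::{EntityHashMap, EntityIndexSet};
```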
||
![]() |
4519ff677a
|
Fix diffuse transmission for anisotropic materials (#18610)
Expand the diff, this was obviously just a copy paste bug at some point. |
||
![]() |
041d3d7c92
|
Expose skins_use_uniform_buffers() necessary to use pre-existing setup_morph_and_skinning_defs() API. (#18612)
# Objective As of bevy 0.16-dev, the pre-existing public function `bevy::pbr::setup_morph_and_skinning_defs()` is now passed a boolean flag called `skins_use_uniform_buffers`. The value of this boolean is computed by the function `bevy_pbr::render::skin::skins_use_uniform_buffers()`, but it is not exported publicly. Found while porting [bevy_mod_outline](https://github.com/komadori/bevy_mod_outline) to 0.16. ## Solution Add `skin::skins_use_uniform_buffers` to the re-export list of `bevy_pbr::render`. ## Testing Confirmed test program can access public API. |
||
![]() |
d1f3d950a1
|
Fix shader pre-pass compile failure when using AlphaMode::Blend and a Mesh without UVs (0.16.0-rc.2 ) (#18602)
# Objective The flags are referenced later outside of the VERTEX_UVS ifdef/endif block. The current behavior causes the pre-pass shader to fail to compile when UVs are not present in the mesh, such as when using a `LineStrip` to render a grid. Fixes #18600 ## Solution Move the definition of the `flags` outside of the ifdef/endif block. ## Testing Ran a modified `3d_example` that used a mesh and material with alpha_mode blend, `LineStrip` topology, and no UVs. |
||
![]() |
f4a5e8bc51
|
Tracy GPU support (#18490)
# Objective - Add tracy GPU support ## Solution - Build on top of the existing render diagnostics recording to also upload gpu timestamps to tracy - Copy code from https://github.com/Wumpf/wgpu-profiler ## Showcase  |
||
![]() |
8d5474a2f2
|
Fix unnecessary specialization checks for apps with many materials (#18410)
# Objective For materials that aren't in use, or that a visible entity has no instance of, we were constantly and unnecessarily checking whether they needed specialization, answering yes (because the material had never been specialized for that entity), and then failing to look up the material instance. ## Solution If an entity doesn't have an instance of the material, it can't possibly need specialization, so exit early before spending time on the check. Fixes #18388. |
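A self-contained sketch of the early-out, with plain `u64` ids standing in for Bevy's `Entity` and material asset ids:

```rust
use std::collections::HashMap;

type EntityId = u64;   // stand-in for Entity
type MaterialId = u64; // stand-in for AssetId<M>

fn needs_specialization(instances: &HashMap<EntityId, MaterialId>, entity: EntityId) -> bool {
    if !instances.contains_key(&entity) {
        // No instance of this material on the entity, so it cannot possibly
        // need specialization; skip the expensive change-tick comparison.
        return false;
    }
    // ...the real check compares change ticks and pipeline keys here...
    true
}
```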
||
![]() |
195a74afad
|
Add missing system ordering constraint to prepare_lights (#18308)
Fix https://github.com/bevyengine/bevy/issues/18094. |
||
![]() |
bc7416aa22
|
Use 4-byte LightmapSlabIndex for batching instead of 16-byte AssetId<Image> (#18326)
Less data accessed and compared gives better batching performance. # Objective - Use a smaller id to represent the lightmap in batch data to enable a faster implementation of draw streams. - Improve batching performance for 3D sorted render phases. ## Solution - 3D batching can use `LightmapSlabIndex` (a `NonMaxU32` which is 4 bytes) instead of the lightmap `AssetId<Image>` (an enum where the largest variant is a 16-byte UUID) to discern the ability to batch. ## Testing Tested main (yellow) vs this PR (red) on an M4 Max using the `many_cubes` example with `WGPU_SETTINGS_PRIO=webgl2` to avoid GPU-preprocessing, and modifying the materials in `many_cubes` to have `AlphaMode::Blend` so that they would rely on the less efficient sorted render phase batching. <img width="1500" alt="Screenshot_2025-03-15_at_12 17 21" src="https://github.com/user-attachments/assets/14709bd3-6d06-40fb-aa51-e1d2d606ebe3" /> A 44.75us or 7.5% reduction in median execution time of the batch and prepare sorted render phase system for the `Transparent3d` phase (handling 160k cubes). --- ## Migration Guide - Changed: `RenderLightmap::new()` no longer takes an `AssetId<Image>` argument for the asset id of the lightmap image. |
||
![]() |
ecccd57417
|
Generic system config (#17962)
# Objective Prevents duplicate implementation between IntoSystemConfigs and IntoSystemSetConfigs using a generic, adds a NodeType trait for more config flexibility (opening the door to implement https://github.com/bevyengine/bevy/issues/14195?). ## Solution Followed writeup by @ItsDoot: https://hackmd.io/@doot/rJeefFHc1x Removes IntoSystemConfigs and IntoSystemSetConfigs, instead using IntoNodeConfigs with generics. ## Testing Pending --- ## Showcase N/A ## Migration Guide SystemSetConfigs -> NodeConfigs<InternedSystemSet> SystemConfigs -> NodeConfigs<ScheduleSystem> IntoSystemSetConfigs -> IntoNodeConfigs<InternedSystemSet, M> IntoSystemConfigs -> IntoNodeConfigs<ScheduleSystem, M> --------- Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com> Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
||
![]() |
09ff7ce9f6
|
Partially fix panics when setting WGPU_SETTINGS_PRIO=webgl2 (#18113)
# Overview Fixes https://github.com/bevyengine/bevy/issues/17869. # Summary `WGPU_SETTINGS_PRIO` changes various limits on `RenderDevice`. This is useful to simulate platforms with lower limits. However, some plugins only check the limits on `RenderAdapter` (the actual GPU) - these limits are not affected by `WGPU_SETTINGS_PRIO`. So the plugins try to use features that are unavailable and wgpu says "oh no". See https://github.com/bevyengine/bevy/issues/17869 for details. The PR adds various checks on `RenderDevice` limits. This is enough to get most examples working, but some are not fixed (see below). # Testing - Tested native, with and without "WGPU_SETTINGS=webgl2". Win10/Vulkan/Nvidia". - Also WebGL. Win10/Chrome/Nvidia. ``` $env:WGPU_SETTINGS_PRIO = "webgl2" cargo run --example testbed_3d cargo run --example testbed_2d cargo run --example testbed_ui cargo run --example deferred_rendering cargo run --example many_lights cargo run --example order_independent_transparency # Still broken, see below. cargo run --example occlusion_culling # Still broken, see below. ``` # Not Fixed While testing I found a few other cases of limits being broken. "Compatibility" settings (WebGPU minimums) breaks native in various examples. ``` $env:WGPU_SETTINGS_PRIO = "compatibility" cargo run --example testbed_3d In Device::create_bind_group_layout, label = 'build mesh uniforms GPU early occlusion culling bind group layout' Too many bindings of type StorageBuffers in Stage ShaderStages(COMPUTE), limit is 8, count was 9. Check the limit `max_storage_buffers_per_shader_stage` passed to `Adapter::request_device` ``` `occlusion_culling` breaks fake webgl. ``` $env:WGPU_SETTINGS_PRIO = "webgl2" cargo run --example occlusion_culling thread '<unnamed>' panicked at C:\Projects\bevy\crates\bevy_render\src\render_resource\pipeline_cache.rs:555:28: index out of bounds: the len is 0 but the index is 2 Encountered a panic in system `bevy_render::renderer::render_system`! ``` `occlusion_culling` breaks real webgl. ``` cargo run --example occlusion_culling --target wasm32-unknown-unknown ERROR app: panicked at C:\Users\T\.cargo\registry\src\index.crates.io-1949cf8c6b5b557f\glow-0.16.0\src\web_sys.rs:4223:9: Tex storage 2D multisample is not supported ``` OIT breaks fake webgl. ``` $env:WGPU_SETTINGS_PRIO = "webgl2" cargo run --example order_independent_transparency In Device::create_bind_group, label = 'mesh_view_bind_group' Number of bindings in bind group descriptor (28) does not match the number of bindings defined in the bind group layout (25) ``` OIT breaks real webgl ``` cargo run --example order_independent_transparency --target wasm32-unknown-unknown In Device::create_render_pipeline, label = 'pbr_oit_mesh_pipeline' Error matching ShaderStages(FRAGMENT) shader requirements against the pipeline Shader global ResourceBinding { group: 0, binding: 34 } is not available in the pipeline layout Binding is missing from the pipeline layout ``` |
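A hedged sketch of the kind of guard the PR adds, assuming the usual `RenderDevice::limits()` accessor; the concrete limit and threshold vary per plugin:

```rust
use bevy::render::renderer::RenderDevice;

// Decide whether a feature can be enabled by consulting the *device* limits,
// which respect WGPU_SETTINGS_PRIO, rather than the raw adapter limits, which
// describe the physical GPU and ignore the override.
fn enough_storage_buffers(render_device: &RenderDevice, needed: u32) -> bool {
    render_device.limits().max_storage_buffers_per_shader_stage >= needed
}
```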
||
![]() |
47509ef6a9
|
Helper function for getting inverse model matrix in WGSL shaders (#10462)
In 0.11 you could easily access the inverse model matrix inside a WGSL shader with `transpose(mesh.inverse_transpose_model)`. This was changed in 0.12 when `inverse_transpose_model` was removed, and it's now not as straightforward. I wrote a helper function for my own code and thought I'd submit a pull request in case it would be helpful to others. |
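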
||
![]() |
d6db78b5dd
|
Replace internal uses of insert_or_spawn_batch (#18035)
## Objective `insert_or_spawn_batch` is due to be deprecated eventually (#15704), and removing internal uses will make that easier. ## Solution Replaced internal uses of `insert_or_spawn_batch` with `try_insert_batch` (the non-panicking variant, since `insert_or_spawn_batch` didn't panic). All of the internal uses are in rendering code. Since retained rendering was meant to get rid of non-opaque entity IDs, I assume the code was only using `insert_or_spawn_batch` because `insert_batch` didn't exist, and not because it actually wanted to spawn something. However, I am *not* confident in my ability to judge rendering code. |
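A small illustration of the swap to the non-panicking variant mentioned above, using a made-up component rather than anything from Bevy's renderer:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct ExtractedFlag(u32); // illustrative component only

fn apply_batch(mut commands: Commands, batch: Vec<(Entity, ExtractedFlag)>) {
    // Previously: commands.insert_or_spawn_batch(batch);
    commands.try_insert_batch(batch);
}
```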
||
![]() |
54701a844e
|
Revert "Replace Ambient Lights with Environment Map Lights (#17482)" (#18167)
This reverts commit
|
||
![]() |
0b5302d96a
|
Replace Ambient Lights with Environment Map Lights (#17482)
# Objective Transparently uses simple `EnvironmentMapLight`s to mimic `AmbientLight`s. Implements the first part of #17468, but I can implement hemispherical lights in this PR too if needed. ## Solution - A function `EnvironmentMapLight::solid_color(&mut Assets<Image>, Color)` is provided to make an environment light with a solid color. - A new system is added to `SimulationLightSystems` that maps `AmbientLight`s on views or the world to a corresponding `EnvironmentMapLight`. I have never worked with (or on) Bevy before, so nitpicky comments on how I did things are appreciated :). ## Testing Testing was done on a modified version of the `3d/lighting` example, where I removed all lights except the ambient light. I have not included the example, but can if required. ## Migration `bevy_pbr::AmbientLight` has been deprecated, so all usages of it should be replaced by a `bevy_pbr::EnvironmentMapLight` created with `EnvironmentMapLight::solid_color` placed on the camera. There is no alternative to ambient lights as resources. |
||
![]() |
57143a8298
|
Add missing bindless imports in pbr_prepass.wgsl . (#18110)
The bindless reform PR neglected to include these, causing the prepass shader to fail to compile in some circumstances. |
||
![]() |
058497e0bb
|
Change Commands::get_entity to return Result and remove panic from Commands::entity (#18043)
## Objective Alternative to #18001. - Now that systems can handle the `?` operator, `get_entity` returning `Result` would be more useful than `Option`. - With `get_entity` being more flexible, combined with entity commands now checking the entity's existence automatically, the panic in `entity` isn't really necessary. ## Solution - Changed `Commands::get_entity` to return `Result<EntityCommands, EntityDoesNotExistError>`. - Removed panic from `Commands::entity`. |
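A hedged sketch of how the new return type composes with `?` in a fallible system; the component and resource here are made up for illustration:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Selected;

#[derive(Resource)]
struct SelectedTarget(Entity); // illustrative resource holding some entity id

fn mark_selected(mut commands: Commands, target: Res<SelectedTarget>) -> Result {
    // get_entity now returns Result<EntityCommands, EntityDoesNotExistError>,
    // so a missing entity bubbles up through `?` instead of needing a match.
    commands.get_entity(target.0)?.insert(Selected);
    Ok(())
}
```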
||
![]() |
5d7a60592d
|
Add a new #[data] attribute to AsBindGroup that allows packing data for multiple materials into a single array. (#17965)
Currently, the structure-level `#[uniform]` attribute of `AsBindGroup` creates a binding array of individual buffers, each of which contains data for a single material. A more efficient approach would be to provide a single buffer with an array containing all of the data for all materials in the bind group. Because `StandardMaterial` uses `#[uniform]`, this can be notably inefficient with large numbers of materials. This patch introduces a new attribute on `AsBindGroup`, `#[data]`, which works identically to `#[uniform]` except that it concatenates all the data into a single buffer that the material bind group allocator itself manages. It also converts `StandardMaterial` to use this new functionality. This effectively provides the "material data in arrays" feature. |
||
![]() |
5241e09671
|
Upgrade to Rust Edition 2024 (#17967)
# Objective - Fixes #17960 ## Solution - Followed the [edition upgrade guide](https://doc.rust-lang.org/edition-guide/editions/transitioning-an-existing-project-to-a-new-edition.html) ## Testing - CI --- ## Summary of Changes ### Documentation Indentation When using lists in documentation, proper indentation is now linted for. This means subsequent lines within the same list item must start at the same indentation level as the item. ```rust /* Valid */ /// - Item 1 /// Run-on sentence. /// - Item 2 struct Foo; /* Invalid */ /// - Item 1 /// Run-on sentence. /// - Item 2 struct Foo; ``` ### Implicit `!` to `()` Conversion `!` (the never return type, returned by `panic!`, etc.) no longer implicitly converts to `()`. This is particularly painful for systems with `todo!` or `panic!` statements, as they will no longer be functions returning `()` (or `Result<()>`), making them invalid systems for functions like `add_systems`. The ideal fix would be to accept functions returning `!` (or rather, _not_ returning), but this is blocked on the [stabilisation of the `!` type itself](https://doc.rust-lang.org/std/primitive.never.html), which is not done. The "simple" fix would be to add an explicit `-> ()` to system signatures (e.g., `|| { todo!() }` becomes `|| -> () { todo!() }`). However, this is _also_ banned, as there is an existing lint which (IMO, incorrectly) marks this as an unnecessary annotation. So, the "fix" (read: workaround) is to put these kinds of `|| -> ! { ... }` closuers into variables and give the variable an explicit type (e.g., `fn()`). ```rust // Valid let system: fn() = || todo!("Not implemented yet!"); app.add_systems(..., system); // Invalid app.add_systems(..., || todo!("Not implemented yet!")); ``` ### Temporary Variable Lifetimes The order in which temporary variables are dropped has changed. The simple fix here is _usually_ to just assign temporaries to a named variable before use. ### `gen` is a keyword We can no longer use the name `gen` as it is reserved for a future generator syntax. This involved replacing uses of the name `gen` with `r#gen` (the raw-identifier syntax). ### Formatting has changed Use statements have had the order of imports changed, causing a substantial +/-3,000 diff when applied. For now, I have opted-out of this change by amending `rustfmt.toml` ```toml style_edition = "2021" ``` This preserves the original formatting for now, reducing the size of this PR. It would be a simple followup to update this to 2024 and run `cargo fmt`. ### New `use<>` Opt-Out Syntax Lifetimes are now implicitly included in RPIT types. There was a handful of instances where it needed to be added to satisfy the borrow checker, but there may be more cases where it _should_ be added to avoid breakages in user code. ### `MyUnitStruct { .. }` is an invalid pattern Previously, you could match against unit structs (and unit enum variants) with a `{ .. }` destructuring. This is no longer valid. ### Pretty much every use of `ref` and `mut` are gone Pattern binding has changed to the point where these terms are largely unused now. They still serve a purpose, but it is far more niche now. ### `iter::repeat(...).take(...)` is bad New lint recommends using the more explicit `iter::repeat_n(..., ...)` instead. ## Migration Guide The lifetimes of functions using return-position impl-trait (RPIT) are likely _more_ conservative than they had been previously. If you encounter lifetime issues with such a function, please create an issue to investigate the addition of `+ use<...>`. 
## Notes - Check the individual commits for a clearer breakdown for what _actually_ changed. --------- Co-authored-by: François Mockers <francois.mockers@vleue.com> |
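A tiny illustration of the `iter::repeat_n` lint mentioned above:

```rust
fn main() {
    // Old style, now linted against:
    let old: Vec<u32> = std::iter::repeat(0).take(4).collect();
    // Preferred, explicit form:
    let new: Vec<u32> = std::iter::repeat_n(0, 4).collect();
    assert_eq!(old, new);
}
```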
||
![]() |
465306bc5e
|
Reextract a mesh on the next frame if its material couldn't be prepared on the frame we first encountered it. (#17963)
We might not be able to prepare a material on the first frame we encounter a mesh using it, for various reasons, including that the material hasn't been loaded yet or that preparing the material would exceed the per-frame cap on the number of bytes to load. When this happens, we currently try to find the material in the `MaterialBindGroupAllocator`, fail, and then fall back to group 0, slot 0, the default `MaterialBindGroupId`, which is obviously incorrect. Worse, we then fail to dirty the mesh and reextract it when we *do* finish preparing the material, so the mesh will continue to be rendered with an incorrect material. This patch fixes both problems. In `collect_meshes_for_gpu_building`, if we fail to find a mesh's material in the `MaterialBindGroupAllocator`, we detect that case, bail out, and add the mesh to a list, `MeshesToReextractNextFrame`. On subsequent frames, we process all the meshes in `MeshesToReextractNextFrame` as though they were changed. This ensures both that we don't render a mesh if its material hasn't been loaded and that we start rendering the mesh once its material does load. This was first noticed in the intermittent Pixel Eagle failures in the `testbed_3d` patch in #17898, although the problem has actually existed for some time. I believe it just so happened that the changes to the allocator in that PR caused the problem to appear more commonly than it did before. |
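A rough sketch of the retry pattern, using made-up resource and system shapes rather than the actual renderer internals:

```rust
use bevy::prelude::*;

// Entities whose material wasn't ready when we first extracted their mesh.
#[derive(Resource, Default)]
struct RetryNextFrame(Vec<Entity>);

fn collect_meshes(mut retry: ResMut<RetryNextFrame>) {
    // Treat last frame's failures as if those meshes had just changed.
    let to_retry = std::mem::take(&mut retry.0);
    for entity in to_retry {
        // ...re-extract `entity`; if its material is still not prepared,
        // push it back onto retry.0 to try again next frame...
        let _ = entity;
    }
}
```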
||
![]() |
4880a231de
|
Implement occlusion culling for directional light shadow maps. (#17951)
Two-phase occlusion culling can be helpful for shadow maps just as it can for a prepass, in order to reduce vertex and alpha mask fragment shading overhead. This patch implements occlusion culling for shadow maps from directional lights, when the `OcclusionCulling` component is present on the entities containing the lights. Shadow maps from point lights are deferred to a follow-up patch. Much of this patch involves expanding the hierarchical Z-buffer to cover shadow maps in addition to standard view depth buffers. The `scene_viewer` example has been updated to add `OcclusionCulling` to the directional light that it creates. This improved the performance of the rend3 sci-fi test scene when enabling shadows. |
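A hedged example of opting a shadow-casting directional light into occlusion culling, assuming the component's current module path; the transform and light settings are illustrative:

```rust
use bevy::prelude::*;
use bevy::render::experimental::occlusion_culling::OcclusionCulling;

fn spawn_sun(mut commands: Commands) {
    commands.spawn((
        DirectionalLight {
            shadows_enabled: true,
            ..default()
        },
        Transform::from_xyz(10.0, 20.0, 10.0).looking_at(Vec3::ZERO, Vec3::Y),
        // Enables two-phase occlusion culling for this light's shadow map.
        OcclusionCulling,
    ));
}
```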
||
![]() |
28441337bb
|
Use global binding arrays for bindless resources. (#17898)
Currently, Bevy's implementation of bindless resources is rather unusual: every binding in an object that implements `AsBindGroup` (most commonly, a material) becomes its own separate binding array in the shader. This is inefficient for two reasons: 1. If multiple materials reference the same texture or other resource, the reference to that resource will be duplicated many times. This increases `wgpu` validation overhead. 2. It creates many unused binding array slots. This increases `wgpu` and driver overhead and makes it easier to hit limits on APIs that `wgpu` currently imposes tight resource limits on, like Metal. This PR fixes these issues by switching Bevy to use the standard approach in GPU-driven renderers, in which resources are de-duplicated and passed as global arrays, one for each type of resource. Along the way, this patch introduces per-platform resource limits and bumps them from 16 resources per binding array to 64 resources per bind group on Metal and 2048 resources per bind group on other platforms. (Note that the number of resources per *binding array* isn't the same as the number of resources per *bind group*; as it currently stands, if all the PBR features are turned on, Bevy could pack as many as 496 resources into a single slab.) The limits have been increased because `wgpu` now has universal support for partially-bound binding arrays, which mean that we no longer need to fill the binding arrays with fallback resources on Direct3D 12. The `#[bindless(LIMIT)]` declaration when deriving `AsBindGroup` can now simply be written `#[bindless]` in order to have Bevy choose a default limit size for the current platform. Custom limits are still available with the new `#[bindless(limit(LIMIT))]` syntax: e.g. `#[bindless(limit(8))]`. The material bind group allocator has been completely rewritten. Now there are two allocators: one for bindless materials and one for non-bindless materials. The new non-bindless material allocator simply maintains a 1:1 mapping from material to bind group. The new bindless material allocator maintains a list of slabs and allocates materials into slabs on a first-fit basis. This unfortunately makes its performance O(number of resources per object * number of slabs), but the number of slabs is likely to be low, and it's planned to become even lower in the future with `wgpu` improvements. Resources are de-duplicated with in a slab and reference counted. So, for instance, if multiple materials refer to the same texture, that texture will exist only once in the appropriate binding array. To support these new features, this patch adds the concept of a *bindless descriptor* to the `AsBindGroup` trait. The bindless descriptor allows the material bind group allocator to probe the layout of the material, now that an array of `BindGroupLayoutEntry` records is insufficient to describe the group. The `#[derive(AsBindGroup)]` has been heavily modified to support the new features. The most important user-facing change to that macro is that the struct-level `uniform` attribute, `#[uniform(BINDING_NUMBER, StandardMaterial)]`, now reads `#[uniform(BINDLESS_INDEX, MATERIAL_UNIFORM_TYPE, binding_array(BINDING_NUMBER)]`, allowing the material to specify the binding number for the binding array that holds the uniform data. To make this patch simpler, I removed support for bindless `ExtendedMaterial`s, as well as field-level bindless uniform and storage buffers. I intend to add back support for these as a follow-up. 
Because they aren't in any released Bevy version yet, I figured this was OK. Finally, this patch updates `StandardMaterial` for the new bindless changes. Generally, code throughout the PBR shaders that looked like `base_color_texture[slot]` now looks like `bindless_2d_textures[material_indices[slot].base_color_texture]`. This patch fixes a system hang that I experienced on the [Caldera test] when running with `caldera --random-materials --texture-count 100`. The time per frame is around 19.75 ms, down from 154.2 ms in Bevy 0.14: a 7.8× speedup. [Caldera test]: https://github.com/DGriffin91/bevy_caldera_scene |
||
![]() |
8de6b16e9d
|
Implement occlusion culling for the deferred rendering pipeline. (#17934)
Deferred rendering currently doesn't support occlusion culling. This PR implements it in a straightforward way, mirroring what we already do for the non-deferred pipeline. On the rend3 sci-fi base test scene, this resulted in roughly a 2× speedup when applied on top of my other patches. For that scene, it was useful to add another option, `--add-light`, which forces the addition of a shadow-casting light, to the scene viewer, which I included in this patch. |
||
![]() |
f15437e4dc
|
Rewrite the multidrawable batch set builder for performance. (#17923)
This commit restructures the multidrawable batch set builder for better performance in various ways: * The bin traversal is optimized to make the best use of the CPU cache. * The inner loop that iterates over the bins, which is the hottest part of `batch_and_prepare_binned_render_phase`, has been shrunk as small as possible. * Where possible, multiple elements are added to or reserved from GPU buffers as a batch instead of one at a time. * Methods that LLVM wasn't inlining have been marked `#[inline]` where doing so would unlock optimizations. This code has also been refactored to avoid duplication between the logic for indexed and non-indexed meshes via the introduction of a `MultidrawableBatchSetPreparer` object. Together, this improved the `batch_and_prepare_binned_render_phase` time on Caldera by approximately 2×. Eventually, we should optimize the batchable-but-not-multidrawable and unbatchable logic as well, but these meshes are much rarer, so in the interests of keeping this patch relatively small I opted to leave those to a follow-up. |
||
![]() |
9e11e96a59
|
Fix false positive GPU frustum culling (#17939)
# Objective Fix objects (particularly directional shadows) being incorrectly culled during the early preprocessing phase. The issue manifested specifically on Apple M1 GPUs but not on newer devices like the M4. The bug was in the `view_frustum_intersects_obb` function, where including the w component (plane distance) in the dot product calculations led to false positive culling results. This caused objects to be incorrectly culled before shadow casting could begin. ## Issue Details The problem of missing shadows is reproducible on Apple M1 GPUs as of the bisected commit. |
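A hedged sketch of the corrected plane-versus-OBB test: the plane's w (distance) term belongs only in the signed distance of the box centre, never in the axis projections. Function and parameter names are illustrative, not the actual Bevy shader code:

```rust
use bevy::math::{Vec3, Vec4};

/// Returns false only when the box lies entirely behind the (inward-facing) plane.
fn obb_intersects_plane(plane: Vec4, center: Vec3, axes: [Vec3; 3], half_extents: Vec3) -> bool {
    let normal = plane.truncate();
    // Projected radius of the box onto the plane normal: xyz-only dot products.
    // Including plane.w here was the bug that caused false positive culling.
    let relative_radius = half_extents.x * normal.dot(axes[0]).abs()
        + half_extents.y * normal.dot(axes[1]).abs()
        + half_extents.z * normal.dot(axes[2]).abs();
    // The distance term is only applied to the centre.
    normal.dot(center) + plane.w + relative_radius >= 0.0
}
```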
||
![]() |
0517b9621b
|
Fix motion vector computation after #17688. (#17717)
PR #17688 broke motion vector computation, and therefore motion blur, because it enabled retention of `MeshInputUniform`s, and `MeshInputUniform`s contain the indices of the previous frame's transform and the previous frame's skinned mesh joint matrices. On frame N, if a `MeshInputUniform` is retained on GPU from the previous frame, the `previous_input_index` and `previous_skin_index` would refer to the indices for frame N - 2, not the index for frame N - 1. This patch fixes the problems. It solves these issues in two different ways, one for transforms and one for skins: 1. To fix transforms, this patch supplies the *frame index* to the shader as part of the view uniforms, and specifies which frame index each mesh's previous transform refers to. So, in the situation described above, the frame index would be N, the previous frame index would be N - 1, and the `previous_input_frame_number` would be N - 2. The shader can now detect this situation and infer that the mesh has been retained, and can therefore conclude that the mesh's transform hasn't changed. 2. To fix skins, this patch replaces the explicit `previous_skin_index` with an invariant that the index of the joints for the current frame and the index of the joints for the previous frame are the same. This means that the `MeshInputUniform` never has to be updated even if the skin is animated. The downside is that we have to copy joint matrices from the previous frame's buffer to the current frame's buffer in `extract_skins`. The rationale behind (2) is that we currently have no mechanism to detect when joints that affect a skin have been updated, short of comparing all the transforms and setting a flag for `extract_meshes_for_gpu_building` to consume, which would regress performance as we want `extract_skins` and `extract_meshes_for_gpu_building` to be able to run in parallel. To test this change, use `cargo run --example motion_blur`. |
||
![]() |
5e569af2d0
|
Make the specialized pipeline cache two-level. (#17915)
Currently, the specialized pipeline cache maps a (view entity, mesh entity) tuple to the retained pipeline for that entity. This causes two problems: 1. Using the view entity is incorrect, because the view entity isn't stable from frame to frame. 2. Switching the view entity to a `RetainedViewEntity`, which is necessary for correctness, significantly regresses performance of `specialize_material_meshes` and `specialize_shadows` because of the loss of the fast `EntityHash`. This patch fixes both problems by switching to a *two-level* hash table. The outer level of the table maps each `RetainedViewEntity` to an inner table, which maps each `MainEntity` to its pipeline ID and change tick. Because we loop over views first and, within that loop, loop over entities visible from that view, we hoist the slow lookup of the view entity out of the inner entity loop. Additionally, this patch fixes a bug whereby pipeline IDs were leaked when removing the view. We still have a problem with leaking pipeline IDs for deleted entities, but that won't be fixed until the specialized pipeline cache is retained. This patch improves performance of the [Caldera benchmark] from 7.8× faster than 0.14 to 9.0× faster than 0.14, when applied on top of the global binding arrays PR, #17898. [Caldera benchmark]: https://github.com/DGriffin91/bevy_caldera_scene |