26 Commits
96dcbc5f8c
Upgrade to wgpu version 25.0 (#19563)
# Objective

Upgrade to `wgpu` version `25.0`. Depends on https://github.com/bevyengine/naga_oil/pull/121

## Solution

### Problem

The biggest issue we face upgrading is the following requirement:

> To facilitate this change, there was an additional validation rule put in place: if there is a binding array in a bind group, you may not use dynamic offset buffers or uniform buffers in that bind group. This requirement comes from vulkan rules on UpdateAfterBind descriptors.

This is a major difficulty for us, as there are a number of binding arrays that are used in the view bind group. Note, this requirement does not affect merely uniform buffers that use dynamic offset but the use of *any* uniform in a bind group that also has a binding array.

### Attempted fixes

The easiest fix would be to change uniforms to be storage buffers whenever binding arrays are in use:

```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
@group(0) @binding(0) var<uniform> view: View;
@group(0) @binding(1) var<uniform> lights: types::Lights;
#else
@group(0) @binding(0) var<storage> view: array<View>;
@group(0) @binding(1) var<storage> lights: array<types::Lights>;
#endif
```

This requires passing the view index to the shader so that we know where to index into the buffer:

```wgsl
struct PushConstants {
    view_index: u32,
}
var<push_constant> push_constants: PushConstants;
```

Using push constants is no problem because binding arrays are only usable on native anyway. However, this greatly complicates the ability to access `view` in shaders. For example:

```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
mesh_view_bindings::view.view_from_world[0].z
#else
mesh_view_bindings::view[mesh_view_bindings::view_index].view_from_world[0].z
#endif
```

Using this approach would work but would have the effect of polluting our shaders with ifdef spam basically *everywhere*.

Why not use a function? Unfortunately, the following is not valid wgsl, as it returns a binding directly from a function in the uniform path.

```wgsl
fn get_view() -> View {
#if BINDING_ARRAYS_ARE_USED
    let view_index = push_constants.view_index;
    let view = views[view_index];
#endif
    return view;
}
```

This also poses problems for things like lights, where we want to return a ptr to the light data. Returning ptrs from wgsl functions isn't allowed even if both bindings were buffers.

The next attempt was to simply use indexed buffers everywhere, in both the binding array and non binding array path. This would be viable if push constants were available everywhere to pass the view index, but unfortunately they are not available on webgpu. This means either passing the view index in a storage buffer (not ideal for such a small amount of state) or using push constants sometimes and uniform buffers only on webgpu. However, this kind of conditional layout infects absolutely everything.

Even if we were to accept just using a storage buffer for the view index, there's also the additional problem that some dynamic offsets aren't actually per-view but per-use of a setting on a camera, which would require passing that uniform data on *every* camera regardless of whether that rendering feature is being used, which is also gross.

As such, although it's gross, the simplest solution is just to bump binding arrays into `@group(1)` and move all other bindings up one bind group. This should still bring us under the device limit of 4 for most users.

### Next steps / looking towards the future

I'd like to avoid needing to split our view bind group into multiple parts. In the future, if `wgpu` were to add `@builtin(draw_index)`, we could build a list of draw state in gpu processing and avoid the need for any kind of state change at all (see https://github.com/gfx-rs/wgpu/issues/6823). This would also provide significantly more flexibility to handle things like offsets into other arrays that may not be per-view.

### Testing

Tested a number of examples, there are probably more that are still broken.

---------

Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
6ddd0f16a8
Component lifecycle reorganization and documentation (#19543)
# Objective

I set out with one simple goal: clearly document the differences between each of the component lifecycle events via module docs. Unfortunately, no such module existed: the various lifecycle code was scattered to the wind. Without a unified module, it's very hard to discover the related types, and there's nowhere good to put my shiny new documentation.

## Solution

1. Unify the assorted types into a single `bevy_ecs::component_lifecycle` module.
2. Write docs.
3. Write a migration guide.

## Testing

Thanks CI!

## Follow-up

1. The lifecycle event names are pretty confusing, especially `OnReplace`. We should consider renaming those. No bikeshedding in my PR though!
2. Observers need real module docs too :(
3. Any additional functional changes should be done elsewhere; this is a simple docs and re-org PR.

---------

Co-authored-by: theotherphil <phil.j.ellison@gmail.com>
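Below is a minimal sketch (not from the PR) of reacting to one of these lifecycle events with an observer. It assumes the unified module simply re-exports the existing event types (`OnAdd`, `OnInsert`, `OnReplace`, `OnRemove`), which are also reachable via the prelude; `Health` is a hypothetical component.

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Health(u32);

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Fires whenever a `Health` component is added to any entity.
        .add_observer(|_trigger: Trigger<OnAdd, Health>| {
            info!("a Health component was added");
        })
        .run();
}
```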
b866bb4254
Remove Shader weak_handles from bevy_pbr (excluding meshlets). (#19365)
# Objective

- Related to #19024

## Solution

- Use the new `load_shader_library` macro for the shader libraries and `embedded_asset`/`load_embedded_asset` for the "shader binaries" in `bevy_pbr` (excluding meshlets).

## Testing

- `atmosphere` example still works
- `fog` example still works
- `decal` example still works

P.S. I don't think this needs a migration guide. Technically users could be using the `pub` weak handles, but there's no actual good use for them, so omitting it seems fine. Alternatively, we could mix this in with the migration guide notes for #19137.
7b1c9f192e
Adopt consistent FooSystems naming convention for system sets (#18900)
# Objective

Fixes a part of #14274.

Bevy has an incredibly inconsistent naming convention for its system sets, both internally and across the ecosystem.

<img alt="System sets in Bevy" src="https://github.com/user-attachments/assets/d16e2027-793f-4ba4-9cc9-e780b14a5a1b" width="450" />

*Names of public system set types in Bevy*

Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are also a few `FooSystems` and `FooSet` types. In ecosystem crates on the other hand, `FooSet` is perhaps the most commonly used name in general. Conventions being so wildly inconsistent can make it harder for users to pick names for their own types, to search for system sets on docs.rs, or to even discern which types *are* system sets.

To rein in the inconsistency a bit and help unify the ecosystem, it would be good to establish a common recommended naming convention for system sets in Bevy itself, similar to how plugins are commonly suffixed with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming convention in first-party Bevy, we can softly nudge ecosystem crates to follow suit (for types where it makes sense to do so).

Choosing a naming convention is also relevant now, as the [`bevy_cli` recently adopted lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce naming for plugins and system sets, and the recommended naming used for system sets is still a bit open.

## Which Name To Use?

Now the contentious part: what naming convention should we actually adopt?

This was discussed on the Bevy Discord at the end of last year, starting [here](<https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789>). `FooSet` and `FooSystems` were the clear favorites, with `FooSet` very narrowly winning an unofficial poll. However, it seems to me like the consensus was broadly moving towards `FooSystems` at the end and after the poll, with Cart ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708)) and later Alice ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533)) and also me being in favor of it.

Let's do a quick pros and cons list! Of course these are just what I thought of, so take it with a grain of salt.

`FooSet`:

- Pro: Nice and short!
- Pro: Used by many ecosystem crates.
- Pro: The `Set` suffix comes directly from the trait name `SystemSet`.
- Pro: Pairs nicely with existing APIs like `in_set` and `configure_sets`.
- Con: `Set` by itself doesn't actually indicate that it's related to systems *at all*, apart from the implemented trait. A set of what?
- Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex: `ContactSet`, `MeshSet`, `EnemySet`...

`FooSystems`:

- Pro: Very clearly indicates that the type represents a collection of systems. The actual core concept, system(s), is in the name.
- Pro: Parallels nicely with `FooPlugins` for plugin groups.
- Pro: Low risk of conflicts with other names or misunderstandings about what the type is.
- Pro: In most cases, reads *very* nicely and clearly. Ex: `PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and `AnimationSet`.
- Pro: Easy to search for on docs.rs.
- Con: Usually results in longer names.
- Con: Not yet as widely used.

Really the big problem with `FooSet` is that it doesn't actually describe what it is. It describes what *kind of thing* it is (a set of something), but not *what it is a set of*, unless you know the type or check its docs or implemented traits. `FooSystems` on the other hand is much more self-descriptive in this regard, at the cost of being a bit longer to type.

Ultimately, in some ways it comes down to preference and how you think of system sets. Personally, I was originally in favor of `FooSet`, but have been increasingly on the side of `FooSystems`, especially after seeing what the new names would actually look like in Avian and now Bevy. I prefer it because it usually reads better, is much more clearly related to groups of systems than `FooSet`, and overall *feels* more correct and natural to me in the long term.

For these reasons, and because Alice and Cart also seemed to share a preference for it when it was previously being discussed, I propose that we adopt a `FooSystems` naming convention where applicable.

## Solution

Rename Bevy's system set types to use a consistent `FooSystems` naming where applicable.

- `AccessibilitySystem` → `AccessibilitySystems`
- `GizmoRenderSystem` → `GizmoRenderSystems`
- `PickSet` → `PickingSystems`
- `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems`
- `TransformSystem` → `TransformSystems`
- `RemoteSet` → `RemoteSystems`
- `RenderSet` → `RenderSystems`
- `SpriteSystem` → `SpriteSystems`
- `StateTransitionSteps` → `StateTransitionSystems`
- `RenderUiSystem` → `RenderUiSystems`
- `UiSystem` → `UiSystems`
- `Animation` → `AnimationSystems`
- `AssetEvents` → `AssetEventSystems`
- `TrackAssets` → `AssetTrackingSystems`
- `UpdateGizmoMeshes` → `GizmoMeshSystems`
- `InputSystem` → `InputSystems`
- `InputFocusSet` → `InputFocusSystems`
- `ExtractMaterialsSet` → `MaterialExtractionSystems`
- `ExtractMeshesSet` → `MeshExtractionSystems`
- `RumbleSystem` → `RumbleSystems`
- `CameraUpdateSystem` → `CameraUpdateSystems`
- `ExtractAssetsSet` → `AssetExtractionSystems`
- `Update2dText` → `Text2dUpdateSystems`
- `TimeSystem` → `TimeSystems`
- `AudioPlaySet` → `AudioPlaybackSystems`
- `SendEvents` → `EventSenderSystems`
- `EventUpdates` → `EventUpdateSystems`

A lot of the names got slightly longer, but they are also a lot more consistent, and in my opinion the majority of them read much better. For a few of the names I took the liberty of rewording things a bit; definitely open to any further naming improvements.

There are still also cases where the `FooSystems` naming doesn't really make sense, and those I left alone. This primarily includes system sets like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`, or `TransitionSchedules<S>`, where the type has some special purpose and semantics.

## Todo

- [x] Should I keep all the old names as deprecated type aliases? I can do this, but to avoid wasting work I'd prefer to first reach consensus on whether these renames are even desired.
- [x] Migration guide
- [x] Release notes
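As an illustration of the convention (not code from this PR): a hypothetical user-defined `PhysicsSystems` set, with placeholder systems `apply_gravity` and `integrate`.

```rust
use bevy::prelude::*;

// Ecosystem / user code following the recommended `FooSystems` suffix.
#[derive(SystemSet, Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct PhysicsSystems;

fn apply_gravity() {}
fn integrate() {}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Group the systems under the set so other code can order against it.
        .add_systems(Update, (apply_gravity, integrate).in_set(PhysicsSystems))
        .run();
}
```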
e9a0ef49f9
Rename bevy_platform_support to bevy_platform (#18813)
# Objective

The goal of `bevy_platform_support` is to provide a set of platform agnostic APIs, alongside platform-specific functionality. This is a high traffic crate (providing things like HashMap and Instant). Especially in light of https://github.com/bevyengine/bevy/discussions/18799, it deserves a friendlier / shorter name.

Given that it hasn't had a full release yet, getting this change in before Bevy 0.16 makes sense.

## Solution

- Rename `bevy_platform_support` to `bevy_platform`.
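For downstream code this is a one-line import change; a minimal sketch, assuming the module layout inside the crate is otherwise unchanged:

```rust
// Before the rename:
// use bevy_platform_support::collections::HashMap;

// After the rename:
use bevy_platform::collections::HashMap;

fn main() {
    let mut scores: HashMap<String, u32> = HashMap::default();
    scores.insert("player_one".into(), 42);
}
```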
9b32e09551
bevy_reflect: Add clone registrations project-wide (#18307)
# Objective

Now that #13432 has been merged, it's important we update our reflected types to properly opt into this feature. If we do not, then this could cause issues for users downstream who want to make use of reflection-based cloning.

## Solution

This PR is broken into 4 commits:

1. Add `#[reflect(Clone)]` on all types marked `#[reflect(opaque)]` that are also `Clone`. This is mandatory as these types would otherwise cause the cloning operation to fail for any type that contains it at any depth.
2. Update the reflection example to suggest adding `#[reflect(Clone)]` on opaque types.
3. Add `#[reflect(clone)]` attributes on all fields marked `#[reflect(ignore)]` that are also `Clone`. This prevents the ignored field from causing the cloning operation to fail. Note that some of the types that contain these fields are also `Clone`, and thus can be marked `#[reflect(Clone)]`. This makes the `#[reflect(clone)]` attribute redundant. However, I think it's safer to keep it marked in the case that the `Clone` impl/derive is ever removed. I'm open to removing them, though, if people disagree.
4. Finally, I added `#[reflect(Clone)]` on all types that are also `Clone`. While not strictly necessary, it enables us to reduce the generated output since we can just call `Clone::clone` directly instead of calling `PartialReflect::reflect_clone` on each variant/field. It also means we benefit from any optimizations or customizations made in the `Clone` impl, including directly dereferencing `Copy` values and increasing reference counters. Along with that change I also took the liberty of adding any missing registrations that I saw could be applied to the type as well, such as `Default`, `PartialEq`, and `Hash`. There were hundreds of these to edit, though, so it's possible I missed quite a few.

That last commit is **_massive_**. There were nearly 700 types to update. So it's recommended to review the first three before moving onto that last one. Additionally, I can break the last commit off into its own PR or into smaller PRs, but I figured this would be the easiest way of doing it (and in a timely manner since I unfortunately don't have as much time as I used to for code contributions).

## Testing

You can test locally with a `cargo check`:

```
cargo check --workspace --all-features
```
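A rough sketch of the registration pattern described above; `Seed` and `Settings` are hypothetical types, and the exact set of registrations a real type needs depends on which traits it implements.

```rust
use bevy::reflect::Reflect;

// Commit 1: opaque types that are `Clone` must register `Clone`, otherwise
// `reflect_clone` fails for anything containing them at any depth.
#[derive(Reflect, Clone, Default)]
#[reflect(opaque)]
#[reflect(Clone, Default)]
struct Seed(u64);

// Commit 4: ordinary reflected types can also register `Clone` (plus other
// traits like `Default` and `PartialEq`) so cloning defers to `Clone::clone`.
#[derive(Reflect, Clone, Default, PartialEq)]
#[reflect(Clone, Default, PartialEq)]
struct Settings {
    samples: u32,
}
```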
bc7416aa22
Use 4-byte LightmapSlabIndex for batching instead of 16-byte AssetId<Image> (#18326)
Less data accessed and compared gives better batching performance.

# Objective

- Use a smaller id to represent the lightmap in batch data to enable a faster implementation of draw streams.
- Improve batching performance for 3D sorted render phases.

## Solution

- 3D batching can use `LightmapSlabIndex` (a `NonMaxU32` which is 4 bytes) instead of the lightmap `AssetId<Image>` (an enum where the largest variant is a 16-byte UUID) to discern the ability to batch.

## Testing

Tested main (yellow) vs this PR (red) on an M4 Max using the `many_cubes` example with `WGPU_SETTINGS_PRIO=webgl2` to avoid GPU-preprocessing, and modifying the materials in `many_cubes` to have `AlphaMode::Blend` so that they would rely on the less efficient sorted render phase batching.

<img width="1500" alt="Screenshot_2025-03-15_at_12 17 21" src="https://github.com/user-attachments/assets/14709bd3-6d06-40fb-aa51-e1d2d606ebe3" />

A 44.75us or 7.5% reduction in median execution time of the batch and prepare sorted render phase system for the `Transparent3d` phase (handling 160k cubes).

---

## Migration Guide

- Changed: `RenderLightmap::new()` no longer takes an `AssetId<Image>` argument for the asset id of the lightmap image.
ecccd57417
Generic system config (#17962)
# Objective

Prevents duplicate implementation between IntoSystemConfigs and IntoSystemSetConfigs using a generic, adds a NodeType trait for more config flexibility (opening the door to implementing https://github.com/bevyengine/bevy/issues/14195?).

## Solution

Followed writeup by @ItsDoot: https://hackmd.io/@doot/rJeefFHc1x

Removes IntoSystemConfigs and IntoSystemSetConfigs, instead using IntoNodeConfigs with generics.

## Testing

Pending

---

## Showcase

N/A

## Migration Guide

- `SystemSetConfigs` -> `NodeConfigs<InternedSystemSet>`
- `SystemConfigs` -> `NodeConfigs<ScheduleSystem>`
- `IntoSystemSetConfigs` -> `IntoNodeConfigs<InternedSystemSet, M>`
- `IntoSystemConfigs` -> `IntoNodeConfigs<ScheduleSystem, M>`

---------

Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
989f547080
Weak handle migration (#17695)
# Objective

- Make use of the new `weak_handle!` macro added in https://github.com/bevyengine/bevy/pull/17384

## Solution

- Migrate bevy from `Handle::weak_from_u128` to the new `weak_handle!` macro that takes a random UUID
- Deprecate `Handle::weak_from_u128`, since there are no remaining use cases that can't also be addressed by constructing the type manually

## Testing

- `cargo run -p ci -- test`

---

## Migration Guide

Replace `Handle::weak_from_u128` with `weak_handle!` and a random UUID.
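A sketch of the migration; the constant name, the u128, and the UUID are placeholders (a real migration would generate a fresh random UUID, e.g. with `uuidgen`).

```rust
use bevy::asset::{weak_handle, Handle};
use bevy::render::render_resource::Shader;

// Before: an arbitrary u128 id.
// pub const MY_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(0x0123_4567_89ab_cdef);

// After: the macro takes a UUID string literal instead.
pub const MY_SHADER_HANDLE: Handle<Shader> =
    weak_handle!("8f7e1c2a-3b4d-4e5f-9a6b-7c8d9e0f1a2b");
```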
9bc0ae33c3
Move hashbrown and foldhash out of bevy_utils (#17460)
# Objective

- Contributes to #16877

## Solution

- Moved `hashbrown`, `foldhash`, and related types out of `bevy_utils` and into `bevy_platform_support`
- Refactored the above to match the layout of these types in `std`.
- Updated crates as required.

## Testing

- CI

---

## Migration Guide

- The following items were moved out of `bevy_utils` and into `bevy_platform_support::hash`:
  - `FixedState`
  - `DefaultHasher`
  - `RandomState`
  - `FixedHasher`
  - `Hashed`
  - `PassHash`
  - `PassHasher`
  - `NoOpHash`
- The following items were moved out of `bevy_utils` and into `bevy_platform_support::collections`:
  - `HashMap`
  - `HashSet`
- `bevy_utils::hashbrown` has been removed. Instead, import from `bevy_platform_support::collections` _or_ take a dependency on `hashbrown` directly.
- `bevy_utils::Entry` has been removed. Instead, import from `bevy_platform_support::collections::hash_map` or `bevy_platform_support::collections::hash_set` as appropriate.
- All of the above equally apply to `bevy::utils` and `bevy::platform_support`.

## Notes

- I left `PreHashMap`, `PreHashMapExt`, and `TypeIdMap` in `bevy_utils` as they might be candidates for micro-crating. They can always be moved into `bevy_platform_support` at a later date if desired.
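The import changes for downstream users look roughly like this, using the paths listed in the migration guide:

```rust
// Before:
// use bevy_utils::{Entry, HashMap};

// After:
use bevy_platform_support::collections::hash_map::Entry;
use bevy_platform_support::collections::HashMap;

fn count_words(words: &[&str]) -> HashMap<&str, u32> {
    let mut counts = HashMap::default();
    for word in words {
        // `Entry` works exactly as it did via `bevy_utils`.
        match counts.entry(*word) {
            Entry::Occupied(mut e) => *e.get_mut() += 1,
            Entry::Vacant(e) => {
                e.insert(1);
            }
        }
    }
    counts
}
```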
44ad3bf62b
Move Resource trait to its own file (#17469)
# Objective

`bevy_ecs`'s `system` module is something of a grab bag, and *very* large. This is particularly true for the `system_param` module, which is more than 2k lines long!

While it could be defensible to put `Res` and `ResMut` there (lol no, they're in change_detection.rs, obviously), it doesn't make any sense to put the `Resource` trait there. This is confusing to navigate (and painful to work on and review).

## Solution

- Create a root level `bevy_ecs/resource.rs` module to mirror `bevy_ecs/component.rs`
- Move the `Resource` trait to that module
- Move the `Resource` derive macro to that module as well (Rust really likes when you pun on the names of the derive macro and trait and put them in the same path)
- Fix all of the imports

## Notes to reviewers

- We could probably move more stuff into here, but I wanted to keep this PR as small as possible given the absurd level of import changes.
- This PR is ground work for my upcoming attempts to store resource data on components (resources-as-entities). Splitting this code out will make the work and review a bit easier, and is the sort of overdue refactor that's good to do as part of more meaningful work.

## Testing

cargo build works!

## Migration Guide

`bevy_ecs::system::Resource` has been moved to `bevy_ecs::resource::Resource`.
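For downstream crates the change is just the import path; a minimal sketch with a hypothetical `Score` resource:

```rust
// Before:
// use bevy_ecs::system::Resource;

// After:
use bevy_ecs::resource::Resource;
use bevy_ecs::world::World;

#[derive(Resource, Default)]
struct Score(u32);

fn main() {
    let mut world = World::new();
    world.init_resource::<Score>();
    assert_eq!(world.resource::<Score>().0, 0);
}
```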
bb0a82b9a7
Higher quality bicubic lightmap sampling (#16740)
# Objective

- Closes https://github.com/bevyengine/bevy/issues/14322.

## Solution

- Implement fast 4-sample bicubic filtering based on this Shadertoy https://www.shadertoy.com/view/4df3Dn, with a small speedup from a Ghost of Tsushima presentation.

## Testing

- Did you test these changes? If so, how?
  - Ran on the lightmapped example. Practically no difference in that scene.
- Are there any parts that need more testing?
  - Lightmapping a better scene.

## Changelog

- Lightmaps now have a higher quality bicubic sampling method (off by default).

--------

Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
a371ee3019
Remove tracing re-export from bevy_utils (#17161)
# Objective

- Contributes to #11478

## Solution

- Made `bevy_utils::tracing` `doc(hidden)`
- Re-exported `tracing` from `bevy_log` for end-users
- Added `tracing` directly to crates that need it.

## Testing

- CI

---

## Migration Guide

If you were importing `tracing` via `bevy::utils::tracing`, instead use `bevy::log::tracing`. Note that many items within `tracing` are also directly re-exported from `bevy::log` as well, so you may only need `bevy::log` for the most common items (e.g., `warn!`, `trace!`, etc.). This also applies to the `log_once!` family of macros.

## Notes

- While this doesn't reduce the line-count in `bevy_utils`, it further decouples the internal crates from `bevy_utils`, making its eventual removal more feasible in the future.
- I have just imported `tracing` as we do for all dependencies. However, a workspace dependency may be more appropriate for version management.
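A sketch of the new imports; whether you need the full `tracing` re-export or just the macros from `bevy::log` depends on what you use (the system below is a placeholder).

```rust
// Before:
// use bevy::utils::tracing::{info, info_span};

// After: the most common macros come straight from `bevy::log`...
use bevy::log::{info, info_span, warn_once};
// ...and the full crate re-export is still available when you need more.
use bevy::log::tracing::Level;

fn my_system() {
    let _span = info_span!("my_system").entered();
    info!("doing work at {:?}", Level::INFO);
    warn_once!("this warning is only emitted once");
}
```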
6a4e0c801e
Fix several regressions from recent rendering changes. (#16890)
This commit fixes the following regressions:

1. Transmission-specific calls to shader lighting functions didn't pass the `enable_diffuse` parameter, breaking the `transmission` example.
2. The combination of bindless `StandardMaterial` and bindless lightmaps caused us to blow past the 128 texture limit on M1/M2 chips in some cases, in particular the `depth_of_field` example. https://github.com/gfx-rs/wgpu/issues/3334 should fix this, but in the meantime this patch reduces the number of bindless lightmaps from 16 to 4 in order to stay under the limit.
3. The renderer was crashing on startup on Adreno 610 chips. This PR simply disables bindless on Adreno 610 and lower.
35826be6f7
Implement bindless lightmaps. (#16653)
This commit allows Bevy to bind 16 lightmaps at a time, if the current platform supports bindless textures. Naturally, if bindless textures aren't supported, Bevy falls back to binding only a single lightmap at a time. As lightmaps are usually heavily atlased, I doubt many scenes will use more than 16 lightmap textures.

This has little performance impact now, but it's desirable for us to reap the benefits of multidraw and bindless textures on scenes that use lightmaps. Otherwise, we might have to break batches in order to switch those lightmaps.

Additionally, this PR slightly reduces the cost of binning because it makes the lightmap index in `Opaque3dBinKey` 32 bits instead of an `AssetId`.

## Migration Guide

* The `Opaque3dBinKey::lightmap_image` field is now `Opaque3dBinKey::lightmap_slab`, which is a lightweight identifier for an entire binding array of lightmaps.
40640fdf42
Don't reëxport bevy_image from bevy_render (#16163)
# Objective

Fixes #15940

## Solution

Remove the `pub use` and fix the compile errors. Make `bevy_image` available as `bevy::image`.

## Testing

Feature Frenzy would be good here! Maybe I'll learn how to use it if I have some time this weekend, or maybe a reviewer can use it.

## Migration Guide

Use `bevy_image` instead of `bevy_render::texture` items.

---------

Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
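For most users this is again an import change; a minimal sketch using the `bevy::image` path added by this PR:

```rust
// Before:
// use bevy::render::texture::Image;

// After:
use bevy::image::Image;

fn describe(image: &Image) -> String {
    format!("{}x{} texture", image.width(), image.height())
}
```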
dd812b3e49
Type safe retained render world (#15756)
# Objective

In the Render World, there are a number of collections that are derived from Main World entities and are used to drive rendering. The most notable are:

- `VisibleEntities`, which is generated in the `check_visibility` system and contains visible entities for a view.
- `ExtractedInstances`, which maps entity ids to asset ids.

In the old model, these collections were trivially kept in sync -- any extracted phase item could look itself up because the render entity id was guaranteed to always match the corresponding main world id.

After #15320, this became much more complicated, and was leading to a number of subtle bugs in the Render World. The main rendering systems, i.e. `queue_material_meshes` and `queue_material2d_meshes`, follow a similar pattern:

```rust
for visible_entity in visible_entities.iter::<With<Mesh2d>>() {
    let Some(mesh_instance) = render_mesh_instances.get_mut(visible_entity) else {
        continue;
    };

    // Look some more stuff up and specialize the pipeline...

    let bin_key = Opaque2dBinKey {
        pipeline: pipeline_id,
        draw_function: draw_opaque_2d,
        asset_id: mesh_instance.mesh_asset_id.into(),
        material_bind_group_id: material_2d.get_bind_group_id().0,
    };

    opaque_phase.add(
        bin_key,
        *visible_entity,
        BinnedRenderPhaseType::mesh(mesh_instance.automatic_batching),
    );
}
```

In this case, `visible_entities` and `render_mesh_instances` are both collections that are created and keyed by Main World entity ids, and so this lookup happens to work by coincidence. However, there is a major unintentional bug here: namely, because `visible_entities` is a collection of Main World ids, the phase item being queued is created with a Main World id rather than its correct Render World id.

This happens to not break mesh rendering because the render commands used for drawing meshes do not access the `ItemQuery` parameter, but demonstrates the confusion that is now possible: our UI phase items are correctly being queued with Render World ids while our meshes aren't.

Additionally, this makes it very easy and error prone to use the wrong entity id to look up things like assets. For example, if instead we ignored visibility checks and queued our meshes via a query, we'd have to be extra careful to use `&MainEntity` instead of the natural `Entity`.

## Solution

Make all collections that are derived from Main World data use `MainEntity` as their key, to ensure type safety and avoid accidentally looking up data with the wrong entity id:

```rust
pub type MainEntityHashMap<V> = hashbrown::HashMap<MainEntity, V, EntityHash>;
```

Additionally, we make all `PhaseItem` be able to provide both their Main and Render World ids, to allow render phase implementors maximum flexibility as to what id should be used to look up data.

You can think of this like tracking at the type level whether something in the Render World should use its "primary key", i.e. entity id, or needs to use a foreign key, i.e. `MainEntity`.

## Testing

##### TODO: This will require extensive testing to make sure things didn't break!

Additionally, some extraction logic has become more complicated and needs to be checked for regressions.

## Migration Guide

With the advent of the retained render world, collections that contain references to `Entity` that are extracted into the render world have been changed to contain `MainEntity` in order to prevent errors where a render world entity id is used to look up an item by accident. Custom rendering code may need to be changed to query for `&MainEntity` in order to look up the correct item from such a collection. Additionally, users who implement their own extraction logic for collections of main world entity should strongly consider extracting into a different collection that uses `MainEntity` as a key.

Additionally, render phases now require specifying both the `Entity` and `MainEntity` for a given `PhaseItem`. Custom render phases should ensure `MainEntity` is available when queuing a phase item.
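A sketch of what "use the foreign key" looks like in custom render-world code; the system name is hypothetical, and `MainEntity` is assumed to be importable from `bevy::render::sync_world` as in current Bevy.

```rust
use bevy::prelude::*;
use bevy::render::sync_world::MainEntity;

// Hypothetical render-world system: items extracted from the main world must be
// looked up with the main-world id, not the render-world `Entity`.
fn queue_custom_items(items: Query<(Entity, &MainEntity)>) {
    for (render_entity, main_entity) in &items {
        // `render_entity` is the render world's "primary key";
        // `main_entity` is the foreign key into extracted collections.
        debug!("queueing {render_entity:?} extracted from {main_entity:?}");
    }
}
```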
54006b107b
Migrate meshes and materials to required components (#15524)
# Objective

A big step in the migration to required components: meshes and materials!

## Solution

As per the [selected proposal](https://hackmd.io/@bevy/required_components/%2Fj9-PnF-2QKK0on1KQ29UWQ):

- Deprecate `MaterialMesh2dBundle`, `MaterialMeshBundle`, and `PbrBundle`.
- Add `Mesh2d` and `Mesh3d` components, which wrap a `Handle<Mesh>`.
- Add `MeshMaterial2d<M: Material2d>` and `MeshMaterial3d<M: Material>`, which wrap a `Handle<M>`.
- Meshes *without* a mesh material should be rendered with a default material. The existence of a material is determined by `HasMaterial2d`/`HasMaterial3d`, which is required by `MeshMaterial2d`/`MeshMaterial3d`. This gets around problems with the generics.

Previously:

```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```

Now:

```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```

If the mesh material is missing, previously nothing was rendered. Now, it renders a white default `ColorMaterial` in 2D and a `StandardMaterial` in 3D (this can be overridden). Below, only every other entity has a material.

Why white? This is still open for discussion, but I think white makes sense for a *default* material, while *invalid* asset handles pointing to nothing should have something like a pink material to indicate that something is broken (I don't handle that in this PR yet). This is kind of a mix of Godot and Unity: Godot just renders a white material for non-existent materials, while Unity renders nothing when no materials exist, but renders pink for invalid materials. I can also change the default material to pink if that is preferable though.

## Testing

I ran some 2D and 3D examples to test if anything changed visually. I have not tested all examples or features yet however. If anyone wants to test more extensively, it would be appreciated!

## Implementation Notes

- The relationship between `bevy_render` and `bevy_pbr` is weird here. `bevy_render` needs `Mesh3d` for its own systems, but `bevy_pbr` has all of the material logic, and `bevy_render` doesn't depend on it. I feel like the two crates should be refactored in some way, but I think that's out of scope for this PR.
- I didn't migrate meshlets to required components yet. That can probably be done in a follow-up, as this is already a huge PR.
- It is becoming increasingly clear to me that we really, *really* want to disallow raw asset handles as components. They caused me a *ton* of headache here already, and it took me a long time to find every place that queried for them or inserted them directly on entities, since there were no compiler errors for it. If we don't remove the `Component` derive, I expect raw asset handles to be a *huge* footgun for users as we transition to wrapper components, especially as handles as components have been the norm so far. I personally consider this to be a blocker for 0.15: we need to migrate to wrapper components for asset handles everywhere, and remove the `Component` derive. Also see https://github.com/bevyengine/bevy/issues/14124.

---

## Migration Guide

Asset handles for meshes and mesh materials must now be wrapped in the `Mesh2d` and `MeshMaterial2d` or `Mesh3d` and `MeshMaterial3d` components for 2D and 3D respectively. Raw handles as components no longer render meshes.

Additionally, `MaterialMesh2dBundle`, `MaterialMeshBundle`, and `PbrBundle` have been deprecated. Instead, use the mesh and material components directly.

Previously:

```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```

Now:

```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```

If the mesh material is missing, a white default material is now used. Previously, nothing was rendered if the material was missing.

The `WithMesh2d` and `WithMesh3d` query filter type aliases have also been removed. Simply use `With<Mesh2d>` or `With<Mesh3d>`.

---------

Co-authored-by: Tim Blackbird <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
efda7f3f9c
Simpler lint fixes: makes ci lints work but disables a lint for now (#15376)
Takes the first two commits from #15375 and adds suggestions from this comment: https://github.com/bevyengine/bevy/pull/15375#issuecomment-2366968300

See #15375 for more reasoning/motivation.

## Rebasing (rerunning)

```sh
git switch simpler-lint-fixes
git reset --hard main
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "rustfmt"
cargo clippy --workspace --all-targets --all-features --fix
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "clippy"
git cherry-pick e6c0b94f6795222310fb812fa5c4512661fc7887
```
bc34216929
Pack multiple vertex and index arrays together into growable buffers. (#14257)
This commit uses the [`offset-allocator`] crate to combine vertex and index arrays from different meshes into single buffers. Since the primary source of `wgpu` overhead is from validation and synchronization when switching buffers, this significantly improves Bevy's rendering performance on many scenes.

This patch is a more flexible version of #13218, which also used slabs. Unlike #13218, which used slabs of a fixed size, this commit implements slabs that start small and can grow. In addition to reducing memory usage, supporting slab growth reduces the number of vertex and index buffer switches that need to happen during rendering, leading to improved performance. To prevent pathological fragmentation behavior, slabs are capped to a maximum size, and mesh arrays that are too large get their own dedicated slabs.

As an additional improvement over #13218, this commit allows the application to customize all allocator heuristics. The `MeshAllocatorSettings` resource contains values that adjust the minimum and maximum slab sizes, the cutoff point at which meshes get their own dedicated slabs, and the rate at which slabs grow. Hopefully-sensible defaults have been chosen for each value.

Unfortunately, WebGL 2 doesn't support the *base vertex* feature, which is necessary to pack vertex arrays from different meshes into the same buffer. `wgpu` represents this restriction as the downlevel flag `BASE_VERTEX`. This patch detects that bit and ensures that all vertex buffers get dedicated slabs on that platform. Even on WebGL 2, though, we can combine all *index* arrays into single buffers to reduce buffer changes, and we do so.

The following measurements are on Bistro:

- Overall frame time improves from 8.74 ms to 5.53 ms (1.58x speedup).
- Render system time improves from 6.57 ms to 3.54 ms (1.86x speedup).
- Opaque pass time improves from 4.64 ms to 2.33 ms (1.99x speedup).

## Migration Guide

### Changed

* Vertex and index buffers for meshes may now be packed alongside other buffers, for performance.
* `GpuMesh` has been renamed to `RenderMesh`, to reflect the fact that it no longer directly stores handles to GPU objects.
* Because meshes no longer have their own vertex and index buffers, the responsibility for the buffers has moved from `GpuMesh` (now called `RenderMesh`) to the `MeshAllocator` resource. To access the vertex data for a mesh, use `MeshAllocator::mesh_vertex_slice`. To access the index data for a mesh, use `MeshAllocator::mesh_index_slice`.

[`offset-allocator`]: https://github.com/pcwalton/offset-allocator
11817f4ba4
Generate `MeshUniform`s on the GPU via compute shader where available. (#12773)
Currently, `MeshUniform`s are rather large: 160 bytes. They're also somewhat expensive to compute, because they involve taking the inverse of a 3x4 matrix. Finally, if a mesh is present in multiple views, that mesh will have a separate `MeshUniform` for each and every view, which is wasteful.

This commit fixes these issues by introducing the concept of a *mesh input uniform* and adding a *mesh uniform building* compute shader pass. The `MeshInputUniform` is simply the minimum amount of data needed for the GPU to compute the full `MeshUniform`. Most of this data is just the transform and is therefore only 64 bytes. `MeshInputUniform`s are computed during the *extraction* phase, much like skins are today, in order to avoid needlessly copying transforms around on CPU. (In fact, the render app has been changed to only store the translation of each mesh; it no longer cares about any other part of the transform, which is stored only on the GPU and the main world.) Before rendering, the `build_mesh_uniforms` pass runs to expand the `MeshInputUniform`s to the full `MeshUniform`.

The mesh uniform building pass does the following, all on GPU:

1. Copy the appropriate fields of the `MeshInputUniform` to the `MeshUniform` slot. If a single mesh is present in multiple views, this effectively duplicates it into each view.
2. Compute the inverse transpose of the model transform, used for transforming normals.
3. If applicable, copy the mesh's transform from the previous frame for TAA. To support this, we double-buffer the `MeshInputUniform`s over two frames and swap the buffers each frame. The `MeshInputUniform`s for the current frame contain the index of that mesh's `MeshInputUniform` for the previous frame.

This commit produces wins in virtually every CPU part of the pipeline: `extract_meshes`, `queue_material_meshes`, `batch_and_prepare_render_phase`, and especially `write_batched_instance_buffer` are all faster. Shrinking the amount of CPU data that has to be shuffled around speeds up the entire rendering process.

| Benchmark              | This branch | `main`  | Speedup |
|------------------------|-------------|---------|---------|
| `many_cubes -nfc`      | 17.259      | 24.529  | 42.12%  |
| `many_cubes -nfc -vpi` | 302.116     | 312.123 | 3.31%   |
| `many_foxes`           | 3.227       | 3.515   | 8.92%   |

Because mesh uniform building requires a compute shader, and WebGL 2 has no compute shaders, the existing CPU mesh uniform building code has been left as-is. Many types now have both CPU mesh uniform building and GPU mesh uniform building modes. Developers can opt into the old CPU mesh uniform building by setting the `use_gpu_uniform_builder` option on `PbrPlugin` to `false`.

Below are graphs of the CPU portions of `many_cubes --no-frustum-culling`. Yellow is this branch, red is `main`.

- `extract_meshes`: It's notable that we get a small win even though we're now writing to a GPU buffer.
- `queue_material_meshes`: There's a bit of a regression here; not sure what's causing it. In any case it's very outweighed by the other gains.
- `batch_and_prepare_render_phase`: There's a huge win here, enough to make batching basically drop off the profile.
- `write_batched_instance_buffer`: There's a massive improvement here, as expected. Note that a lot of it simply comes from the fact that `MeshInputUniform` is `Pod`. (This isn't a maintainability problem in my view because `MeshInputUniform` is so simple: just 16 tightly-packed words.)

## Changelog

### Added

* Per-mesh instance data is now generated on GPU with a compute shader instead of CPU, resulting in rendering performance improvements on platforms where compute shaders are supported.

## Migration guide

* Custom render phases now need multiple systems beyond just `batch_and_prepare_render_phase`. Code that was previously creating custom render phases should now add a `BinnedRenderPhasePlugin` or `SortedRenderPhasePlugin` as appropriate instead of directly adding `batch_and_prepare_render_phase`.
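A sketch of opting back into the CPU path, using the option named in this description (field name as given above; everything else is placeholder):

```rust
use bevy::pbr::PbrPlugin;
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(PbrPlugin {
            // Fall back to building `MeshUniform`s on the CPU,
            // e.g. for platforms without compute shaders.
            use_gpu_uniform_builder: false,
            ..default()
        }))
        .run();
}
```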
ab7cbfa8fc
Consolidate Render(Ui)Materials(2d) into RenderAssets (#12827)
# Objective

- Replace `RenderMaterials` / `RenderMaterials2d` / `RenderUiMaterials` with `RenderAssets` to enable implementing changes to one thing, `RenderAssets`, that applies to all use cases rather than duplicating changes everywhere for multiple things that should be one thing.
- Adopts #8149

## Solution

- Make `RenderAsset` generic over the destination type rather than the source type as in #8149
- Use `RenderAssets<PreparedMaterial<M>>` etc for render materials

---

## Changelog

- Changed:
  - The `RenderAsset` trait is now implemented on the destination type. Its `SourceAsset` associated type refers to the type of the source asset.
  - `RenderMaterials`, `RenderMaterials2d`, and `RenderUiMaterials` have been replaced by `RenderAssets<PreparedMaterial<M>>` and similar.

## Migration Guide

- `RenderAsset` is now implemented for the destination type rather than the source asset type. The source asset type is now the `RenderAsset` trait's `SourceAsset` associated type.
01649f13e2
Refactor App and SubApp internals for better separation (#9202)
# Objective

This is a necessary precursor to #9122 (this was split from that PR to reduce the amount of code to review all at once). Moving `!Send` resource ownership to `App` will make it unambiguously `!Send`. `SubApp` must be `Send`, so it can't wrap `App`.

## Solution

Refactor `App` and `SubApp` to not have a recursive relationship. Since `SubApp` no longer wraps `App`, once `!Send` resources are moved out of `World` and into `App`, `SubApp` will become unambiguously `Send`. There could be less code duplication between `App` and `SubApp`, but that would break `App` method chaining.

## Changelog

- `SubApp` no longer wraps `App`.
- `App` fields are no longer publicly accessible.
- `App` can no longer be converted into a `SubApp`.
- Various methods now return references to a `SubApp` instead of an `App`.

## Migration Guide

- To construct a sub-app, use `SubApp::new()`. `App` can no longer convert into `SubApp`.
- If you implemented a trait for `App`, you may want to implement it for `SubApp` as well.
- If you're accessing `app.world` directly, you now have to use `app.world()` and `app.world_mut()`.
- `App::sub_app` now returns `&SubApp`.
- `App::sub_app_mut` now returns `&mut SubApp`.
- `App::get_sub_app` now returns `Option<&SubApp>`.
- `App::get_sub_app_mut` now returns `Option<&mut SubApp>`.
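A small sketch of the accessor-based style after this change:

```rust
use bevy::prelude::*;

fn main() {
    let mut app = App::new();
    app.add_plugins(MinimalPlugins);

    // `app.world` is no longer a public field; use the accessors instead.
    let before = app.world().entities().len();
    app.world_mut().spawn_empty();
    assert_eq!(app.world().entities().len(), before + 1);
}
```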
f9cc91d5a1
Intern mesh vertex buffer layouts so that we don't have to compare them over and over. (#12216)
Although we cached hashes of `MeshVertexBufferLayout`, we were paying the cost of `PartialEq` on `InnerMeshVertexBufferLayout` for every entity, every frame. This patch changes that logic to place `MeshVertexBufferLayout`s in `Arc`s so that they can be compared and hashed by pointer. This results in a 28% speedup in the `queue_material_meshes` phase of `many_cubes`, with frustum culling disabled.

Additionally, this patch contains two minor changes:

1. This commit flattens the specialized mesh pipeline cache to one level of hash tables instead of two. This saves a hash lookup.
2. The example `many_cubes` has been given a `--no-frustum-culling` flag, to aid in benchmarking.

See the Tracy profile:

<img width="1064" alt="Screenshot 2024-02-29 144406" src="https://github.com/bevyengine/bevy/assets/157897/18632f1d-1fdd-4ac7-90ed-2d10306b2a1e">

## Migration guide

* Duplicate `MeshVertexBufferLayout`s are now combined into a single object, `MeshVertexBufferLayoutRef`, which contains an atomically-reference-counted pointer to the layout. Code that was using `MeshVertexBufferLayout` may need to be updated to use `MeshVertexBufferLayoutRef` instead.
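The core idea, sketched with standalone types rather than Bevy's actual ones: once layouts are deduplicated behind an `Arc`, equality and hashing can operate on the pointer instead of the contents.

```rust
use std::hash::{Hash, Hasher};
use std::sync::Arc;

// Stand-in for `InnerMeshVertexBufferLayout` (the expensive-to-compare payload).
#[derive(Debug)]
struct InnerLayout {
    attributes: Vec<u64>,
}

// Stand-in for `MeshVertexBufferLayoutRef`: a shared, interned layout.
#[derive(Clone, Debug)]
struct LayoutRef(Arc<InnerLayout>);

impl PartialEq for LayoutRef {
    fn eq(&self, other: &Self) -> bool {
        // Interned layouts are unique, so pointer identity is enough.
        Arc::ptr_eq(&self.0, &other.0)
    }
}
impl Eq for LayoutRef {}

impl Hash for LayoutRef {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Hash the pointer, not the contents.
        (Arc::as_ptr(&self.0) as usize).hash(state);
    }
}
```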
1c67e020f7
Move EntityHash related types into bevy_ecs (#11498)
# Objective

Reduce the size of `bevy_utils` (https://github.com/bevyengine/bevy/issues/11478)

## Solution

Move `EntityHash` related types into `bevy_ecs`. This also allows us access to `Entity`, which means we no longer need `EntityHashMap`'s first generic argument.

---

## Changelog

- Moved `bevy::utils::{EntityHash, EntityHasher, EntityHashMap, EntityHashSet}` into `bevy::ecs::entity::hash`.
- Removed `EntityHashMap`'s first generic argument. It is now hardcoded to always be `Entity`.

## Migration Guide

- Uses of `bevy::utils::{EntityHash, EntityHasher, EntityHashMap, EntityHashSet}` now have to be imported from `bevy::ecs::entity::hash`.
- Uses of `EntityHashMap` no longer have to specify the first generic parameter. It is now hardcoded to always be `Entity`.
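A sketch of the before/after for user code, using the path given in the migration guide; the function and map contents are placeholders.

```rust
// Before:
// use bevy::utils::EntityHashMap;
// let mut names: EntityHashMap<Entity, String> = EntityHashMap::default();

// After: new import path, and the key type is no longer spelled out.
use bevy::ecs::entity::hash::EntityHashMap;
use bevy::prelude::*;

fn label_entities(world: &mut World) -> EntityHashMap<String> {
    let mut names: EntityHashMap<String> = EntityHashMap::default();
    let entity = world.spawn_empty().id();
    names.insert(entity, "player".to_string());
    names
}
```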
dd14f3a477
Implement lightmaps. (#10231)
# Objective

Lightmaps, textures that store baked global illumination, have been a mainstay of real-time graphics for decades. Bevy currently has no support for them, so this pull request implements them.

## Solution

The new `Lightmap` component can be attached to any entity that contains a `Handle<Mesh>` and a `StandardMaterial`. When present, it will be applied in the PBR shader. Because multiple lightmaps are frequently packed into atlases, each lightmap may have its own UV boundaries within its texture. An `exposure` field is also provided, to control the brightness of the lightmap.

Note that this PR doesn't provide any way to bake the lightmaps. That can be done with [The Lightmapper] or another solution, such as Unity's Bakery.

---

## Changelog

### Added

* A new component, `Lightmap`, is available, for baked global illumination. If your mesh has a second UV channel (UV1), and you attach this component to the entity with that mesh, Bevy will apply the texture referenced in the lightmap.

[The Lightmapper]: https://github.com/Naxela/The_Lightmapper

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
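A minimal sketch of attaching a baked lightmap, written with today's component names (this PR predates `Mesh3d`/`MeshMaterial3d`); the asset paths are placeholders and the `exposure`/UV-rect tuning is omitted.

```rust
use bevy::pbr::Lightmap;
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.spawn((
        // The mesh must have a second UV channel (UV1) holding the baked layout.
        Mesh3d(asset_server.load("models/scene.glb#Mesh0/Primitive0")),
        MeshMaterial3d(materials.add(StandardMaterial::default())),
        Lightmap {
            image: asset_server.load("lightmaps/scene_baked.ktx2"),
            ..default()
        },
    ));
}
```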