# Objective
- Progress towards #19887.
## Solution
- Convert FromWorld implementations into systems.
- Move any resource "manipulation" from `Plugin::finish` to systems.
- Add `after` dependencies to any systems that use these resources (basically
everything that touches `SpritePipeline`); a sketch of the pattern follows below.
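As a rough sketch of the pattern (the struct, fields, and system names below are illustrative, not the actual `bevy_sprite` code):

```rust
use bevy::prelude::*;
use bevy::render::renderer::RenderDevice;

#[derive(Resource)]
struct SpritePipeline {
    // bind group layouts, shader handles, etc.
}

// Before: `impl FromWorld for SpritePipeline { fn from_world(world: &mut World) -> Self { .. } }`
// After: an init system that builds and inserts the resource explicitly; systems
// that read the pipeline are then ordered `.after(init_sprite_pipeline)`.
fn init_sprite_pipeline(mut commands: Commands, _render_device: Res<RenderDevice>) {
    // The real code would use the render device here to create bind group layouts.
    commands.insert_resource(SpritePipeline {});
}
```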
## Testing
- Ran the `sprite` and `mesh2d_manual` examples and they worked.
# Objective
The extraction systems for materials, meshes, and skins previously
iterated over `RemovedComponents<ViewVisibility>` in addition to more
specific variants like `RemovedComponents<MeshMaterial3d<M>>`. This
caused each system to loop through and check many irrelevant despawned
entities—sometimes multiple times. With many material types, this
overhead added up and became noticeable in frames with many despawns.
<img width="1091" alt="Screenshot 2025-02-21 at 10 28 01 AM"
src="https://github.com/user-attachments/assets/63fec1c9-232c-45f6-9150-daf8751ecf85"
/>
## Solution
This PR removes superfluous `RemovedComponents` iteration for
`ViewVisibility` and `GlobalTransform`, ensuring that we only iterate
over the most specific `RemovedComponents` relevant to the system (e.g.,
material components, mesh components). This is guaranteed to match what
the system originally collected.
### Before (red) / After (yellow):
<img width="838" alt="Screenshot 2025-02-21 at 10 46 17 AM"
src="https://github.com/user-attachments/assets/0e06b06d-7e91-4da5-a919-b843eb442a72"
/>
Log plot to highlight the long tail that this PR is addressing.
A few versions ago, wgpu made it possible to set the shader entry point to
`None`, which selects the correct entry point when the file specifies only a
single entry point. This makes it possible to implement
`Default` for pipeline descriptors. This PR does so and attempts to
`..default()` everything possible.
# Objective
- This unblocks some work I am doing for #19887.
## Solution
- Rename `RenderGraphApp` to `RenderGraphExt`.
- Implement `RenderGraphExt` for `World`.
- Change `SubApp` and `App` to call the `World` impl.
# Objective
Closes #18075
In order to enable a number of patterns for dynamic materials in the
engine, it's necessary to decouple the renderer from the `Material`
trait.
This opens the possibility for:
- Materials that aren't coupled to `AsBindGroup`.
- 2d using the underlying 3d bindless infrastructure.
- Dynamic materials that can change their layout at runtime.
- Materials that aren't even backed by a Rust struct at all.
## Solution
In short, remove all trait bounds from render world material systems and
resources. This means moving a bunch of stuff onto `MaterialProperties`
and engaging in some hacks to make specialization work. Rather than
storing the bind group data in `MaterialBindGroupAllocator`, right now
we're storing it in a closure on `MaterialProperties`. TBD if this has
bad performance characteristics.
## Benchmarks
- `many_cubes`:
`cargo run --example many_cubes --release --features=bevy/trace_tracy --
--vary-material-data-per-instance`:

- @DGriffin91's Caldera
`cargo run --release --features=bevy/trace_tracy -- --random-materials`

- @DGriffin91's Caldera with 20 unique material types (i.e.
`MaterialPlugin<M>`) and random materials per mesh
`cargo run --release --features=bevy/trace_tracy -- --random-materials`

### TODO
- We almost certainly lost some parallelization from removing the type
params that could be gained back from smarter iteration.
- Test all the things that could have broken.
- ~Fix meshlets~
## Showcase
See [the
example](https://github.com/bevyengine/bevy/pull/19667/files#diff-9d768cfe1c3aa81eff365d250d3cbe5a63e8df63e81dd85f64c3c3cd993f6d94)
for a custom material implemented without the use of the `Material`
trait and thus `AsBindGroup`.

---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
# Objective
#19410 added support for resizing images "in place" meaning that their
data was copied into the new texture allocation on the CPU. However,
there are some scenarios where an image may be created and populated
entirely on the GPU. Using this method would cause data to disappear, as
it wouldn't be copied into the new texture.
## Solution
When an image is resized in place, if it has no data in its asset,
we'll opt into a new flag `copy_on_resize` which will issue a
`copy_texture_to_texture` command on the old allocation.
To support this, we require passing the old asset to all `RenderAsset`
implementations. This will be generally useful in the future for
reducing things like buffer re-allocations.
## Testing
Tested using the example in the issue.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Improve the performance of `FilteredEntity(Ref|Mut)` and
`Entity(Ref|Mut)Except`.
`FilteredEntityRef` needs an `Access<ComponentId>` to determine what
components it can access. There is one stored in the query state, but
query items cannot borrow from the state, so it has to `clone()` the
access for each row. Cloning the access involves memory allocations and
can be expensive.
## Solution
Let query items borrow from their query state.
Add an `'s` lifetime to `WorldQuery::Item` and `WorldQuery::Fetch`,
similar to the one in `SystemParam`, and provide `&'s Self::State` to
the fetch so that it can borrow from the state.
Unfortunately, there are a few cases where we currently return query
items from temporary query states: the sorted iteration methods create a
temporary state to query the sort keys, and the
`EntityRef::components<Q>()` methods create a temporary state for their
query.
To allow these to continue to work with most `QueryData`
implementations, introduce a new subtrait `ReleaseStateQueryData` that
converts a `QueryItem<'w, 's>` to `QueryItem<'w, 'static>`, and is
implemented for everything except `FilteredEntity(Ref|Mut)` and
`Entity(Ref|Mut)Except`.
`#[derive(QueryData)]` will generate `ReleaseStateQueryData`
implementations that apply when all of the subqueries implement
`ReleaseStateQueryData`.
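For example, a derived `QueryData` whose fields all implement `ReleaseStateQueryData` also gets a `ReleaseStateQueryData` impl; a minimal sketch (the component and query types here are illustrative):

```rust
use bevy::ecs::query::QueryData;
use bevy::prelude::*;

#[derive(Component)]
struct Health(f32);

// Both `Entity` and `&Health` implement `ReleaseStateQueryData`, so the derive
// emits a `ReleaseStateQueryData` impl for `EnemyQuery` as well.
#[derive(QueryData)]
struct EnemyQuery {
    entity: Entity,
    health: &'static Health,
}
```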
This PR does not actually change the implementation of
`FilteredEntity(Ref|Mut)` or `Entity(Ref|Mut)Except`! That will be done
as a follow-up PR so that the changes are easier to review. I have
pushed the changes as chescock/bevy#5.
## Testing
I ran performance traces of many_foxes, both against main and against
chescock/bevy#5, both including #15282. These changes do appear to make
generalized animation a bit faster:
(Red is main, yellow is chescock/bevy#5)

## Migration Guide
The `WorldQuery::Item` and `WorldQuery::Fetch` associated types and the
`QueryItem` and `ROQueryItem` type aliases now have an additional
lifetime parameter corresponding to the `'s` lifetime in `Query`. Manual
implementations of `WorldQuery` will need to update the method
signatures to include the new lifetimes. Other uses of the types will
need to be updated to include a lifetime parameter, although it can
usually be passed as `'_`. In particular, `ROQueryItem` is used when
implementing `RenderCommand`.
Before:
```rust
fn render<'w>(
    item: &P,
    view: ROQueryItem<'w, Self::ViewQuery>,
    entity: Option<ROQueryItem<'w, Self::ItemQuery>>,
    param: SystemParamItem<'w, '_, Self::Param>,
    pass: &mut TrackedRenderPass<'w>,
) -> RenderCommandResult;
```
After:
```rust
fn render<'w>(
    item: &P,
    view: ROQueryItem<'w, '_, Self::ViewQuery>,
    entity: Option<ROQueryItem<'w, '_, Self::ItemQuery>>,
    param: SystemParamItem<'w, '_, Self::Param>,
    pass: &mut TrackedRenderPass<'w>,
) -> RenderCommandResult;
```
---
Methods on `QueryState` that take `&mut self` may now result in
conflicting borrows if the query items capture the lifetime of the
mutable reference. This affects `get()`, `iter()`, and others. To fix
the errors, first call `QueryState::update_archetypes()`, and then
replace a call to `state.foo(world, param)` with
`state.query_manual(world).foo_inner(param)`. Alternately, you may be
able to restructure the code to call `state.query(world)` once and then
make multiple calls using the `Query`.
Before:
```rust
let mut state: QueryState<_, _> = ...;
let d1 = state.get(world, e1);
let d2 = state.get(world, e2); // Error: cannot borrow `state` as mutable more than once at a time
println!("{d1:?}");
println!("{d2:?}");
```
After:
```rust
let mut state: QueryState<_, _> = ...;
state.update_archetypes(world);
let d1 = state.get_manual(world, e1);
let d2 = state.get_manual(world, e2);
// OR
state.update_archetypes(world);
let d1 = state.query(world).get_inner(e1);
let d2 = state.query(world).get_inner(e2);
// OR
let query = state.query(world);
let d1 = query.get_inner(e1);
let d2 = query.get_inner(e2);
println!("{d1:?}");
println!("{d2:?}");
```
# Objective
- Related to #19024
## Solution
- Use the new `load_shader_library` macro for the shader libraries and
`embedded_asset`/`load_embedded_asset` for the "shader binaries" in
`bevy_sprite`.
## Testing
- `sprite` example still works.
- `mesh2d` example still works.
P.S. I don't think this needs a migration guide. Technically users could
be using the `pub` weak handles, but there's no actual good use for
them, so omitting it seems fine. Alternatively, we could mix this in
with the migration guide notes for #19137.
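For reference, a minimal sketch of the embedded-asset pattern (crate name, file path, and function are illustrative; the actual change also uses `load_embedded_asset` at pipeline-creation time):

```rust
use bevy::asset::embedded_asset;
use bevy::prelude::*;
use bevy::render::render_resource::Shader;

struct MyRenderPlugin;

impl Plugin for MyRenderPlugin {
    fn build(&self, app: &mut App) {
        // Register the WGSL file as an embedded asset at compile time.
        embedded_asset!(app, "my_sprite.wgsl");
    }
}

// Wherever the pipeline is built, the shader can be loaded through the
// `embedded://` asset source instead of a weak handle.
fn load_shader(asset_server: &AssetServer) -> Handle<Shader> {
    asset_server.load("embedded://my_crate/my_sprite.wgsl")
}
```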
# Objective
Fixes a part of #14274.
Bevy has an incredibly inconsistent naming convention for its system
sets, both internally and across the ecosystem.
<img alt="System sets in Bevy"
src="https://github.com/user-attachments/assets/d16e2027-793f-4ba4-9cc9-e780b14a5a1b"
width="450" />
*Names of public system set types in Bevy*
Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are
also a few `FooSystems` and `FooSet` types. In ecosystem crates on the
other hand, `FooSet` is perhaps the most commonly used name in general.
Conventions being so wildly inconsistent can make it harder for users to
pick names for their own types, to search for system sets on docs.rs, or
to even discern which types *are* system sets.
To rein in the inconsistency a bit and help unify the ecosystem, it
would be good to establish a common recommended naming convention for
system sets in Bevy itself, similar to how plugins are commonly suffixed
with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming
convention in first-party Bevy, we can softly nudge ecosystem crates to
follow suit (for types where it makes sense to do so).
Choosing a naming convention is also relevant now, as the [`bevy_cli`
recently adopted
lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce
naming for plugins and system sets, and the recommended naming used for
system sets is still a bit open.
## Which Name To Use?
Now the contentious part: what naming convention should we actually
adopt?
This was discussed on the Bevy Discord at the end of last year, starting
[here](<https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789>).
`FooSet` and `FooSystems` were the clear favorites, with `FooSet` very
narrowly winning an unofficial poll. However, it seems to me like the
consensus was broadly moving towards `FooSystems` at the end and after
the poll, with Cart
([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708))
and later Alice
([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533))
and also me being in favor of it.
Let's do a quick pros and cons list! Of course these are just what I
thought of, so take it with a grain of salt.
`FooSet`:
- Pro: Nice and short!
- Pro: Used by many ecosystem crates.
- Pro: The `Set` suffix comes directly from the trait name `SystemSet`.
- Pro: Pairs nicely with existing APIs like `in_set` and
`configure_sets`.
- Con: `Set` by itself doesn't actually indicate that it's related to
systems *at all*, apart from the implemented trait. A set of what?
- Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex:
`ContactSet`, `MeshSet`, `EnemySet`...
`FooSystems`:
- Pro: Very clearly indicates that the type represents a collection of
systems. The actual core concept, system(s), is in the name.
- Pro: Parallels nicely with `FooPlugins` for plugin groups.
- Pro: Low risk of conflicts with other names or misunderstandings about
what the type is.
- Pro: In most cases, reads *very* nicely and clearly. Ex:
`PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and
`AnimationSet`.
- Pro: Easy to search for on docs.rs.
- Con: Usually results in longer names.
- Con: Not yet as widely used.
Really the big problem with `FooSet` is that it doesn't actually
describe what it is. It describes what *kind of thing* it is (a set of
something), but not *what it is a set of*, unless you know the type or
check its docs or implemented traits. `FooSystems` on the other hand is
much more self-descriptive in this regard, at the cost of being a bit
longer to type.
Ultimately, in some ways it comes down to preference and how you think
of system sets. Personally, I was originally in favor of `FooSet`, but
have been increasingly on the side of `FooSystems`, especially after
seeing what the new names would actually look like in Avian and now
Bevy. I prefer it because it usually reads better, is much more clearly
related to groups of systems than `FooSet`, and overall *feels* more
correct and natural to me in the long term.
For these reasons, and because Alice and Cart also seemed to share a
preference for it when it was previously being discussed, I propose that
we adopt a `FooSystems` naming convention where applicable.
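As a concrete illustration of the proposed convention (the set and system below are made up, not Bevy types):

```rust
use bevy::prelude::*;

// A system set named with the `FooSystems` suffix.
#[derive(SystemSet, Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum PhysicsSystems {
    Prepare,
    Integrate,
    Sync,
}

fn integrate_velocities() {}

fn main() {
    App::new()
        .configure_sets(
            Update,
            (
                PhysicsSystems::Prepare,
                PhysicsSystems::Integrate,
                PhysicsSystems::Sync,
            )
                .chain(),
        )
        .add_systems(Update, integrate_velocities.in_set(PhysicsSystems::Integrate))
        .run();
}
```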
## Solution
Rename Bevy's system set types to use a consistent `FooSystems` naming where
applicable.
- `AccessibilitySystem` → `AccessibilitySystems`
- `GizmoRenderSystem` → `GizmoRenderSystems`
- `PickSet` → `PickingSystems`
- `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems`
- `TransformSystem` → `TransformSystems`
- `RemoteSet` → `RemoteSystems`
- `RenderSet` → `RenderSystems`
- `SpriteSystem` → `SpriteSystems`
- `StateTransitionSteps` → `StateTransitionSystems`
- `RenderUiSystem` → `RenderUiSystems`
- `UiSystem` → `UiSystems`
- `Animation` → `AnimationSystems`
- `AssetEvents` → `AssetEventSystems`
- `TrackAssets` → `AssetTrackingSystems`
- `UpdateGizmoMeshes` → `GizmoMeshSystems`
- `InputSystem` → `InputSystems`
- `InputFocusSet` → `InputFocusSystems`
- `ExtractMaterialsSet` → `MaterialExtractionSystems`
- `ExtractMeshesSet` → `MeshExtractionSystems`
- `RumbleSystem` → `RumbleSystems`
- `CameraUpdateSystem` → `CameraUpdateSystems`
- `ExtractAssetsSet` → `AssetExtractionSystems`
- `Update2dText` → `Text2dUpdateSystems`
- `TimeSystem` → `TimeSystems`
- `AudioPlaySet` → `AudioPlaybackSystems`
- `SendEvents` → `EventSenderSystems`
- `EventUpdates` → `EventUpdateSystems`
A lot of the names got slightly longer, but they are also a lot more
consistent, and in my opinion the majority of them read much better. For
a few of the names I took the liberty of rewording things a bit;
definitely open to any further naming improvements.
There are still also cases where the `FooSystems` naming doesn't really
make sense, and those I left alone. This primarily includes system sets
like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`,
or `TransitionSchedules<S>`, where the type has some special purpose and
semantics.
## Todo
- [x] Should I keep all the old names as deprecated type aliases? I can
do this, but to avoid wasting work I'd prefer to first reach consensus
on whether these renames are even desired.
- [x] Migration guide
- [x] Release notes
# Objective
Fixes #18843
## Solution
We need to account for the material being added and removed in the
course of the same frame. We evict the caches first because the entity
will be re-added if it was marked as needing specialization, which
avoids another check on removed components to see if it was "really"
despawned.
Fixes #18809 Fixes #18823
Meshes despawned in `Last` can still be in visible entities if they
were visible as of `PostUpdate`. Sanity check that the mesh actually
exists before we specialize. We still want to unconditionally assume
that the entity is in `EntitySpecializationTicks` as its absence from
that cache would likely suggest another bug.
# Objective
The goal of `bevy_platform_support` is to provide a set of platform
agnostic APIs, alongside platform-specific functionality. This is a high
traffic crate (providing things like HashMap and Instant). Especially in
light of https://github.com/bevyengine/bevy/discussions/18799, it
deserves a friendlier / shorter name.
Given that it hasn't had a full release yet, getting this change in
before Bevy 0.16 makes sense.
## Solution
- Rename `bevy_platform_support` to `bevy_platform`.
# Objective
Fixes #16896 Fixes #17737
## Solution
Adds a new render phase, including all the new cold specialization
patterns, for wireframes. There's a *lot* of regrettable duplication
here between 3d/2d.
## Testing
All the examples.
## Migration Guide
- `WireframePlugin` must now be created with
`WireframePlugin::default()`.
# Objective
Fixes #17872
## Solution
This should have basically no impact on static scenes. We can optimize
more later if anything comes up. Needing to iterate the two level bin is
a bit unfortunate but shouldn't matter for apps that use a single
camera.
# Objective
For materials that aren't being used, or that a visible entity doesn't have
an instance of, we were unnecessarily and constantly checking whether they
needed specialization, saying yes (because the material had never been
specialized for that entity), and failing to look up the material
instance.
## Solution
If an entity doesn't have an instance of the material, it can't possibly
need specialization, so exit early before spending time doing the check.
Fixes #18388.
# Objective
Extract sprites into a `Vec` instead of a `HashMap`.
## Solution
Extract sprites into a `Vec` instead of an `EntityHashMap`.
Add an index into the `Vec` to `Transparent2d`.
Compare both the index and render entity in prepare so there aren't any
collisions.
## Showcase
yellow this PR, red main
```
cargo run --example many_sprites --release --features "trace_tracy"
```
`extract_sprites`
<img width="452" alt="extract_sprites"
src="https://github.com/user-attachments/assets/66c60406-7c2b-4367-907d-4a71d3630296"
/>
`queue_sprites`
<img width="463" alt="queue_sprites"
src="https://github.com/user-attachments/assets/54b903bd-4137-4772-9f87-e10e1e050d69"
/>
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Now that #13432 has been merged, it's important we update our reflected
types to properly opt into this feature. If we do not, then this could
cause issues for users downstream who want to make use of
reflection-based cloning.
## Solution
This PR is broken into 4 commits:
1. Add `#[reflect(Clone)]` on all types marked `#[reflect(opaque)]` that
are also `Clone`. This is mandatory as these types would otherwise cause
the cloning operation to fail for any type that contains it at any
depth.
2. Update the reflection example to suggest adding `#[reflect(Clone)]`
on opaque types.
3. Add `#[reflect(clone)]` attributes on all fields marked
`#[reflect(ignore)]` that are also `Clone`. This prevents the ignored
field from causing the cloning operation to fail.
Note that some of the types that contain these fields are also `Clone`,
and thus can be marked `#[reflect(Clone)]`. This makes the
`#[reflect(clone)]` attribute redundant. However, I think it's safer to
keep it marked in the case that the `Clone` impl/derive is ever removed.
I'm open to removing them, though, if people disagree.
4. Finally, I added `#[reflect(Clone)]` on all types that are also
`Clone`. While not strictly necessary, it enables us to reduce the
generated output since we can just call `Clone::clone` directly instead
of calling `PartialReflect::reflect_clone` on each variant/field. It
also means we benefit from any optimizations or customizations made in
the `Clone` impl, including directly dereferencing `Copy` values and
increasing reference counters.
Along with that change I also took the liberty of adding any missing
registrations that I saw could be applied to the type as well, such as
`Default`, `PartialEq`, and `Hash`. There were hundreds of these to
edit, though, so it's possible I missed quite a few.
That last commit is **_massive_**. There were nearly 700 types to
update. So it's recommended to review the first three before moving onto
that last one.
Additionally, I can break the last commit off into its own PR or into
smaller PRs, but I figured this would be the easiest way of doing it
(and in a timely manner since I unfortunately don't have as much time as
I used to for code contributions).
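As a hedged sketch of what the attributes from commits 1 and 3 look like in practice (the types below are invented for illustration):

```rust
use bevy::reflect::Reflect;

// Commit 1: an opaque type that is `Clone` opts into reflection-based cloning.
#[derive(Reflect, Clone)]
#[reflect(opaque)]
#[reflect(Clone)]
struct ExternalId(u64);

// Commit 3: an ignored field that is `Clone` is marked `#[reflect(clone)]` so that
// cloning the containing type doesn't fail on it.
#[derive(Reflect)]
struct Tracker {
    value: f32,
    #[reflect(ignore, clone)]
    scratch: Vec<u8>,
}
```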
## Testing
You can test locally with a `cargo check`:
```
cargo check --workspace --all-features
```
## Objective
`insert_or_spawn_batch` is due to be deprecated eventually (#15704), and
removing uses internally will make that easier.
## Solution
Replaced internal uses of `insert_or_spawn_batch` with
`try_insert_batch` (the non-panicking variant, since
`insert_or_spawn_batch` didn't panic either).
All of the internal uses are in rendering code. Since retained rendering
was meant to get rid of non-opaque entity IDs, I assume the code was just
using `insert_or_spawn_batch` because `insert_batch` didn't exist and
not because it actually wanted to spawn something. However, I am *not*
confident in my ability to judge rendering code.
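A minimal sketch of the swap (the resource holding the batch is illustrative):

```rust
use bevy::prelude::*;

#[derive(Resource)]
struct PendingTransforms(Vec<(Entity, Transform)>);

fn apply_transforms(mut commands: Commands, pending: Res<PendingTransforms>) {
    // Before: commands.insert_or_spawn_batch(pending.0.clone());
    // After: the non-panicking batch insert.
    commands.try_insert_batch(pending.0.clone());
}
```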
## Objective
Alternative to #18001.
- Now that systems can handle the `?` operator, `get_entity` returning
`Result` would be more useful than `Option`.
- With `get_entity` being more flexible, combined with entity commands
now checking the entity's existence automatically, the panic in `entity`
isn't really necessary.
## Solution
- Changed `Commands::get_entity` to return `Result<EntityCommands,
EntityDoesNotExistError>`.
- Removed panic from `Commands::entity`.
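A hedged sketch of how the new signature composes with fallible systems (the `Target` resource is illustrative):

```rust
use bevy::prelude::*;

#[derive(Resource)]
struct Target(Entity);

// `get_entity` now returns a `Result`, so `?` works directly in fallible systems.
fn despawn_target(mut commands: Commands, target: Res<Target>) -> Result {
    commands.get_entity(target.0)?.despawn();
    Ok(())
}
```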
# Objective
Implements and closes #17515
## Solution
Add `uv_transform` to `ColorMaterial`
## Testing
Created an example similar to `repeated_texture` but for `Mesh2d` and
`MeshMaterial2d<ColorMaterial>`
## Showcase

## Migration Guide
Add `uv_transform` field to constructors of `ColorMaterial`
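For illustration, a constructor updated per the guide might look like this (assuming the field is an `Affine2`, mirroring `StandardMaterial::uv_transform`):

```rust
use bevy::math::Affine2;
use bevy::prelude::*;

fn tiled_material(texture: Handle<Image>) -> ColorMaterial {
    ColorMaterial {
        texture: Some(texture),
        // Repeat the texture twice along each axis.
        uv_transform: Affine2::from_scale(Vec2::splat(2.0)),
        ..default()
    }
}
```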
# Objective
`Eq`/`PartialEq` are currently implemented for `MeshMaterial{2|3}d` only
through the derive macro. Since we don't have perfect derive yet, the
impls are only present for `M: Eq` and `M: PartialEq`. On the other
hand, I want to be able to compare material components for my toy
reactivity project.
## Solution
Switch to manual `Eq`/`PartialEq` impl.
## Testing
Boy I hope this didn't break anything!
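For context, a sketch of what the manual impls amount to (simplified relative to the real component, which wraps a `Handle<M>` where `M: Material`): comparing the handles means `M` itself no longer needs `PartialEq`.

```rust
use bevy::asset::{Asset, Handle};

struct MeshMaterial3d<M: Asset>(pub Handle<M>);

impl<M: Asset> PartialEq for MeshMaterial3d<M> {
    fn eq(&self, other: &Self) -> bool {
        // Handles compare by asset ID, so no `M: PartialEq` bound is required.
        self.0 == other.0
    }
}

impl<M: Asset> Eq for MeshMaterial3d<M> {}
```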
Currently, the specialized pipeline cache maps a (view entity, mesh
entity) tuple to the retained pipeline for that entity. This causes two
problems:
1. Using the view entity is incorrect, because the view entity isn't
stable from frame to frame.
2. Switching the view entity to a `RetainedViewEntity`, which is
necessary for correctness, significantly regresses performance of
`specialize_material_meshes` and `specialize_shadows` because of the
loss of the fast `EntityHash`.
This patch fixes both problems by switching to a *two-level* hash table.
The outer level of the table maps each `RetainedViewEntity` to an inner
table, which maps each `MainEntity` to its pipeline ID and change tick.
Because we loop over views first and, within that loop, loop over
entities visible from that view, we hoist the slow lookup of the view
entity out of the inner entity loop.
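An illustrative sketch of the shape of the new cache (not the actual Bevy types):

```rust
use std::collections::HashMap;

type PipelineId = u32;
type Tick = u32;

// Outer level: one entry per retained view. Inner level: one entry per main-world
// entity visible in that view, holding its cached pipeline ID and change tick.
struct SpecializedPipelineCache<RetainedView, MainEntity> {
    views: HashMap<RetainedView, HashMap<MainEntity, (PipelineId, Tick)>>,
}
```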
Additionally, this patch fixes a bug whereby pipeline IDs were leaked
when removing the view. We still have a problem with leaking pipeline
IDs for deleted entities, but that won't be fixed until the specialized
pipeline cache is retained.
This patch improves performance of the [Caldera benchmark] from 7.8×
faster than 0.14 to 9.0× faster than 0.14, when applied on top of the
global binding arrays PR, #17898.
[Caldera benchmark]: https://github.com/DGriffin91/bevy_caldera_scene
The GPU can fill out many of the fields in `IndirectParametersMetadata`
using information it already has:
* `early_instance_count` and `late_instance_count` are always
initialized to zero.
* `mesh_index` is already present in the work item buffer as the
`input_index` of the first work item in each batch.
This patch moves these fields to a separate buffer, the *GPU indirect
parameters metadata* buffer. That way, it avoids having to write them on
CPU during `batch_and_prepare_binned_render_phase`. This effectively
reduces the number of bits that that function must write per mesh from
160 to 64 (in addition to the 64 bits per mesh *instance*).
Additionally, this PR refactors `UntypedPhaseIndirectParametersBuffers`
to add another layer, `MeshClassIndirectParametersBuffers`, which allows
abstracting over the buffers corresponding to indexed and non-indexed
meshes. This patch doesn't make much use of this abstraction, but
forthcoming patches will, and it's overall a cleaner approach.
This didn't seem to have much of an effect by itself on
`batch_and_prepare_binned_render_phase` time, but subsequent PRs
dependent on this PR yield roughly a 2× speedup.
# Objective
- #17787 removed sweeping of binned render phases from 2D by accident
due to them not using the `BinnedRenderPhasePlugin`.
- Fixes#17885
## Solution
- Schedule `sweep_old_entities` in `QueueSweep` like
`BinnedRenderPhasePlugin` does, but for 2D where that plugin is not
used.
## Testing
Tested with the modified `shader_defs` example in #17885.
Currently, invocations of `batch_and_prepare_binned_render_phase` and
`batch_and_prepare_sorted_render_phase` can't run in parallel because
they write to scene-global GPU buffers. After PR #17698,
`batch_and_prepare_binned_render_phase` started accounting for the
lion's share of the CPU time, causing us to be strongly CPU bound on
scenes like Caldera when occlusion culling was on (because of the
overhead of batching for the Z-prepass). Although I eventually plan to
optimize `batch_and_prepare_binned_render_phase`, we can obtain
significant wins now by parallelizing that system across phases.
This commit splits all GPU buffers that
`batch_and_prepare_binned_render_phase` and
`batch_and_prepare_sorted_render_phase` touches into separate buffers
for each phase so that the scheduler will run those phases in parallel.
At the end of batch preparation, we gather the render phases up into a
single resource with a new *collection* phase. Because we already run
mesh preprocessing separately for each phase in order to make occlusion
culling work, this is actually a cleaner separation. For example, mesh
output indices (the unique ID that identifies each mesh instance on GPU)
are now guaranteed to be sequential starting from 0, which will simplify
the forthcoming work to remove them in favor of the compute dispatch ID.
On Caldera, this brings the frame time down to approximately 9.1 ms with
occlusion culling on.

Currently, we look up each `MeshInputUniform` index in a hash table that
maps the main entity ID to the index every frame. This is inefficient,
cache unfriendly, and unnecessary, as the `MeshInputUniform` index for
an entity remains the same from frame to frame (even if the input
uniform changes). This commit changes the `IndexSet` in the `RenderBin`
to an `IndexMap` that maps the `MainEntity` to `MeshInputUniformIndex`
(a new type that this patch adds for more type safety).
On Caldera with parallel `batch_and_prepare_binned_render_phase`, this
patch improves that function from 3.18 ms to 2.42 ms, a 31% speedup.
Currently, we *sweep*, or remove entities from bins when those entities
became invisible or changed phases, during `queue_material_meshes` and
similar phases. This, however, is wrong, because `queue_material_meshes`
executes once per material type, not once per phase. This could result
in sweeping bins multiple times per phase, which can corrupt the bins.
This commit fixes the issue by moving sweeping to a separate system that
runs after queuing.
This manifested itself as entities appearing and disappearing seemingly
at random.
Closes #17759.
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
# Objective
Because of mesh preprocessing, users cannot rely on
`@builtin(instance_index)` in order to reference external data, as the
instance index is not stable, either from frame to frame or relative to
the total spawn order of mesh instances.
## Solution
Add a user supplied mesh index that can be used for referencing external
data when drawing instanced meshes.
Closes #13373
## Testing
Benchmarked `many_cubes` showing no difference in total frame time.
## Showcase
https://github.com/user-attachments/assets/80620147-aafc-4d9d-a8ee-e2149f7c8f3b
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
https://github.com/bevyengine/bevy/issues/17746
## Solution
- Change `Image.data` from being a `Vec<u8>` to an `Option<Vec<u8>>`
- Added functions to help with creating images
## Testing
- All current tests pass.
- Tested a variety of existing examples to make sure they don't crash
(they don't).
- Tested on Linux x86 64-bit (NixOS); other platforms not covered.
---
## Migration Guide
Code that directly accesses `Image` data will now need to unwrap or
handle the case where no data is provided.
The behaviour of `new_fill` changed slightly, but not in a way that is
likely to affect anything. It no longer panics and will fill the whole
texture instead of leaving black pixels if the data provided does not
evenly divide the size of the image.
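A minimal sketch of handling the new `Option`:

```rust
use bevy::prelude::*;

// `Image::data` is now `Option<Vec<u8>>`, so CPU-side reads must handle the case
// where the image only lives on the GPU.
fn first_pixel_byte(image: &Image) -> Option<u8> {
    image.data.as_ref()?.first().copied()
}
```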
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Didn't remove `WgpuWrapper`. Not sure whether it's still needed.
## Testing
- Tested with the example runner.
- Needs more testing: web (portable atomics?) and DXC.
## Migration Guide
- Bevy has upgraded to [wgpu
v24](https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v2400-2025-01-15).
- When using the DirectX 12 rendering backend, the new priority system
for choosing a shader compiler is as follows:
  - If the `WGPU_DX12_COMPILER` environment variable is set at runtime, it is used.
  - Else if the new `statically-linked-dxc` feature is enabled, a custom version of DXC will be statically linked into your app at compile time.
  - Else Bevy will look in the app's working directory for `dxcompiler.dll` and `dxil.dll` at runtime.
  - Else if they are missing, Bevy will fall back to FXC (not recommended).
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Things were breaking after the cold-specialization changes.
## Solution
`specialize_mesh_materials` must run after
`collect_meshes_for_gpu_building`. Therefore, its placement in the
`PrepareAssets` set didn't make sense (also more generally). To fix, we
put this class of system in ~`PrepareResources`~ `QueueMeshes`, although
it potentially could use a more descriptive location. We may want to
review the placement of `check_views_need_specialization` which is also
currently in `PrepareAssets`.
This PR makes Bevy keep entities in bins from frame to frame if they
haven't changed. This reduces the time spent in `queue_material_meshes`
and related functions to near zero for static geometry. This patch uses
the same change tick technique that #17567 uses to detect when meshes
have changed in such a way as to require re-binning.
In order to quickly find the relevant bin for an entity when that entity
has changed, we introduce a new type of cache, the *bin key cache*. This
cache stores a mapping from main world entity ID to cached bin key, as
well as the tick of the most recent change to the entity. As we iterate
through the visible entities in `queue_material_meshes`, we check the
cache to see whether the entity needs to be re-binned. If it doesn't,
then we mark it as clean in the `valid_cached_entity_bin_keys` bit set.
If it does, then we insert it into the correct bin, and then mark the
entity as clean. At the end, all entities not marked as clean are
removed from the bins.
This patch has a dramatic effect on the rendering performance of most
benchmarks, as it effectively eliminates `queue_material_meshes` from
the profile. Note, however, that it generally simultaneously regresses
`batch_and_prepare_binned_render_phase` by a bit (not by enough to
outweigh the win, however). I believe that's because, before this patch,
`queue_material_meshes` put the bins in the CPU cache for
`batch_and_prepare_binned_render_phase` to use, while with this patch,
`batch_and_prepare_binned_render_phase` must load the bins into the CPU
cache itself.
On Caldera, this reduces the time spent in `queue_material_meshes` from
5+ ms to 0.2ms-0.3ms. Note that benchmarking on that scene is very noisy
right now because of https://github.com/bevyengine/bevy/issues/17535.

# Objective
- Make use of the new `weak_handle!` macro added in
https://github.com/bevyengine/bevy/pull/17384
## Solution
- Migrate bevy from `Handle::weak_from_u128` to the new `weak_handle!`
macro that takes a random UUID
- Deprecate `Handle::weak_from_u128`, since there are no remaining use
cases that can't also be addressed by constructing the type manually
## Testing
- `cargo run -p ci -- test`
---
## Migration Guide
Replace `Handle::weak_from_u128` with `weak_handle!` and a random UUID.
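A sketch of the migration (the shader handle is illustrative and the UUID is just a random value):

```rust
use bevy::asset::{weak_handle, Handle};
use bevy::render::render_resource::Shader;

// Before:
// const MY_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(0x2763_3439_5315_1597);
// After:
const MY_SHADER_HANDLE: Handle<Shader> =
    weak_handle!("7a8f1e4c-9d2b-4c6e-8f3a-5b1d2e9c0a47");
```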
# Cold Specialization
## Objective
An ongoing part of our quest to retain everything in the render world,
cold-specialization aims to cache pipeline specialization so that
pipeline IDs can be recomputed only when necessary, rather than every
frame. This approach reduces redundant work in stable scenes, while
still accommodating scenarios in which materials, views, or visibility
might change, as well as unlocking future optimization work like
retaining render bins.
## Solution
Queue systems are split into a specialization system and queue system,
the former of which only runs when necessary to compute a new pipeline
id. Pipelines are invalidated using a combination of change detection
and ECS ticks.
### The difficulty with change detection
Detecting “what changed” can be tricky because pipeline specialization
depends not only on the entity’s components (e.g., mesh, material, etc.)
but also on which view (camera) it is rendering in. In other words, the
cache key for a given pipeline id is a view entity/render entity pair.
As such, it's not sufficient simply to react to change detection in
order to specialize -- an entity could currently be out of view or could
be rendered in the future in a camera that is currently disabled or hasn't
spawned yet.
### Why ticks?
Ticks ensure correctness by letting us compare the last time a view or
entity was updated against the tick stored with the cached pipeline id.
This ensures that even if an entity was out of view or has never been
seen in a given camera before we can still correctly determine whether
it needs to be re-specialized or not.
## Testing
TODO: Tested a bunch of different examples, need to test more.
## Migration Guide
TODO
- `AssetEvents` has been moved into the `PostUpdate` schedule.
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
*Occlusion culling* allows the GPU to skip the vertex and fragment
shading overhead for objects that can be quickly proved to be invisible
because they're behind other geometry. A depth prepass already
eliminates most fragment shading overhead for occluded objects, but the
vertex shading overhead, as well as the cost of testing and rejecting
fragments against the Z-buffer, is presently unavoidable for standard
meshes. We currently perform occlusion culling only for meshlets. But
other meshes, such as skinned meshes, can benefit from occlusion culling
too in order to avoid the transform and skinning overhead for unseen
meshes.
This commit adapts the same [*two-phase occlusion culling*] technique
that meshlets use to Bevy's standard 3D mesh pipeline when the new
`OcclusionCulling` component, as well as the `DepthPrepass` component,
are present on the camera. It has these steps:
1. *Early depth prepass*: We use the hierarchical Z-buffer from the
previous frame to cull meshes for the initial depth prepass, effectively
rendering only the meshes that were visible in the last frame.
2. *Early depth downsample*: We downsample the depth buffer to create
another hierarchical Z-buffer, this time with the current view
transform.
3. *Late depth prepass*: We use the new hierarchical Z-buffer to test
all meshes that weren't rendered in the early depth prepass. Any meshes
that pass this check are rendered.
4. *Late depth downsample*: Again, we downsample the depth buffer to
create a hierarchical Z-buffer in preparation for the early depth
prepass of the next frame. This step is done after all the rendering, in
order to account for custom phase items that might write to the depth
buffer.
Note that this patch has no effect on the per-mesh CPU overhead for
occluded objects, which remains high for a GPU-driven renderer due to
the lack of `cold-specialization` and retained bins. If
`cold-specialization` and retained bins weren't on the horizon, then a
more traditional approach like potentially visible sets (PVS) or low-res
CPU rendering would probably be more efficient than the GPU-driven
approach that this patch implements for most scenes. However, at this
point the amount of effort required to implement a PVS baking tool or a
low-res CPU renderer would probably be greater than landing
`cold-specialization` and retained bins, and the GPU driven approach is
the more modern one anyway. It does mean that the performance
improvements from occlusion culling as implemented in this patch *today*
are likely to be limited, because of the high CPU overhead for occluded
meshes.
Note also that this patch currently doesn't implement occlusion culling
for 2D objects or shadow maps. Those can be addressed in a follow-up.
Additionally, note that the techniques in this patch require compute
shaders, which excludes support for WebGL 2.
This PR is marked experimental because of known precision issues with
the downsampling approach when applied to non-power-of-two framebuffer
sizes (i.e. most of them). These precision issues can, in rare cases,
cause objects to be judged occluded that in fact are not. (I've never
seen this in practice, but I know it's possible; it tends to be likelier
to happen with small meshes.) As a follow-up to this patch, we desire to
switch to the [SPD-based hi-Z buffer shader from the Granite engine],
which doesn't suffer from these problems, at which point we should be
able to graduate this feature from experimental status. I opted not to
include that rewrite in this patch for two reasons: (1) @JMS55 is
planning on doing the rewrite to coincide with the new availability of
image atomic operations in Naga; (2) to reduce the scope of this patch.
A new example, `occlusion_culling`, has been added. It demonstrates
objects becoming quickly occluded and disoccluded by dynamic geometry
and shows the number of objects that are actually being rendered. Also,
a new `--occlusion-culling` switch has been added to `scene_viewer`, in
order to make it easy to test this patch with large scenes like Bistro.
[*two-phase occlusion culling*]:
https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501
[Aaltonen SIGGRAPH 2015]:
https://www.advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf
[Some literature]:
https://gist.github.com/reduz/c5769d0e705d8ab7ac187d63be0099b5?permalink_comment_id=5040452#gistcomment-5040452
[SPD-based hi-Z buffer shader from the Granite engine]:
https://github.com/Themaister/Granite/blob/master/assets/shaders/post/hiz.comp
## Migration guide
* When enqueuing a custom mesh pipeline, work item buffers are now
created with
`bevy::render::batching::gpu_preprocessing::get_or_create_work_item_buffer`,
not `PreprocessWorkItemBuffers::new`. See the
`specialized_mesh_pipeline` example.
## Showcase
Occlusion culling example:

Bistro zoomed out, before occlusion culling:

Bistro zoomed out, after occlusion culling:

In this scene, occlusion culling reduces the number of meshes Bevy has
to render from 1591 to 585.
This commit makes Bevy use change detection to only update
`RenderMaterialInstances` and `RenderMeshMaterialIds` when meshes have
been added, changed, or removed. `extract_mesh_materials`, the system
that extracts these, now follows the pattern that
`extract_meshes_for_gpu_building` established.
This improves frame time of `many_cubes` from 3.9ms to approximately
3.1ms, which slightly surpasses the performance of Bevy 0.14.
(Resubmitted from #16878 to clean up history.)

---------
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This commit allows Bevy to use `multi_draw_indirect_count` for drawing
meshes. The `multi_draw_indirect_count` feature works just like
`multi_draw_indirect`, but it takes the number of indirect parameters
from a GPU buffer rather than specifying it on the CPU.
Currently, the CPU constructs the list of indirect draw parameters with
the instance count for each batch set to zero, uploads the resulting
buffer to the GPU, and dispatches a compute shader that bumps the
instance count for each mesh that survives culling. Unfortunately, this
is inefficient when we support `multi_draw_indirect_count`. Draw
commands corresponding to meshes for which all instances were culled
will remain present in the list when calling
`multi_draw_indirect_count`, causing overhead. Proper use of
`multi_draw_indirect_count` requires eliminating these empty draw
commands.
To address this inefficiency, this PR makes Bevy fully construct the
indirect draw commands on the GPU instead of on the CPU. Instead of
writing instance counts to the draw command buffer, the mesh
preprocessing shader now writes them to a separate *indirect metadata
buffer*. A second compute dispatch known as the *build indirect
parameters* shader runs after mesh preprocessing and converts the
indirect draw metadata into actual indirect draw commands for the GPU.
The build indirect parameters shader operates on a batch at a time,
rather than an instance at a time, and as such each thread writes only 0
or 1 indirect draw parameters, simplifying the current logic in
`mesh_preprocessing`, which currently has to have special cases for the
first mesh in each batch. The build indirect parameters shader emits
draw commands in a tightly packed manner, enabling maximally efficient
use of `multi_draw_indirect_count`.
Along the way, this patch switches mesh preprocessing to dispatch one
compute invocation per render phase per view, instead of dispatching one
compute invocation per view. This is preparation for two-phase occlusion
culling, in which we will have two mesh preprocessing stages. In that
scenario, the first mesh preprocessing stage must only process opaque
and alpha tested objects, so the work items must be separated into those
that are opaque or alpha tested and those that aren't. Thus this PR
splits out the work items into a separate buffer for each phase. As this
patch rewrites so much of the mesh preprocessing infrastructure, it was
simpler to just fold the change into this patch instead of deferring it
to the forthcoming occlusion culling PR.
Finally, this patch changes mesh preprocessing so that it runs
separately for indexed and non-indexed meshes. This is because draw
commands for indexed and non-indexed meshes have different sizes and
layouts. *The existing code is actually broken for non-indexed meshes*,
as it attempts to overlay the indirect parameters for non-indexed meshes
on top of those for indexed meshes. Consequently, right now the
parameters will be read incorrectly when multiple non-indexed meshes are
multi-drawn together. *This is a bug fix* and, as with the change to
dispatch phases separately noted above, was easiest to include in this
patch as opposed to separately.
## Migration Guide
* Systems that add custom phase items now need to populate the indirect
drawing-related buffers. See the `specialized_mesh_pipeline` example for
an example of how this is done.
We won't be able to retain render phases from frame to frame if the keys
are unstable. It's not as simple as simply keying off the main world
entity, however, because some main world entities extract to multiple
render world entities. For example, directional lights extract to
multiple shadow cascades, and point lights extract to one view per
cubemap face. Therefore, we key off a new type, `RetainedViewEntity`,
which contains the main entity plus a *subview ID*.
This is part of the preparation for retained bins.
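Roughly, the new key looks like this (an illustrative sketch, not the exact Bevy definition):

```rust
use bevy::prelude::Entity;

// A stable key for a render view: the main-world entity that owns the view, plus a
// subview index distinguishing e.g. shadow cascades or cubemap faces of one light.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct RetainedViewEntity {
    main_entity: Entity,
    subview_index: u32,
}
```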
---------
Co-authored-by: ickshonpe <david.curthoys@googlemail.com>
# Objective
Many instances of `clippy::too_many_arguments` linting happen to be on
systems - functions which we don't call manually, and thus there's not
much reason to worry about the argument count.
## Solution
Allow `clippy::too_many_arguments` globally, and remove all lint
attributes related to it.
# Objective
I never realized `clippy::type_complexity` was an allowed lint - I've
been assuming it'd generate a warning when performing my linting PRs.
## Solution
Removes any instances of `#[allow(clippy::type_complexity)]` and
`#[expect(clippy::type_complexity)]`
## Testing
`cargo clippy` ran without errors or warnings.
I broke the commit history on the other one,
https://github.com/bevyengine/bevy/pull/17160. Woops.
# Objective
- https://github.com/bevyengine/bevy/issues/17111
## Solution
Set the `clippy::allow_attributes` and
`clippy::allow_attributes_without_reason` lints to `deny`, and bring
`bevy_sprite` in line with the new restrictions.
## Testing
`cargo clippy` and `cargo test --package bevy_sprite` were run, and no
errors were encountered.
Currently, our batchable binned items are stored in a hash table that
maps bin key, which includes the batch set key, to a list of entities.
Multidraw is handled by sorting the bin keys and accumulating adjacent
bins that can be multidrawn together (i.e. have the same batch set key)
into multidraw commands during `batch_and_prepare_binned_render_phase`.
This is reasonably efficient right now, but it will complicate future
work to retain indirect draw parameters from frame to frame. Consider
what must happen when we have retained indirect draw parameters and the
application adds a bin (i.e. a new mesh) that shares a batch set key
with some pre-existing meshes. (That is, the new mesh can be multidrawn
with the pre-existing meshes.) To be maximally efficient, our goal in
that scenario will be to update *only* the indirect draw parameters for
the batch set (i.e. multidraw command) containing the mesh that was
added, while leaving the others alone. That means that we have to
quickly locate all the bins that belong to the batch set being modified.
In the existing code, we would have to sort the list of bin keys so that
bins that can be multidrawn together become adjacent to one another in
the list. Then we would have to do a binary search through the sorted
list to find the location of the bin that was just added. Next, we would
have to widen our search to adjacent indexes that contain the same batch
set, doing expensive comparisons against the batch set key every time.
Finally, we would reallocate the indirect draw parameters and update the
stored pointers to the indirect draw parameters that the bins store.
By contrast, it'd be dramatically simpler if we simply changed the way
bins are stored to first map from batch set key (i.e. multidraw command)
to the bins (i.e. meshes) within that batch set key, and then from each
individual bin to the mesh instances. That way, the scenario above in
which we add a new mesh will be simpler to handle. First, we will look
up the batch set key corresponding to that mesh in the outer map to find
an inner map corresponding to the single multidraw command that will
draw that batch set. We will know how many meshes the multidraw command
is going to draw by the size of that inner map. Then we simply need to
reallocate the indirect draw parameters and update the pointers to those
parameters within the bins as necessary. There will be no need to do any
binary search or expensive batch set key comparison: only a single hash
lookup and an iteration over the inner map to update the pointers.
This patch implements the above technique. Because we don't have
retained bins yet, this PR provides no performance benefits. However, it
opens the door to maximally efficient updates when only a small number
of meshes change from frame to frame.
The main churn that this patch causes is that the *batch set key* (which
uniquely specifies a multidraw command) and *bin key* (which uniquely
specifies a mesh *within* that multidraw command) are now separate,
instead of the batch set key being embedded *within* the bin key.
In order to isolate potential regressions, I think that at least #16890,
#16836, and #16825 should land before this PR does.
## Migration Guide
* The *batch set key* is now separate from the *bin key* in
`BinnedPhaseItem`. The batch set key is used to collect multidrawable
meshes together. If you aren't using the multidraw feature, you can
safely set the batch set key to `()`.
# Objective
- Contributes to #11478
## Solution
- Made `bevy_utils::tracing` `doc(hidden)`
- Re-exported `tracing` from `bevy_log` for end-users
- Added `tracing` directly to crates that need it.
## Testing
- CI
---
## Migration Guide
If you were importing `tracing` via `bevy::utils::tracing`, instead use
`bevy::log::tracing`. Note that many items within `tracing` are also
directly re-exported from `bevy::log` as well, so you may only need
`bevy::log` for the most common items (e.g., `warn!`, `trace!`, etc.).
This also applies to the `log_once!` family of macros.
## Notes
- While this doesn't reduce the line-count in `bevy_utils`, it further
decouples the internal crates from `bevy_utils`, making its eventual
removal more feasible in the future.
- I have just imported `tracing` as we do for all dependencies. However,
a workspace dependency may be more appropriate for version management.
Derived `Default` for all public unit structs that already derive
`Component`. This allows them to be used more easily as required
components.
To avoid clutter in tests/examples, only public components were
affected, but this could easily be expanded to affect all unit
components.
Fixes #17052.
This commit makes the following changes:
* `IndirectParametersBuffer` has been changed from a `BufferVec` to a
`RawBufferVec`. This won about 20us or so on Bistro by avoiding `encase`
overhead.
* The methods on the `GetFullBatchData` trait no longer have the
`entity` parameter, as it was unused.
* `PreprocessWorkItem`, which specifies a transform-and-cull operation,
now supplies the mesh instance uniform output index directly instead of
having the shader look it up from the indirect draw parameters.
Accordingly, the responsibility of writing the output index to the
indirect draw parameters has been moved from the CPU to the GPU. This is
in preparation for retained indirect instance draw commands, where the
mesh instance uniform output index may change from frame to frame, while
the indirect instance draw commands will be cached. We won't want the
CPU to have to upload the same indirect draw parameters again and again
if a batch didn't change from frame to frame.
* `batch_and_prepare_binned_render_phase` and
`batch_and_prepare_sorted_render_phase` now allocate indirect draw
commands for an entire batch set at a time when possible, instead of one
batch at a time. This change will allow us to retain the indirect draw
commands for whole batch sets.
* `GetFullBatchData::get_batch_indirect_parameters_index` has been
replaced with `GetFullBatchData::write_batch_indirect_parameters`, which
takes an offset and writes into it instead of allocating. This is
necessary in order to use the optimization mentioned in the previous
point.
* At the WGSL level, `IndirectParameters` has been factored out into
`mesh_preprocess_types.wgsl`. This is because we'll need a new compute
shader that zeroes out the instance counts in preparation for a new
frame. That shader will need to access `IndirectParameters`, so it was
moved to a separate file.
* Bins are no longer raw vectors but are instances of a separate type,
`RenderBin`. This is so that the bin can eventually contain its retained
batches.