PR #17898 regressed this, causing much of #17970. This commit fixes the
issue by freeing and reallocating materials in the
`MaterialBindGroupAllocator` on change. Note that more efficiency is
possible, but I opted for the simple approach because (1) we should fix
this bug ASAP; (2) I'd like #17965 to land first, because that unlocks
the biggest potential optimization, which is not recreating the bind
group if it isn't necessary to do so.
We might not be able to prepare a material on the first frame we
encounter a mesh using it for various reasons, including that the
material hasn't been loaded yet or that preparing the material is
exceeding the per-frame cap on number of bytes to load. When this
happens, we currently try to find the material in the
`MaterialBindGroupAllocator`, fail, and then fall back to group 0, slot
0, the default `MaterialBindGroupId`, which is obviously incorrect.
Worse, we then fail to dirty the mesh and reextract it when we *do*
finish preparing the material, so the mesh will continue to be rendered
with an incorrect material.
This patch fixes both problems. In `collect_meshes_for_gpu_building`, if
we fail to find a mesh's material in the `MaterialBindGroupAllocator`, then
we detect that case, bail out, and add the mesh to a list,
`MeshesToReextractNextFrame`. On subsequent frames, we process all the
meshes in `MeshesToReextractNextFrame` as though they were changed. This
ensures both that we don't render a mesh if its material hasn't been
loaded and that we start rendering the mesh once its material does load.
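A minimal sketch of the bookkeeping this implies (type and field names here are illustrative, not the exact ones in the patch; the real list stores main-world entities):

```rust
use bevy_ecs::prelude::*;

/// Hypothetical sketch: entities whose materials weren't ready this frame and
/// therefore must be treated as "changed" next frame.
#[derive(Resource, Default)]
pub struct MeshesToReextractNextFrame(Vec<Entity>);

fn collect_meshes_for_gpu_building_sketch(
    mut to_reextract: ResMut<MeshesToReextractNextFrame>,
    // ... other parameters elided ...
) {
    // At the start of the frame, re-queue everything we deferred last frame
    // as though it had changed, then clear the list.
    let deferred: Vec<Entity> = std::mem::take(&mut to_reextract.0);
    for _entity in deferred {
        // ... process the mesh exactly like a changed mesh ...
    }

    // Later, while processing a mesh: if its material isn't in the
    // `MaterialBindGroupAllocator` yet, bail out and try again next frame.
    // if material_bind_group_allocator.get(material_id).is_none() {
    //     to_reextract.0.push(entity);
    //     return;
    // }
}
```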
This was first noticed in the intermittent Pixel Eagle failures in the
`testbed_3d` patch in #17898, although the problem has actually existed
for some time. I believe it just so happened that the changes to the
allocator in that PR caused the problem to appear more commonly than it
did before.
This patch fixes two bugs in the new non-bindless material allocator
that landed in PR #17898:
1. A debug assertion to prevent double frees had been flipped: we
checked to see whether the slot was empty before freeing, while we
should have checked to see whether the slot was full.
2. The non-bindless allocator returned `None` when querying a slab that
hadn't been prepared yet instead of returning a handle to that slab.
This resulted in a 1-frame delay when modifying materials. In the
`animated_material` example, the meshes never showed up at all, because
that example changes every material every frame.
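For illustration, a hedged sketch of the first fix, assuming a slot layout roughly like the following (not the actual allocator code):

```rust
struct Slab<M> {
    slots: Vec<Option<M>>,
}

impl<M> Slab<M> {
    fn free(&mut self, index: usize) {
        // Before: `debug_assert!(self.slots[index].is_none())`, which fired on
        // every legitimate free. The assertion must check that the slot is
        // *full* before freeing it, so that it catches double frees instead.
        debug_assert!(self.slots[index].is_some());
        self.slots[index] = None;
    }
}
```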
Together with #17979, this patch locally fixes the problems with
`animated_material` on macOS that were reported in #17970.
Two-phase occlusion culling can be helpful for shadow maps just as it
can for a prepass, in order to reduce vertex and alpha mask fragment
shading overhead. This patch implements occlusion culling for shadow
maps from directional lights, when the `OcclusionCulling` component is
present on the entities containing the lights. Shadow maps from point
lights are deferred to a follow-up patch. Much of this patch involves
expanding the hierarchical Z-buffer to cover shadow maps in addition to
standard view depth buffers.
The `scene_viewer` example has been updated to add `OcclusionCulling` to
the directional light that it creates.
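Opting a directional light's shadow maps into occlusion culling looks roughly like this (a usage sketch; the import path for `OcclusionCulling` is assumed):

```rust
use bevy::prelude::*;
use bevy::render::experimental::occlusion_culling::OcclusionCulling;

fn setup(mut commands: Commands) {
    // Adding `OcclusionCulling` to the light entity enables two-phase
    // occlusion culling for the shadow maps rendered from this light.
    commands.spawn((
        DirectionalLight {
            shadows_enabled: true,
            ..default()
        },
        OcclusionCulling,
    ));
}
```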
This improved the performance of the rend3 sci-fi test scene when
enabling shadows.
Currently, Bevy's implementation of bindless resources is rather
unusual: every binding in an object that implements `AsBindGroup` (most
commonly, a material) becomes its own separate binding array in the
shader. This is inefficient for two reasons:
1. If multiple materials reference the same texture or other resource,
the reference to that resource will be duplicated many times. This
increases `wgpu` validation overhead.
2. It creates many unused binding array slots. This increases `wgpu` and
driver overhead and makes it easier to hit limits on APIs that `wgpu`
currently imposes tight resource limits on, like Metal.
This PR fixes these issues by switching Bevy to use the standard
approach in GPU-driven renderers, in which resources are de-duplicated
and passed as global arrays, one for each type of resource.
Along the way, this patch introduces per-platform resource limits and
bumps them from 16 resources per binding array to 64 resources per bind
group on Metal and 2048 resources per bind group on other platforms.
(Note that the number of resources per *binding array* isn't the same as
the number of resources per *bind group*; as it currently stands, if all
the PBR features are turned on, Bevy could pack as many as 496 resources
into a single slab.) The limits have been increased because `wgpu` now
has universal support for partially-bound binding arrays, which means
that we no longer need to fill the binding arrays with fallback
resources on Direct3D 12. The `#[bindless(LIMIT)]` declaration when
deriving `AsBindGroup` can now simply be written `#[bindless]` in order
to have Bevy choose a default limit size for the current platform.
Custom limits are still available with the new
`#[bindless(limit(LIMIT))]` syntax: e.g. `#[bindless(limit(8))]`.
The material bind group allocator has been completely rewritten. Now
there are two allocators: one for bindless materials and one for
non-bindless materials. The new non-bindless material allocator simply
maintains a 1:1 mapping from material to bind group. The new bindless
material allocator maintains a list of slabs and allocates materials
into slabs on a first-fit basis. This unfortunately makes its
performance O(number of resources per object * number of slabs), but the
number of slabs is likely to be low, and it's planned to become even
lower in the future with `wgpu` improvements. Resources are
de-duplicated within a slab and reference counted. So, for instance, if
multiple materials refer to the same texture, that texture will exist
only once in the appropriate binding array.
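A simplified sketch of the first-fit, reference-counted scheme described above (illustrative only; the real allocator tracks considerably more state and assumes `resource_ids` contains no duplicates):

```rust
use std::collections::HashMap;

/// One bindless slab: a fixed number of binding-array slots, with resources
/// de-duplicated and reference counted within the slab.
struct Slab {
    capacity: usize,
    used: usize,
    // Maps a resource ID to (slot index, reference count).
    resources: HashMap<u64, (u32, u32)>,
}

impl Slab {
    fn try_insert(&mut self, resource_ids: &[u64]) -> bool {
        // Count only resources not already present in this slab.
        let new = resource_ids
            .iter()
            .filter(|id| !self.resources.contains_key(*id))
            .count();
        if self.used + new > self.capacity {
            return false;
        }
        for id in resource_ids {
            let next_slot = self.resources.len() as u32;
            let entry = self.resources.entry(*id).or_insert((next_slot, 0));
            entry.1 += 1; // bump the reference count
        }
        self.used += new;
        true
    }
}

/// First fit: try each existing slab in order, creating a new one if needed.
fn allocate(slabs: &mut Vec<Slab>, resource_ids: &[u64], capacity: usize) -> usize {
    for (index, slab) in slabs.iter_mut().enumerate() {
        if slab.try_insert(resource_ids) {
            return index;
        }
    }
    let mut slab = Slab { capacity, used: 0, resources: HashMap::new() };
    slab.try_insert(resource_ids);
    slabs.push(slab);
    slabs.len() - 1
}
```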
To support these new features, this patch adds the concept of a
*bindless descriptor* to the `AsBindGroup` trait. The bindless
descriptor allows the material bind group allocator to probe the layout
of the material, now that an array of `BindGroupLayoutEntry` records is
insufficient to describe the group. The `#[derive(AsBindGroup)]` has
been heavily modified to support the new features. The most important
user-facing change to that macro is that the struct-level `uniform`
attribute, `#[uniform(BINDING_NUMBER, StandardMaterial)]`, now reads
`#[uniform(BINDLESS_INDEX, MATERIAL_UNIFORM_TYPE,
binding_array(BINDING_NUMBER))]`, allowing the material to specify the
binding number for the binding array that holds the uniform data.
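Putting the pieces together, the attribute shapes on a custom material look roughly like this. This is only a hedged sketch of the syntax described above: the uniform conversion and the `Material` impl that a real material needs are elided, and the binding indices are illustrative.

```rust
use bevy::prelude::*;
use bevy::render::render_resource::{AsBindGroup, ShaderType};

// `#[bindless]` with no argument picks the per-platform default slab size;
// `#[bindless(limit(8))]` would request a custom limit instead. The
// struct-level `uniform` attribute now names the bindless index, the uniform
// type, and the binding number of the binding array holding the uniform data.
#[derive(Asset, TypePath, AsBindGroup, Clone)]
#[uniform(0, MyMaterialUniform, binding_array(10))]
#[bindless]
struct MyMaterial {
    color: LinearRgba,
    #[texture(1)]
    #[sampler(2)]
    color_texture: Option<Handle<Image>>,
}

// Plain-old-data uniform packed into the binding array; the conversion from
// `MyMaterial` is elided in this sketch.
#[derive(Clone, Default, ShaderType)]
struct MyMaterialUniform {
    color: LinearRgba,
}
```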
To make this patch simpler, I removed support for bindless
`ExtendedMaterial`s, as well as field-level bindless uniform and storage
buffers. I intend to add back support for these as a follow-up. Because
they aren't in any released Bevy version yet, I figured this was OK.
Finally, this patch updates `StandardMaterial` for the new bindless
changes. Generally, code throughout the PBR shaders that looked like
`base_color_texture[slot]` now looks like
`bindless_2d_textures[material_indices[slot].base_color_texture]`.
This patch fixes a system hang that I experienced on the [Caldera test]
when running with `caldera --random-materials --texture-count 100`. The
time per frame is around 19.75 ms, down from 154.2 ms in Bevy 0.14: a
7.8× speedup.
[Caldera test]: https://github.com/DGriffin91/bevy_caldera_scene
Deferred rendering currently doesn't support occlusion culling. This PR
implements it in a straightforward way, mirroring what we already do for
the non-deferred pipeline.
On the rend3 sci-fi base test scene, this resulted in roughly a 2×
speedup when applied on top of my other patches. For that scene, it was
useful to add another option to the scene viewer, `--add-light`, which
forces the addition of a shadow-casting light; that option is included in
this patch.
This commit restructures the multidrawable batch set builder for better
performance in various ways:
* The bin traversal is optimized to make the best use of the CPU cache.
* The inner loop that iterates over the bins, which is the hottest part
of `batch_and_prepare_binned_render_phase`, has been shrunk as small as
possible.
* Where possible, multiple elements are added to or reserved from GPU
buffers as a batch instead of one at a time.
* Methods that LLVM wasn't inlining have been marked `#[inline]` where
doing so would unlock optimizations.
This code has also been refactored to avoid duplication between the
logic for indexed and non-indexed meshes via the introduction of a
`MultidrawableBatchSetPreparer` object.
Together, this improved the `batch_and_prepare_binned_render_phase` time
on Caldera by approximately 2×.
Eventually, we should optimize the batchable-but-not-multidrawable and
unbatchable logic as well, but these meshes are much rarer, so in the
interests of keeping this patch relatively small I opted to leave those
to a follow-up.
# Objective
Fix incorrect mesh culling where objects (particularly directional
shadows) were being incorrectly culled during the early preprocessing
phase. The issue manifested specifically on Apple M1 GPUs but not on
newer devices like the M4. The bug was in the
`view_frustum_intersects_obb` function, where including the w component
(plane distance) in the dot product calculations led to false positive
culling results. This caused objects to be incorrectly culled before
shadow casting could begin.
## Issue Details
The problem of missing shadows is reproducible on Apple M1 GPUs as of
this commit (bisected):
```
00722b8d0 Make indirect drawing opt-out instead of opt-in, enabling multidraw by default. (#16757)
```
and as recent as this commit:
```
c818c9214 Add option to animate materials in many_cubes (#17927)
```
- The frustum culling calculation incorrectly included the w component
(plane distance) when transforming basis vectors
- The relative radius calculation should only consider directional
transformation (xyz), not positional information (w)
- This caused false positive culling specifically on M1 devices likely
due to different device-specific floating-point behavior
- When objects were incorrectly culled, `early_instance_count` never
incremented, leading to missing geometry in the shadow pass
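The corrected computation mirrors the existing Rust implementation linked under Testing below: only the xyz components of the transformed basis vectors participate in the relative-radius dot products. Roughly, as a sketch using glam types:

```rust
use bevy::math::{Mat3A, Vec3A};

/// Relative radius of an OBB with the given half-extents along a plane
/// normal, after transforming the local axes by `world_from_local`.
/// Only the directional (xyz) part of each transformed axis is used;
/// including the w component (the plane distance) here is what produced the
/// false-positive culling.
fn relative_radius(p_normal: Vec3A, half_extents: Vec3A, world_from_local: &Mat3A) -> f32 {
    Vec3A::new(
        p_normal.dot(world_from_local.x_axis),
        p_normal.dot(world_from_local.y_axis),
        p_normal.dot(world_from_local.z_axis),
    )
    .abs()
    .dot(half_extents)
}
```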
## Testing
- Tested on M1 and M4 devices to verify the fix
- Verified shadows and geometry render correctly on both platforms
- Confirmed the solution matches the existing Rust implementation's
behavior for calculating the relative radius:
c818c92143/crates/bevy_render/src/primitives/mod.rs (L77-L87)
- The fix resolves a mathematical error in the frustum culling
calculation while maintaining correct culling behavior across all
platforms.
---
## Showcase
`c818c9214`
<img width="1284" alt="c818c9214"
src="https://github.com/user-attachments/assets/fe1c7ea9-b13d-422e-b12d-f1cd74475213"
/>
`mate-h/frustum-cull-fix`
<img width="1283" alt="frustum-cull-fix"
src="https://github.com/user-attachments/assets/8a9ccb2a-64b6-4d5e-a17d-ac4798da5b51"
/>
The `check_visibility` system currently follows this algorithm:
1. Store all meshes that were visible last frame in the
`PreviousVisibleMeshes` set.
2. Determine which meshes are visible. For each such visible mesh,
remove it from `PreviousVisibleMeshes`.
3. Mark all meshes that remain in `PreviousVisibleMeshes` as invisible.
This algorithm would be correct if the `check_visibility` were the only
system that marked meshes visible. However, it's not: the shadow-related
systems `check_dir_light_mesh_visibility` and
`check_point_light_mesh_visibility` can as well. This results in the
following sequence of events for meshes that are in a shadow map but
*not* visible from a camera:
A. `check_visibility` runs, finds that no camera contains these meshes,
and marks them hidden, which sets the changed flag.
B. `check_dir_light_mesh_visibility` and/or
`check_point_light_mesh_visibility` run, discover that these meshes
are visible in the shadow map, and mark them as visible, again
setting the `ViewVisibility` changed flag.
C. During the extraction phase, the mesh extraction system sees that
`ViewVisibility` is changed and re-extracts the mesh.
This is inefficient and results in needless work during rendering.
This patch fixes the issue in two ways:
* The `check_dir_light_mesh_visibility` and
`check_point_light_mesh_visibility` systems now remove meshes that they
discover from `PreviousVisibleMeshes`.
* Step (3) above has been moved from `check_visibility` to a separate
system, `mark_newly_hidden_entities_invisible`. This system runs after
all visibility-determining systems, ensuring that
`PreviousVisibleMeshes` contains only those meshes that truly became
invisible on this frame.
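A rough sketch of the new arrangement (system and type names simplified; the real code operates on Bevy's visibility types):

```rust
use std::collections::HashSet;

use bevy_ecs::prelude::*;

/// Entities that were visible last frame and haven't been seen yet this frame.
#[derive(Resource, Default)]
struct PreviousVisibleMeshes(HashSet<Entity>);

/// Stand-in for Bevy's `ViewVisibility` in this sketch.
#[derive(Component)]
struct ViewVisibility(bool);

/// Runs after `check_visibility`, `check_dir_light_mesh_visibility`, and
/// `check_point_light_mesh_visibility`. Each of those systems removes the
/// entities it marks visible from `PreviousVisibleMeshes`, so whatever is
/// left here truly became invisible this frame.
fn mark_newly_hidden_entities_invisible(
    mut previous: ResMut<PreviousVisibleMeshes>,
    mut visibility: Query<&mut ViewVisibility>,
) {
    for entity in previous.0.drain() {
        if let Ok(mut view_visibility) = visibility.get_mut(entity) {
            // Only now do we set the changed flag, exactly once per frame.
            view_visibility.0 = false;
        }
    }
}
```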
This fix dramatically improves the performance of [the Caldera
benchmark], when combined with several other patches I've submitted.
[the Caldera benchmark]:
https://github.com/DGriffin91/bevy_caldera_scene
PR #17688 broke motion vector computation, and therefore motion blur,
because it enabled retention of `MeshInputUniform`s, and
`MeshInputUniform`s contain the indices of the previous frame's
transform and the previous frame's skinned mesh joint matrices. On frame
N, if a `MeshInputUniform` is retained on GPU from the previous frame,
the `previous_input_index` and `previous_skin_index` would refer to the
indices for frame N - 2, not the index for frame N - 1.
This patch fixes the problems. It solves these issues in two different
ways, one for transforms and one for skins:
1. To fix transforms, this patch supplies the *frame index* to the
shader as part of the view uniforms, and specifies which frame index
each mesh's previous transform refers to. So, in the situation described
above, the frame index would be N, the previous frame index would be N -
1, and the `previous_input_frame_number` would be N - 2. The shader can
now detect this situation and infer that the mesh has been retained, and
can therefore conclude that the mesh's transform hasn't changed.
2. To fix skins, this patch replaces the explicit `previous_skin_index`
with an invariant that the index of the joints for the current frame and
the index of the joints for the previous frame are the same. This means
that the `MeshInputUniform` never has to be updated even if the skin is
animated. The downside is that we have to copy joint matrices from the
previous frame's buffer to the current frame's buffer in
`extract_skins`.
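To illustrate fix (1), the retention check boils down to something like the following (written in Rust for clarity; the real check lives in the mesh preprocessing shader, and the names are illustrative):

```rust
struct MeshInput {
    previous_input_frame_number: u32,
    // ... transform indices, flags, etc. ...
}

/// Returns true if the stored "previous transform" actually belongs to frame
/// N - 1. If the `MeshInputUniform` was retained on the GPU, the stored value
/// still refers to frame N - 2, so the mesh's transform can't have changed
/// and the current transform should be reused as the previous one.
fn previous_transform_is_valid(mesh: &MeshInput, current_frame: u32) -> bool {
    mesh.previous_input_frame_number == current_frame.wrapping_sub(1)
}
```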
The rationale behind (2) is that we currently have no mechanism to
detect when joints that affect a skin have been updated, short of
comparing all the transforms and setting a flag for
`extract_meshes_for_gpu_building` to consume, which would regress
performance as we want `extract_skins` and
`extract_meshes_for_gpu_building` to be able to run in parallel.
To test this change, use `cargo run --example motion_blur`.
Currently, the specialized pipeline cache maps a (view entity, mesh
entity) tuple to the retained pipeline for that entity. This causes two
problems:
1. Using the view entity is incorrect, because the view entity isn't
stable from frame to frame.
2. Switching the view entity to a `RetainedViewEntity`, which is
necessary for correctness, significantly regresses performance of
`specialize_material_meshes` and `specialize_shadows` because of the
loss of the fast `EntityHash`.
This patch fixes both problems by switching to a *two-level* hash table.
The outer level of the table maps each `RetainedViewEntity` to an inner
table, which maps each `MainEntity` to its pipeline ID and change tick.
Because we loop over views first and, within that loop, loop over
entities visible from that view, we hoist the slow lookup of the view
entity out of the inner entity loop.
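Schematically, the cache goes from a single flat map to something like this (a sketch with stand-in types; the real inner map uses the fast `EntityHash`):

```rust
use std::collections::HashMap;

use bevy_ecs::entity::Entity;

// Stand-ins for the real types in this sketch.
type RetainedViewEntity = (Entity, u32); // main view entity + subview ID
type MainEntity = Entity;
type CachedRenderPipelineId = u32;
type Tick = u32;

/// Outer level: slow, stable keys, looked up once per view.
/// Inner level: fast `MainEntity` keys, looked up once per visible entity.
struct SpecializedMaterialPipelineCache {
    views: HashMap<RetainedViewEntity, HashMap<MainEntity, (Tick, CachedRenderPipelineId)>>,
}

fn specialize_sketch(cache: &SpecializedMaterialPipelineCache) {
    for (_view, entities) in &cache.views {
        // The expensive view lookup is hoisted out of this inner loop.
        for (_main_entity, (_last_specialized_tick, _pipeline_id)) in entities {
            // ... respecialize only if the entity changed since the tick ...
        }
    }
}
```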
Additionally, this patch fixes a bug whereby pipeline IDs were leaked
when removing the view. We still have a problem with leaking pipeline
IDs for deleted entities, but that won't be fixed until the specialized
pipeline cache is retained.
This patch improves performance of the [Caldera benchmark] from 7.8×
faster than 0.14 to 9.0× faster than 0.14, when applied on top of the
global binding arrays PR, #17898.
[Caldera benchmark]: https://github.com/DGriffin91/bevy_caldera_scene
Currently, Bevy rebuilds the buffer containing all the transforms for
joints every frame, during the extraction phase. This is inefficient in
cases in which many skins are present in the scene and their joints
don't move, such as the Caldera test scene.
To address this problem, this commit switches skin extraction to use a
set of retained GPU buffers with allocations managed by the offset
allocator. I use fine-grained change detection in order to determine
which skins need updating. Note that the granularity is on the level of
an entire skin, not individual joints. Using the change detection at
that level would yield poor performance in common cases in which an
entire skin is animated at once. Also, this patch yields additional
performance from the fact that changing joint transforms no longer
requires the skinned mesh to be re-extracted.
Note that this optimization can be a double-edged sword. In
`many_foxes`, fine-grained change detection regressed the performance of
`extract_skins` by 3.4x. This is because every joint is updated every
frame in that example, so change detection is pointless and is pure
overhead. Because the `many_foxes` workload is actually representative
of animated scenes, this patch includes a heuristic that disables
fine-grained change detection if the number of transformed entities in
the frame exceeds a certain fraction of the total number of joints.
Currently, this threshold is set to 25%. Note that this is a crude
heuristic, because it doesn't distinguish between the number of
transformed *joints* and the number of transformed *entities*; however,
it should be good enough to yield the optimum code path most of the
time.
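The heuristic amounts to a single ratio test per frame, roughly like this (the threshold and names simply mirror the description above):

```rust
/// If more than this fraction of all joints had their entities transformed
/// this frame, fall back to rebuilding every skin: change detection would be
/// pure overhead (the `many_foxes` case).
const JOINT_EXTRACTION_THRESHOLD_FACTOR: f32 = 0.25;

fn use_fine_grained_change_detection(
    transformed_entity_count: usize,
    total_joint_count: usize,
) -> bool {
    if total_joint_count == 0 {
        return true;
    }
    (transformed_entity_count as f32) / (total_joint_count as f32)
        <= JOINT_EXTRACTION_THRESHOLD_FACTOR
}
```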
Finally, this patch fixes a bug whereby skinned meshes are actually
being incorrectly retained if the buffer offsets of the joints of those
skinned meshes change from frame to frame. To fix this without
retaining skins, we would have to re-extract every skinned mesh every
frame. Doing this was a significant regression on Caldera. With this PR,
by contrast, mesh joints stay at the same buffer offset, so we don't
have to update the `MeshInputUniform` containing the buffer offset every
frame. This also makes PR #17717 easier to implement, because that PR
uses the buffer offset from the previous frame, and the logic for
calculating that is simplified if the previous frame's buffer offset is
guaranteed to be identical to that of the current frame.
On Caldera, this patch reduces the time spent in `extract_skins` from
1.79 ms to near zero. On `many_foxes`, this patch regresses the
performance of `extract_skins` by approximately 10%-25%, depending on
the number of foxes. This has only a small impact on frame rate.
The GPU can fill out many of the fields in `IndirectParametersMetadata`
using information it already has:
* `early_instance_count` and `late_instance_count` are always
initialized to zero.
* `mesh_index` is already present in the work item buffer as the
`input_index` of the first work item in each batch.
This patch moves these fields to a separate buffer, the *GPU indirect
parameters metadata* buffer. That way, it avoids having to write them on
CPU during `batch_and_prepare_binned_render_phase`. This effectively
reduces the number of bits that that function must write per mesh from
160 to 64 (in addition to the 64 bits per mesh *instance*).
Additionally, this PR refactors `UntypedPhaseIndirectParametersBuffers`
to add another layer, `MeshClassIndirectParametersBuffers`, which allows
abstracting over the buffers corresponding to indexed and non-indexed
meshes. This patch doesn't make much use of this abstraction, but
forthcoming patches will, and it's overall a cleaner approach.
This didn't seem to have much of an effect by itself on
`batch_and_prepare_binned_render_phase` time, but subsequent PRs
dependent on this PR yield roughly a 2× speedup.
Appending to these vectors is performance-critical in
`batch_and_prepare_binned_render_phase`, so `RawBufferVec`, which
doesn't have the overhead of `encase`, is more appropriate.
The `output_index` field is only used in direct mode, and the
`indirect_parameters_index` field is only used in indirect mode.
Consequently, we can combine them into a single field, reducing the size
of `PreprocessWorkItem`, which
`batch_and_prepare_{binned,sorted}_render_phase` must construct every
frame for every mesh instance, from 96 bits to 64 bits.
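In other words, the work item shrinks to two 32-bit words, roughly as follows (a sketch mirroring the description above):

```rust
/// One work item per mesh instance, written by
/// `batch_and_prepare_{binned,sorted}_render_phase` every frame.
#[repr(C)]
#[derive(Clone, Copy)]
struct PreprocessWorkItem {
    /// Index of the `MeshInputUniform` for this instance.
    input_index: u32,
    /// In direct mode, the output index; in indirect mode, the index of the
    /// indirect parameters. The two modes never need both, so one field
    /// suffices, taking the struct from 96 bits down to 64.
    output_or_indirect_parameters_index: u32,
}
```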
# Objective
Update typos, fix new typos.
1.29.6 was just released to fix an
[issue](https://github.com/crate-ci/typos/issues/1228) where January's
corrections were not included in the binaries for the last release.
Reminder: typos can be tossed in the monthly [non-critical corrections
issue](https://github.com/crate-ci/typos/issues/1221).
## Solution
I chose to allow `implementors`, because a good argument seems to be
being made [here](https://github.com/crate-ci/typos/issues/1226) and
there is now a PR to address that.
## Discussion
Should I exclude `bevy_mikktspace`?
At one point I think we had an informal policy of "don't mess with
mikktspace until https://github.com/bevyengine/bevy/pull/9050 is merged"
but it doesn't seem like that is likely to be merged any time soon.
I think these particular corrections in mikktspace are fine because
- The same typo mistake seems to have been fixed in that PR
- The entire file containing these corrections was deleted in that PR
## Typo of the Month
correspindong -> corresponding
Currently, invocations of `batch_and_prepare_binned_render_phase` and
`batch_and_prepare_sorted_render_phase` can't run in parallel because
they write to scene-global GPU buffers. After PR #17698,
`batch_and_prepare_binned_render_phase` started accounting for the
lion's share of the CPU time, causing us to be strongly CPU bound on
scenes like Caldera when occlusion culling was on (because of the
overhead of batching for the Z-prepass). Although I eventually plan to
optimize `batch_and_prepare_binned_render_phase`, we can obtain
significant wins now by parallelizing that system across phases.
This commit splits all GPU buffers that
`batch_and_prepare_binned_render_phase` and
`batch_and_prepare_sorted_render_phase` touches into separate buffers
for each phase so that the scheduler will run those phases in parallel.
At the end of batch preparation, we gather the render phases up into a
single resource with a new *collection* phase. Because we already run
mesh preprocessing separately for each phase in order to make occlusion
culling work, this is actually a cleaner separation. For example, mesh
output indices (the unique ID that identifies each mesh instance on GPU)
are now guaranteed to be sequential starting from 0, which will simplify
the forthcoming work to remove them in favor of the compute dispatch ID.
On Caldera, this brings the frame time down to approximately 9.1 ms with
occlusion culling on.

* Use texture atomics rather than buffer atomics for the visbuffer
(haven't tested perf on a raster-heavy scene yet)
* Unfortunately, we now need a compute pass to clear the visbuffer.
wgpu's clear_texture function internally uses a buffer-to-image copy
that's insanely expensive. Ideally it should be using
vkCmdClearColorImage, which I've opened an issue for
https://github.com/gfx-rs/wgpu/issues/7090. For now we'll have to stick
with a custom compute pass and all the extra code that brings.
* Faster resolve depth pass by discarding 0 depth pixels instead of
redundantly writing zero (2x faster for big depth textures like shadow
views)
## Objective
Get rid of a redundant Cargo feature flag.
## Solution
Use the built-in `target_abi = "sim"` instead of a custom Cargo feature
flag, which is set for the iOS (and visionOS and tvOS) simulator. This
has been stable since Rust 1.78.
In the future, some of this may become redundant if wgpu implements
proper support for the iOS Simulator:
https://github.com/gfx-rs/wgpu/issues/7057
CC @mockersf who implemented [the original
fix](https://github.com/bevyengine/bevy/pull/10178).
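For reference, gating code on the simulator now looks like this (a sketch of the cfg usage; the code being guarded in Bevy is elided):

```rust
// Built-in cfg, stable since Rust 1.78: set when targeting the iOS, tvOS,
// or visionOS *simulator* rather than a real device.
#[cfg(target_abi = "sim")]
fn apply_simulator_workaround() {
    // ... simulator-specific setup ...
}

#[cfg(not(target_abi = "sim"))]
fn apply_simulator_workaround() {
    // No-op on real devices and other platforms.
}
```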
## Testing
- Open mobile example in Xcode.
- Launch the simulator.
- See that no errors are emitted.
- Remove the code cfg-guarded behind `target_abi = "sim"`.
- See that an error now happens.
(I haven't actually performed these steps on the latest `main`, because
I'm hitting an unrelated error (EDIT: It was
https://github.com/bevyengine/bevy/pull/17637). But tested it on
0.15.0).
---
## Migration Guide
> If you're using a project that builds upon the mobile example, remove
the `ios_simulator` feature from your `Cargo.toml` (Bevy now handles
this internally).
Currently, we look up each `MeshInputUniform` index in a hash table that
maps the main entity ID to the index every frame. This is inefficient,
cache unfriendly, and unnecessary, as the `MeshInputUniform` index for
an entity remains the same from frame to frame (even if the input
uniform changes). This commit changes the `IndexSet` in the `RenderBin`
to an `IndexMap` that maps the `MainEntity` to `MeshInputUniformIndex`
(a new type that this patch adds for more type safety).
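Conceptually, the bin's storage changes like so (a sketch using the `indexmap` crate's types for illustration; `MeshInputUniformIndex` is the new wrapper type mentioned above):

```rust
use bevy_ecs::entity::Entity;
use indexmap::{IndexMap, IndexSet};

type MainEntity = Entity; // stand-in for the real newtype

/// New wrapper for type safety around raw `u32` indices.
#[derive(Clone, Copy)]
struct MeshInputUniformIndex(u32);

// Before: the index had to be looked up in a separate hash table every frame.
struct RenderBinBefore {
    entities: IndexSet<MainEntity>,
}

// After: the bin carries the (stable) `MeshInputUniform` index alongside each
// entity, so no per-frame lookup is needed.
struct RenderBinAfter {
    entities: IndexMap<MainEntity, MeshInputUniformIndex>,
}
```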
On Caldera with parallel `batch_and_prepare_binned_render_phase`, this
patch improves that function from 3.18 ms to 2.42 ms, a 31% speedup.
# Objective
- Fixes #17797
## Solution
- `mesh` in `bevy_pbr::mesh_bindings` is behind an `#ifndef
MESHLET_MESH_MATERIAL_PASS` guard, so also gate `get_tag`, which uses this `mesh`.
## Testing
- Run the meshlet example
# Objective
Since previously we only had the alpha channel available, we stored the
mean of the transmittance in the aerial view lut, resulting in a grayer
fog than should be expected.
## Solution
- Calculate transmittance to scene in `render_sky` with two samples from
the transmittance lut
- use dual-source blending to effectively have per-component alpha
blending
Currently, we *sweep*, or remove entities from bins when those entities
became invisible or changed phases, during `queue_material_meshes` and
similar phases. This, however, is wrong, because `queue_material_meshes`
executes once per material type, not once per phase. This could result
in sweeping bins multiple times per phase, which can corrupt the bins.
This commit fixes the issue by moving sweeping to a separate system that
runs after queuing.
This manifested itself as entities appearing and disappearing seemingly
at random.
Closes #17759.
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
# Objective
Because of mesh preprocessing, users cannot rely on
`@builtin(instance_index)` in order to reference external data, as the
instance index is not stable, either from frame to frame or relative to
the total spawn order of mesh instances.
## Solution
Add a user supplied mesh index that can be used for referencing external
data when drawing instanced meshes.
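A usage sketch, assuming the user-supplied index is exposed as a component (called `MeshTag` here, matching the `get_tag` shader helper mentioned elsewhere in these notes; the exact name and import path are assumptions):

```rust
use bevy::prelude::*;
use bevy::render::mesh::MeshTag;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    for index in 0..16u32 {
        commands.spawn((
            Mesh3d(meshes.add(Cuboid::default())),
            MeshMaterial3d(materials.add(Color::WHITE)),
            Transform::from_xyz(index as f32 * 2.0, 0.0, 0.0),
            // Stable, user-chosen index; a custom shader can read it back
            // to index external per-instance data.
            MeshTag(index),
        ));
    }
}
```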
Closes #13373
## Testing
Benchmarked `many_cubes` showing no difference in total frame time.
## Showcase
https://github.com/user-attachments/assets/80620147-aafc-4d9d-a8ee-e2149f7c8f3b
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
https://github.com/bevyengine/bevy/issues/17746
## Solution
- Change `Image.data` from being a `Vec<u8>` to a `Option<Vec<u8>>`
- Added functions to help with creating images
## Testing
- Did you test these changes? If so, how?
All current tests pass
Tested a variety of existing examples to make sure they don't crash
(they don't)
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
Linux x86 64-bit NixOS
---
## Migration Guide
Code that directly accesses `Image` data will now need to unwrap or
handle the case where no data is provided.
The behaviour of `new_fill` changed slightly, but not in a way that is
likely to affect anything. It no longer panics and will fill the whole
texture instead of leaving black pixels if the data provided doesn't
evenly divide the size of the image.
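For example, code that previously read the pixel bytes directly now needs to handle the `Option` (a sketch; `MyImageHandle` is a hypothetical resource holding the handle):

```rust
use bevy::prelude::*;

#[derive(Resource)]
struct MyImageHandle(Handle<Image>);

fn inspect_image(images: Res<Assets<Image>>, handle: Res<MyImageHandle>) {
    if let Some(image) = images.get(&handle.0) {
        // `Image::data` is now `Option<Vec<u8>>`; it may be `None` for images
        // whose contents haven't been provided on the CPU side.
        match &image.data {
            Some(data) => info!("image has {} bytes of CPU-side data", data.len()),
            None => info!("image has no CPU-side data"),
        }
    }
}
```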
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
https://github.com/bevyengine/bevy/pull/16966 tried to fix a bug where
`slot` wasn't passed to `parallaxed_uv` when not running under bindless,
but failed to account for meshlets. This surfaces on macOS where
bindless is disabled.
## Solution
Lift the slot variable out of the bindless condition so it's always
available.
Didn't remove `WgpuWrapper`. Not sure whether it's still needed.
## Testing
- Did you test these changes? If so, how? Example runner
- Are there any parts that need more testing? Web (portable atomics
thingy?), DXC.
## Migration Guide
- Bevy has upgraded to [wgpu
v24](https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v2400-2025-01-15).
- When using the DirectX 12 rendering backend, the new priority system
for choosing a shader compiler is as follows:
- If the `WGPU_DX12_COMPILER` environment variable is set at runtime, it
is used
- Else if the new `statically-linked-dxc` feature is enabled, a custom
version of DXC will be statically linked into your app at compile time.
- Else Bevy will look in the app's working directory for
`dxcompiler.dll` and `dxil.dll` at runtime.
- If they are missing, Bevy will fall back to FXC (not recommended).
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- The publish script copies the license files to all subcrates, meaning that
all publishes are dirty; this breaks git verification of crates.
- The order and list of crates to publish are manually maintained, leading
to errors. Cargo 1.84 is more strict, and the list is currently wrong.
## Solution
- Duplicate all the licenses into all crates and remove the
`--allow-dirty` flag.
- Instead of a manual list of crates, get it from `cargo package
--workspace`.
- Remove the `--no-verify` flag to... verify more things?
# Objective
Things were breaking after the cold-specialization changes.
## Solution
`specialize_mesh_materials` must run after
`collect_meshes_for_gpu_building`. Therefore, its placement in the
`PrepareAssets` set didn't make sense (also more generally). To fix, we
put this class of system in ~~`PrepareResources`~~ `QueueMeshes`, although
it potentially could use a more descriptive location. We may want to
review the placement of `check_views_need_specialization` which is also
currently in `PrepareAssets`.
Right now, we key the cached light change ticks off the `LightEntity`.
This uses the render world entity, which isn't stable between frames.
Thus in practice few shadows are retained from frame to frame. This PR
fixes the issue by keying off the `RetainedViewEntity` instead, which is
designed to be stable from frame to frame.
This PR makes Bevy keep entities in bins from frame to frame if they
haven't changed. This reduces the time spent in `queue_material_meshes`
and related functions to near zero for static geometry. This patch uses
the same change tick technique that #17567 uses to detect when meshes
have changed in such a way as to require re-binning.
In order to quickly find the relevant bin for an entity when that entity
has changed, we introduce a new type of cache, the *bin key cache*. This
cache stores a mapping from main world entity ID to cached bin key, as
well as the tick of the most recent change to the entity. As we iterate
through the visible entities in `queue_material_meshes`, we check the
cache to see whether the entity needs to be re-binned. If it doesn't,
then we mark it as clean in the `valid_cached_entity_bin_keys` bit set.
If it does, then we insert it into the correct bin, and then mark the
entity as clean. At the end, all entities not marked as clean are
removed from the bins.
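A schematic of the cache described above (names and types simplified):

```rust
use std::collections::HashMap;

use bevy_ecs::entity::Entity;

type MainEntity = Entity;
type Tick = u32;

/// Maps each main-world entity to the bin key it was last placed in, plus the
/// tick of the most recent change we've observed for that entity.
struct BinKeyCache<BinKey> {
    entries: HashMap<MainEntity, (Tick, BinKey)>,
}

impl<BinKey> BinKeyCache<BinKey> {
    /// True if the entity hasn't changed since it was last binned, in which
    /// case `queue_material_meshes` can skip re-binning it and simply mark it
    /// clean in the `valid_cached_entity_bin_keys` bit set.
    fn is_up_to_date(&self, entity: MainEntity, last_change_tick: Tick) -> bool {
        matches!(
            self.entries.get(&entity),
            Some((cached_tick, _)) if last_change_tick <= *cached_tick
        )
    }
}
```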
This patch has a dramatic effect on the rendering performance of most
benchmarks, as it effectively eliminates `queue_material_meshes` from
the profile. Note, however, that it generally simultaneously regresses
`batch_and_prepare_binned_render_phase` by a bit (not by enough to
outweigh the win, however). I believe that's because, before this patch,
`queue_material_meshes` put the bins in the CPU cache for
`batch_and_prepare_binned_render_phase` to use, while with this patch,
`batch_and_prepare_binned_render_phase` must load the bins into the CPU
cache itself.
On Caldera, this reduces the time spent in `queue_material_meshes` from
5+ ms to 0.2ms-0.3ms. Note that benchmarking on that scene is very noisy
right now because of https://github.com/bevyengine/bevy/issues/17535.

Right now, meshes aren't grouped together based on the bindless texture
slab when drawing shadows. This manifests itself as flickering in
Bistro. I believe that there are two causes of this:
1. Alpha masked shadows may try to sample from the wrong texture,
causing the alpha mask to appear and disappear.
2. Objects may try to sample from the blank textures that we pad out the
bindless slabs with, causing them to vanish intermittently.
This commit fixes the issue by including the material bind group ID as
part of the shadow batch set key, just as we do for the prepass and main
pass.
# Objective
- Make use of the new `weak_handle!` macro added in
https://github.com/bevyengine/bevy/pull/17384
## Solution
- Migrate bevy from `Handle::weak_from_u128` to the new `weak_handle!`
macro that takes a random UUID
- Deprecate `Handle::weak_from_u128`, since there are no remaining use
cases that can't also be addressed by constructing the type manually
## Testing
- `cargo run -p ci -- test`
---
## Migration Guide
Replace `Handle::weak_from_u128` with `weak_handle!` and a random UUID.
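A migration sketch (the UUID below is a placeholder; generate your own):

```rust
use bevy::asset::{weak_handle, Handle};
use bevy::render::render_resource::Shader;

// Before:
// const MY_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(0x0123_4567_89ab_cdef);

// After: `weak_handle!` takes a (randomly generated) UUID string.
const MY_SHADER_HANDLE: Handle<Shader> =
    weak_handle!("8f9b2c1e-3d4a-4b5c-9e6f-7a8b9c0d1e2f");
```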
# Cold Specialization
## Objective
An ongoing part of our quest to retain everything in the render world,
cold-specialization aims to cache pipeline specialization so that
pipeline IDs can be recomputed only when necessary, rather than every
frame. This approach reduces redundant work in stable scenes, while
still accommodating scenarios in which materials, views, or visibility
might change, as well as unlocking future optimization work like
retaining render bins.
## Solution
Queue systems are split into a specialization system and queue system,
the former of which only runs when necessary to compute a new pipeline
id. Pipelines are invalidated using a combination of change detection
and ECS ticks.
### The difficulty with change detection
Detecting “what changed” can be tricky because pipeline specialization
depends not only on the entity’s components (e.g., mesh, material, etc.)
but also on which view (camera) it is rendering in. In other words, the
cache key for a given pipeline id is a view entity/render entity pair.
As such, it's not sufficient simply to react to change detection in
order to specialize -- an entity could currently be out of view or could
be rendered in the future in a camera that is currently disabled or hasn't
spawned yet.
### Why ticks?
Ticks ensure correctness by letting us compare the last time a view or
entity was updated against the tick stored with the cached pipeline id.
This ensures that, even if an entity was out of view or has never been
seen in a given camera before, we can still correctly determine whether
it needs to be re-specialized.
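In practice this is a comparison of ECS ticks, along these lines (a sketch; the real systems also factor in visibility and material changes):

```rust
use bevy_ecs::component::Tick;

/// Decide whether an entity needs respecialization for a given view: it does
/// if either the entity or the view has been updated more recently than the
/// tick recorded alongside the cached pipeline ID.
fn needs_specialization(
    entity_tick: Tick,
    view_tick: Tick,
    cached_tick: Tick,
    this_run: Tick,
) -> bool {
    entity_tick.is_newer_than(cached_tick, this_run)
        || view_tick.is_newer_than(cached_tick, this_run)
}
```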
## Testing
TODO: Tested a bunch of different examples, need to test more.
## Migration Guide
TODO
- `AssetEvents` has been moved into the `PostUpdate` schedule.
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
We were calling `clear()` on the work item buffer table, which caused us
to deallocate all the CPU side buffers. This patch changes the logic to
instead just clear the buffers individually, but leave their backing
stores. This has two consequences:
1. To effectively retain work item buffers from frame to frame, we need
to key them off `RetainedViewEntity` values and not the render world
`Entity`, which is transient. This PR changes those buffers accordingly.
2. We need to clean up work item buffers that belong to views that went
away. Amusingly enough, we actually have a system,
`delete_old_work_item_buffers`, that tries to do this already, but it
wasn't doing anything because the `clear_batched_gpu_instance_buffers`
system already handled that. This patch actually makes the
`delete_old_work_item_buffers` system useful, by removing the clearing
behavior from `clear_batched_gpu_instance_buffers` and instead making
`delete_old_work_item_buffers` delete buffers corresponding to
nonexistent views.
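Schematically (a sketch with simplified types):

```rust
use std::collections::HashMap;

use bevy_ecs::entity::Entity;

/// Stand-in for the real key: main view entity plus subview ID, stable across
/// frames (unlike the render-world `Entity`).
type RetainedViewEntity = (Entity, u32);

#[derive(Default)]
struct WorkItemBuffers {
    items: Vec<u32>, // simplified; the real buffers hold work item structs
}

fn begin_frame(buffers: &mut HashMap<RetainedViewEntity, WorkItemBuffers>) {
    // Clear each buffer's *contents* but keep its backing allocation, instead
    // of calling `clear()` on the whole table and deallocating everything.
    for buffer in buffers.values_mut() {
        buffer.items.clear();
    }
}

fn delete_old_work_item_buffers(
    buffers: &mut HashMap<RetainedViewEntity, WorkItemBuffers>,
    live_views: &[RetainedViewEntity],
) {
    // Drop only the buffers whose views no longer exist.
    buffers.retain(|view, _| live_views.contains(view));
}
```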
On Bistro, this PR improves the performance of
`batch_and_prepare_binned_render_phase` from 61.2 us to 47.8 us, a 28%
speedup.

This patch fixes a bug whereby we're re-extracting every mesh every
frame. It's a regression from PR #17413. The code in question has
actually been in the tree with this bug for quite a while; it's just that
the code didn't actually run unless the renderer considered the
previous view transforms necessary. Occlusion culling expanded the set
of circumstances under which Bevy computes the previous view transforms,
causing this bug to appear more often.
This patch fixes the issue by checking to see if the previous transform
of a mesh actually differs from the current transform before copying the
current transform to the previous transform.
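The check itself is simple (a sketch):

```rust
use bevy::math::Affine3A;

struct ExtractedTransforms {
    current: Affine3A,
    previous: Affine3A,
}

/// Returns true only if the mesh actually needs re-extraction: copying the
/// current transform into the previous slot when nothing changed is what was
/// forcing every mesh to be re-extracted every frame.
fn update_previous_transform(transforms: &mut ExtractedTransforms, new: Affine3A) -> bool {
    if transforms.current == new {
        return false;
    }
    transforms.previous = transforms.current;
    transforms.current = new;
    true
}
```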
# Objective
- Fix the atmosphere LUT parameterization in the aerial-view and
sky-view LUTs
- Correct the light accumulation according to a ray-marched reference
- Avoid negative values of the sun disk illuminance when the sun disk is
below the horizon
## Solution
- Added a Newton's method iteration to the `fast_sqrt` function (see the
sketch after this list)
- Switched to using `fast_acos_4` for better precision of the sun angle
towards the horizon (view mu angle = 0)
- Simplified the function for mapping to and from the Sky View UV
coordinates by removing an if statement and correctly applying the method
proposed by the [Hillaire
paper](https://sebh.github.io/publications/egsr2020.pdf), detailed in
sections 5.3 and 5.4.
- Replaced the `ray_dir_ws.y` term with a shadow factor in the
`sample_sun_illuminance` function that correctly approximates the sun
disk occluded by the earth from any viewpoint
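For reference, one Newton iteration refines an initial guess `y ≈ sqrt(x)` as `y' = 0.5 * (y + x / y)`, roughly doubling the number of correct digits. A sketch of the idea in Rust (the shipped version lives in WGSL; the bit-twiddling initial estimate here is just an illustrative stand-in and ignores edge cases like zero and negative inputs):

```rust
/// One Newton-Raphson refinement step for sqrt(x), given an initial guess.
fn newton_sqrt_step(x: f32, guess: f32) -> f32 {
    0.5 * (guess + x / guess)
}

fn fast_sqrt(x: f32) -> f32 {
    // Classic bit-level initial estimate for sqrt, then one Newton step.
    let approx = f32::from_bits((x.to_bits() >> 1) + 0x1fc0_0000);
    newton_sqrt_step(x, approx)
}
```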
## Testing
- Ran the atmosphere and SSAO examples to make sure the shaders still
compile and run as expected.
---
## Showcase
<img width="1151" alt="showcase-img"
src="https://github.com/user-attachments/assets/de875533-42bd-41f9-9fd0-d7cc57d6e51c"
/>
---------
Co-authored-by: Emerson Coskey <emerson@coskey.dev>
Unfortunately, Apple platforms don't have enough texture bindings to
properly support clustered decals. This should be fixed once `wgpu` has
first-class bindless texture support. In the meantime, we disable them.
Closes #17553.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
*Occlusion culling* allows the GPU to skip the vertex and fragment
shading overhead for objects that can be quickly proved to be invisible
because they're behind other geometry. A depth prepass already
eliminates most fragment shading overhead for occluded objects, but the
vertex shading overhead, as well as the cost of testing and rejecting
fragments against the Z-buffer, is presently unavoidable for standard
meshes. We currently perform occlusion culling only for meshlets. But
other meshes, such as skinned meshes, can benefit from occlusion culling
too in order to avoid the transform and skinning overhead for unseen
meshes.
This commit adapts the same [*two-phase occlusion culling*] technique
that meshlets use to Bevy's standard 3D mesh pipeline when the new
`OcclusionCulling` component, as well as the `DepthPrepass` component,
are present on the camera. It has these steps:
1. *Early depth prepass*: We use the hierarchical Z-buffer from the
previous frame to cull meshes for the initial depth prepass, effectively
rendering only the meshes that were visible in the last frame.
2. *Early depth downsample*: We downsample the depth buffer to create
another hierarchical Z-buffer, this time with the current view
transform.
3. *Late depth prepass*: We use the new hierarchical Z-buffer to test
all meshes that weren't rendered in the early depth prepass. Any meshes
that pass this check are rendered.
4. *Late depth downsample*: Again, we downsample the depth buffer to
create a hierarchical Z-buffer in preparation for the early depth
prepass of the next frame. This step is done after all the rendering, in
order to account for custom phase items that might write to the depth
buffer.
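Enabling it on a camera is a matter of adding the two components (a usage sketch; the `OcclusionCulling` import path is assumed):

```rust
use bevy::core_pipeline::prepass::DepthPrepass;
use bevy::prelude::*;
use bevy::render::experimental::occlusion_culling::OcclusionCulling;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3d::default(),
        // A depth prepass is required: the hierarchical Z-buffer is built by
        // downsampling its output.
        DepthPrepass,
        // Opts this camera's view into two-phase occlusion culling.
        OcclusionCulling,
    ));
}
```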
Note that this patch has no effect on the per-mesh CPU overhead for
occluded objects, which remains high for a GPU-driven renderer due to
the lack of `cold-specialization` and retained bins. If
`cold-specialization` and retained bins weren't on the horizon, then a
more traditional approach like potentially visible sets (PVS) or low-res
CPU rendering would probably be more efficient than the GPU-driven
approach that this patch implements for most scenes. However, at this
point the amount of effort required to implement a PVS baking tool or a
low-res CPU renderer would probably be greater than landing
`cold-specialization` and retained bins, and the GPU driven approach is
the more modern one anyway. It does mean that the performance
improvements from occlusion culling as implemented in this patch *today*
are likely to be limited, because of the high CPU overhead for occluded
meshes.
Note also that this patch currently doesn't implement occlusion culling
for 2D objects or shadow maps. Those can be addressed in a follow-up.
Additionally, note that the techniques in this patch require compute
shaders, which excludes support for WebGL 2.
This PR is marked experimental because of known precision issues with
the downsampling approach when applied to non-power-of-two framebuffer
sizes (i.e. most of them). These precision issues can, in rare cases,
cause objects to be judged occluded that in fact are not. (I've never
seen this in practice, but I know it's possible; it tends to be likelier
to happen with small meshes.) As a follow-up to this patch, we desire to
switch to the [SPD-based hi-Z buffer shader from the Granite engine],
which doesn't suffer from these problems, at which point we should be
able to graduate this feature from experimental status. I opted not to
include that rewrite in this patch for two reasons: (1) @JMS55 is
planning on doing the rewrite to coincide with the new availability of
image atomic operations in Naga; (2) to reduce the scope of this patch.
A new example, `occlusion_culling`, has been added. It demonstrates
objects becoming quickly occluded and disoccluded by dynamic geometry
and shows the number of objects that are actually being rendered. Also,
a new `--occlusion-culling` switch has been added to `scene_viewer`, in
order to make it easy to test this patch with large scenes like Bistro.
[*two-phase occlusion culling*]:
https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501
[Aaltonen SIGGRAPH 2015]:
https://www.advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf
[Some literature]:
https://gist.github.com/reduz/c5769d0e705d8ab7ac187d63be0099b5?permalink_comment_id=5040452#gistcomment-5040452
[SPD-based hi-Z buffer shader from the Granite engine]:
https://github.com/Themaister/Granite/blob/master/assets/shaders/post/hiz.comp
## Migration guide
* When enqueuing a custom mesh pipeline, work item buffers are now
created with
`bevy::render::batching::gpu_preprocessing::get_or_create_work_item_buffer`,
not `PreprocessWorkItemBuffers::new`. See the
`specialized_mesh_pipeline` example.
## Showcase
Occlusion culling example:

Bistro zoomed out, before occlusion culling:

Bistro zoomed out, after occlusion culling:

In this scene, occlusion culling reduces the number of meshes Bevy has
to render from 1591 to 585.
Currently, our default maximum shadow cascade distance is 1000 m, which
is quite distant compared to that of Unity (150 m), Unreal Engine 5 (200
m), and Godot (100 m). I also adjusted the default first cascade far
bound to be 10 m, which matches that of Unity (10.05 m) and Godot (10
m). Together, these changes should improve the default sharpness of
shadows of directional lights for typical scenes.
## Migration Guide
* The default shadow cascade far distance has been changed from 1000 to
150, and the default first cascade far bound has been changed from 5 to
10, in order to be similar to the defaults of other engines.
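If a project relies on the old behavior, the previous values can be restored per light via `CascadeShadowConfigBuilder` (a sketch):

```rust
use bevy::pbr::CascadeShadowConfigBuilder;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        DirectionalLight {
            shadows_enabled: true,
            ..default()
        },
        // The new defaults correspond roughly to
        // `first_cascade_far_bound: 10.0, maximum_distance: 150.0`;
        // restore the old, more distant cascades explicitly if needed.
        CascadeShadowConfigBuilder {
            first_cascade_far_bound: 5.0,
            maximum_distance: 1000.0,
            ..default()
        }
        .build(),
    ));
}
```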
This commit allows specular highlights to be tinted with a color and for
the reflectance and color tint values to vary across a model via a pair
of maps. The implementation follows the [`KHR_materials_specular`] glTF
extension. In order to reduce the number of samplers and textures in the
default `StandardMaterial` configuration, the maps are gated behind the
`pbr_specular_textures` Cargo feature.
Specular tinting is currently unsupported in the deferred renderer,
because I didn't want to bloat the deferred G-buffers. A possible fix
for this in the future would be to make the G-buffer layout more
configurable, so that specular tints could be supported on an opt-in
basis. As an alternative, Bevy could force meshes with specular tints to
render in forward mode. Both of these solutions require some more
design, so I consider them out of scope for now.
Note that the map is a *specular* map, not a *reflectance* map. In Bevy
and Filament terms, the reflectance values in the specular map span the
range [0.0, 0.5] rather than [0.0, 1.0]. This is an unfortunate
[`KHR_materials_specular`] specification requirement that stems from the
fact that glTF is specified in terms of a specular strength model, not
the reflectance model that Filament and Bevy use. A workaround, which is
noted in the `StandardMaterial` documentation, is to set the
`reflectance` value to 2.0, which spreads the specular map over the full
[0.0, 1.0] range as normal.
The glTF loader has been updated to parse the [`KHR_materials_specular`]
extension. Note that, unless the non-default `pbr_specular_textures`
feature is enabled, the maps are ignored. The `specularFactor` value is applied as
usual. Note that, as with the specular map, the glTF `specularFactor` is
twice Bevy's `reflectance` value.
This PR adds a new example, `specular_tint`, which demonstrates the
specular tint and map features. Note that this example requires the
`pbr_specular_textures` Cargo feature.
[`KHR_materials_specular`]:
https://github.com/KhronosGroup/glTF/tree/main/extensions/2.0/Khronos/KHR_materials_specular
## Changelog
### Added
* Specular highlights can now be tinted with the `specular_tint` field
in `StandardMaterial`.
* Specular maps are now available in `StandardMaterial`, gated behind
the `pbr_specular_textures` Cargo feature.
* The `KHR_materials_specular` glTF extension is now supported, allowing
for customization of specular reflectance and specular maps. Note that
the latter are gated behind the `pbr_specular_textures` Cargo feature.
This commit adds support for *decal projectors* to Bevy, allowing for
textures to be projected on top of geometry. Decal projectors are
clusterable objects, just as punctual lights and light probes are. This
means that decals are only evaluated for objects within the conservative
bounds of the projector, and they don't require a second pass.
These clustered decals require support for bindless textures and as such
currently don't work on WebGL 2, WebGPU, macOS, or iOS. For an
alternative that doesn't require bindless, see PR #16600. I believe that
both contact projective decals in #16600 and clustered decals are
desirable to have in Bevy. Contact projective decals offer broader
hardware and driver support, while clustered decals don't require the
creation of bounding geometry.
A new example, `decal_projectors`, has been added, which demonstrates
multiple decals on a rotating object. The decal projectors can be scaled
and rotated with the mouse.
There are several limitations of this initial patch that can be
addressed in follow-ups:
1. There's no way to specify the Z-index of decals. That is, the order
in which multiple decals are blended on top of one another is arbitrary.
A follow-up could introduce some sort of Z-index field so that artists
can specify that some decals should be blended on top of others.
2. Decals don't take the normal of the surface they're projected onto
into account. Most decal implementations in other engines have a feature
whereby the angle between the decal projector and the normal of the
surface must be within some threshold for the decal to appear. Often,
artists can specify a fade-off range for a smooth transition between
oblique surfaces and aligned surfaces.
3. There's no distance-based fadeoff toward the end of the projector
range. Many decal implementations have this.
This addresses #2401.
## Showcase

Implement procedural atmospheric scattering from [Sebastien Hillaire's
2020 paper](https://sebh.github.io/publications/egsr2020.pdf). This
approach should scale well even down to mobile hardware, and is
physically accurate.
## Co-author: @mate-h
He helped massively with getting this over the finish line, ensuring
everything was physically correct, correcting several places where I had
misunderstood or misapplied the paper, and improving the performance in
several places as well. Thanks!
## Credits
@aevyrie: helped find numerous bugs and improve the example to best show
off this feature :)
Built off of @mtsr's original branch, which handled the transmittance
lut (arguably the most important part)
## Showcase:


## For followup
- Integrate with pcwalton's volumetrics code
- refactor/reorganize for better integration with other effects
- have atmosphere transmittance affect directional lights
- add support for generating skybox/environment map
---------
Co-authored-by: Emerson Coskey <56370779+EmersonCoskey@users.noreply.github.com>
Co-authored-by: atlv <email@atlasdostal.com>
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
Co-authored-by: Emerson Coskey <coskey@emerlabs.net>
Co-authored-by: Máté Homolya <mate.homolya@gmail.com>
# Objective
- Contributes to #16877
## Solution
- Moved `hashbrown`, `foldhash`, and related types out of `bevy_utils`
and into `bevy_platform_support`
- Refactored the above to match the layout of these types in `std`.
- Updated crates as required.
## Testing
- CI
---
## Migration Guide
- The following items were moved out of `bevy_utils` and into
`bevy_platform_support::hash`:
- `FixedState`
- `DefaultHasher`
- `RandomState`
- `FixedHasher`
- `Hashed`
- `PassHash`
- `PassHasher`
- `NoOpHash`
- The following items were moved out of `bevy_utils` and into
`bevy_platform_support::collections`:
- `HashMap`
- `HashSet`
- `bevy_utils::hashbrown` has been removed. Instead, import from
`bevy_platform_support::collections` _or_ take a dependency on
`hashbrown` directly.
- `bevy_utils::Entry` has been removed. Instead, import from
`bevy_platform_support::collections::hash_map` or
`bevy_platform_support::collections::hash_set` as appropriate.
- All of the above equally apply to `bevy::utils` and
`bevy::platform_support`.
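A typical migration is just an import change (a sketch):

```rust
// Before:
// use bevy::utils::{HashMap, HashSet};

// After:
use bevy::platform_support::collections::{HashMap, HashSet};

fn example() {
    let mut scores: HashMap<String, u32> = HashMap::default();
    scores.insert("player_one".to_string(), 42);

    let mut tags: HashSet<&str> = HashSet::default();
    tags.insert("enemy");
}
```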
## Notes
- I left `PreHashMap`, `PreHashMapExt`, and `TypeIdMap` in `bevy_utils`
as they might be candidates for micro-crating. They can always be moved
into `bevy_platform_support` at a later date if desired.
Right now, we always include distance fog in the shader, which is
unfortunate as it's complex code and rarely used. This commit changes it to
be a `#define` instead. I haven't confirmed that removing distance fog
meaningfully reduces VGPR usage, but it can't hurt.
This commit makes Bevy use change detection to only update
`RenderMaterialInstances` and `RenderMeshMaterialIds` when meshes have
been added, changed, or removed. `extract_mesh_materials`, the system
that extracts these, now follows the pattern that
`extract_meshes_for_gpu_building` established.
This improves frame time of `many_cubes` from 3.9ms to approximately
3.1ms, which slightly surpasses the performance of Bevy 0.14.
(Resubmitted from #16878 to clean up history.)

---------
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
`bevy_ecs`'s `system` module is something of a grab bag, and *very*
large. This is particularly true for the `system_param` module, which is
more than 2k lines long!
While it could be defensible to put `Res` and `ResMut` there (lol no
they're in change_detection.rs, obviously), it doesn't make any sense to
put the `Resource` trait there. This is confusing to navigate (and
painful to work on and review).
## Solution
- Create a root level `bevy_ecs/resource.rs` module to mirror
`bevy_ecs/component.rs`
- move the `Resource` trait to that module
- move the `Resource` derive macro to that module as well (Rust really
likes when you pun on the names of the derive macro and trait and put
them in the same path)
- fix all of the imports
## Notes to reviewers
- We could probably move more stuff into here, but I wanted to keep this
PR as small as possible given the absurd level of import changes.
- This PR is ground work for my upcoming attempts to store resource data
on components (resources-as-entities). Splitting this code out will make
the work and review a bit easier, and is the sort of overdue refactor
that's good to do as part of more meaningful work.
## Testing
cargo build works!
## Migration Guide
`bevy_ecs::system::Resource` has been moved to
`bevy_ecs::resource::Resource`.
# Objective
The existing `RelationshipSourceCollection` uses `Vec` as the only
possible backing for our relationships. While a reasonable choice,
benchmarking use cases might reveal that a different data type is better
or faster.
For example:
- Not all relationships require a stable ordering between the
relationship sources (i.e. children). In cases where we a) have many
such relations and b) don't care about the ordering between them, a hash
set is likely a better data structure than a `Vec`.
- The number of children-like entities may be small on average, and a
`smallvec` may be faster
## Solution
- Implement `RelationshipSourceCollection` for `EntityHashSet`, our
custom entity-optimized `HashSet`.
- ~~Implement `DoubleEndedIterator` for `EntityHashSet` to make things
compile.~~
- This implementation was cursed and very surprising.
- Instead, by moving the iterator type on `RelationshipSourceCollection`
from an erased RPTIT to an explicit associated type we can add a trait
bound on the offending methods!
- Implement `RelationshipSourceCollection` for `SmallVec`
## Testing
I've added a pair of new tests to make sure this pattern compiles
successfully in practice!
## Migration Guide
`EntityHashSet` and `EntityHashMap` are no longer re-exported in
`bevy_ecs::entity` directly. If you were not using `bevy_ecs` / `bevy`'s
`prelude`, you can access them through their now-public modules,
`hash_set` and `hash_map` instead.
## Notes to reviewers
The `EntityHashSet::Iter` type needs to be public for this impl to be
allowed. I initially renamed it to something that wasn't ambiguous and
re-exported it, but as @Victoronz pointed out, that was somewhat
unidiomatic.
In
1a8564898f,
I instead made the `entity_hash_set` module public (and its
`entity_hash_map` sister), and removed the re-export. I prefer this design (give me
module docs please), but it leads to a lot of churn in this PR.
Let me know which you'd prefer, and if you'd like me to split that
change out into its own micro PR.
# Objective
Fixes https://github.com/bevyengine/bevy/issues/17111
## Solution
Move `#![warn(clippy::allow_attributes,
clippy::allow_attributes_without_reason)]` to the workspace `Cargo.toml`
## Testing
Lots of CI testing, and local testing too.
---------
Co-authored-by: Benjamin Brienen <benjamin.brienen@outlook.com>
This commit allows Bevy to use `multi_draw_indirect_count` for drawing
meshes. The `multi_draw_indirect_count` feature works just like
`multi_draw_indirect`, but it takes the number of indirect parameters
from a GPU buffer rather than specifying it on the CPU.
Currently, the CPU constructs the list of indirect draw parameters with
the instance count for each batch set to zero, uploads the resulting
buffer to the GPU, and dispatches a compute shader that bumps the
instance count for each mesh that survives culling. Unfortunately, this
is inefficient when we support `multi_draw_indirect_count`. Draw
commands corresponding to meshes for which all instances were culled
will remain present in the list when calling
`multi_draw_indirect_count`, causing overhead. Proper use of
`multi_draw_indirect_count` requires eliminating these empty draw
commands.
To address this inefficiency, this PR makes Bevy fully construct the
indirect draw commands on the GPU instead of on the CPU. Instead of
writing instance counts to the draw command buffer, the mesh
preprocessing shader now writes them to a separate *indirect metadata
buffer*. A second compute dispatch known as the *build indirect
parameters* shader runs after mesh preprocessing and converts the
indirect draw metadata into actual indirect draw commands for the GPU.
The build indirect parameters shader operates on a batch at a time,
rather than an instance at a time, and as such each thread writes only 0
or 1 indirect draw parameters, simplifying the current logic in
`mesh_preprocessing`, which currently has to have special cases for the
first mesh in each batch. The build indirect parameters shader emits
draw commands in a tightly packed manner, enabling maximally efficient
use of `multi_draw_indirect_count`.
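For context, a sketch of how the count-driven draw is issued at the `wgpu` level (buffer setup is omitted; the zero offsets and the function itself are illustrative):
```rust
use wgpu::{Buffer, RenderPass};

// The GPU reads the real draw count from `count_buffer`, so commands for
// batch sets whose instances were all culled never reach the command stream.
fn draw_batch_set(
    pass: &mut RenderPass<'_>,
    indirect_buffer: &Buffer,
    count_buffer: &Buffer,
    max_draw_count: u32,
) {
    pass.multi_draw_indexed_indirect_count(indirect_buffer, 0, count_buffer, 0, max_draw_count);
}
```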
Along the way, this patch switches mesh preprocessing to dispatch one
compute invocation per render phase per view, instead of dispatching one
compute invocation per view. This is preparation for two-phase occlusion
culling, in which we will have two mesh preprocessing stages. In that
scenario, the first mesh preprocessing stage must only process opaque
and alpha tested objects, so the work items must be separated into those
that are opaque or alpha tested and those that aren't. Thus this PR
splits out the work items into a separate buffer for each phase. As this
patch rewrites so much of the mesh preprocessing infrastructure, it was
simpler to just fold the change into this patch instead of deferring it
to the forthcoming occlusion culling PR.
Finally, this patch changes mesh preprocessing so that it runs
separately for indexed and non-indexed meshes. This is because draw
commands for indexed and non-indexed meshes have different sizes and
layouts. *The existing code is actually broken for non-indexed meshes*,
as it attempts to overlay the indirect parameters for non-indexed meshes
on top of those for indexed meshes. Consequently, right now the
parameters will be read incorrectly when multiple non-indexed meshes are
multi-drawn together. *This is a bug fix* and, as with the change to
dispatch phases separately noted above, was easiest to include in this
patch as opposed to separately.
## Migration Guide
* Systems that add custom phase items now need to populate the indirect
drawing-related buffers. See the `specialized_mesh_pipeline` example for
an example of how this is done.
# Objective
Allow setting ambient light via a component on cameras.
Arguably fixes #7193.
Note: I chose to use a component rather than an entity since it was not
clear to me how to manage multiple ambient sources for a single render
layer, and it makes for a very small changeset.
## Solution
- Make ambient light a component as well as a resource
- Extract it
- Use the component if present on a camera, falling back to the resource
## Testing
I added
```rs
if index == 1 {
    commands.entity(camera).insert(AmbientLight {
        color: Color::linear_rgba(1.0, 0.0, 0.0, 1.0),
        brightness: 1000.0,
        ..Default::default()
    });
}
```
at line 84 of the split_screen example
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
We won't be able to retain render phases from frame to frame if the keys
are unstable. It's not as simple as simply keying off the main world
entity, however, because some main world entities extract to multiple
render world entities. For example, directional lights extract to
multiple shadow cascades, and point lights extract to one view per
cubemap face. Therefore, we key off a new type, `RetainedViewEntity`,
which contains the main entity plus a *subview ID*.
This is part of the preparation for retained bins.
---------
Co-authored-by: ickshonpe <david.curthoys@googlemail.com>
# Objective
- Closes https://github.com/bevyengine/bevy/issues/14322.
## Solution
- Implement fast 4-sample bicubic filtering based on this Shadertoy:
https://www.shadertoy.com/view/4df3Dn, with a small speedup from a Ghost
of Tsushima presentation.
## Testing
- Did you test these changes? If so, how?
- Ran on lightmapped example. Practically no difference in that scene.
- Are there any parts that need more testing?
- Lightmapping a better scene.
## Changelog
- Lightmaps now have a higher quality bicubic sampling method (off by
default).
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
# Objective
I realized that setting these to `deny` may have been a little
aggressive - especially since we upgrade warnings to denies in CI.
## Solution
Downgrades these lints to `warn`, so that compiles can work locally. CI
will still treat these as denies.
# Objective
Stumbled upon a `from <-> form` transposition while reviewing a PR,
thought it was interesting, and went down a bit of a rabbit hole.
## Solution
Fix em
# Objective
- https://github.com/bevyengine/bevy/issues/17111
## Solution
Set the `clippy::allow_attributes` and
`clippy::allow_attributes_without_reason` lints to `deny`, and bring
`bevy_pbr` in line with the new restrictions.
## Testing
`cargo clippy --tests --package bevy_pbr` was run, and no errors were
encountered.
# Objective
Rework / build on #17043 to simplify the implementation. #17043 should
be merged first, and the diff from this PR will get much nicer after it
is merged (this PR is net negative LOC).
## Solution
1. Command and EntityCommand have been vastly simplified. No more marker
components. Just one function.
2. Command and EntityCommand are now generic on the return type. This
enables result-less commands to exist, and allows us to statically
distinguish between fallible and infallible commands, which allows us to
skip the "error handling overhead" for cases that don't need it.
3. There are now only two command queue variants: `queue` and
`queue_fallible`. `queue` accepts commands with no return type.
`queue_fallible` accepts commands that return a Result (specifically,
one that returns an error that can convert to
`bevy_ecs::result::Error`).
4. I've added the concept of the "default error handler", which is used
by `queue_fallible`. This is a simple direct call to the `panic()` error
handler by default. Users that want to override this can enable the
`configurable_error_handler` cargo feature, then initialize the
GLOBAL_ERROR_HANDLER value on startup. This is behind a flag because
there might be minor overhead with `OnceLock` and I'm guessing this will
be a niche feature. We can also do perf testing with OnceLock if someone
really wants it to be used unconditionally, but I don't personally feel
the need to do that.
5. I removed the "temporary error handler" on Commands (and all code
associated with it). It added more branching, made Commands bigger /
more expensive to initialize (note that we construct it at high
frequencies / treat it like a pointer type), made the code harder to
follow, and introduced a bunch of additional functions. We instead rely
on the new default error handler used in `queue_fallible` for most
things. In the event that a custom handler is required,
`handle_error_with` can be used.
6. EntityCommand now _only_ supports functions that take
`EntityWorldMut` (and all existing entity commands have been ported).
Removing the marker component from EntityCommand hinged on this change,
but I strongly believe this is for the best anyway, as this sets the
stage for more efficient batched entity commands.
7. I added `EntityWorldMut::resource` and the other variants for more
ergonomic resource access on `EntityWorldMut` (removes the need for
entity.world_scope, which also incurs entity-lookup overhead).
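A usage sketch of the simplified API described above (names and exact trait bounds are illustrative and may differ from the final implementation):
```rust
use bevy::ecs::world::EntityWorldMut;
use bevy::prelude::*;

fn queue_some_commands(mut commands: Commands) {
    // Infallible command: a plain closure over `&mut World`.
    commands.queue(|world: &mut World| {
        world.insert_resource(ClearColor(Color::BLACK));
    });

    // Entity command: closures over `EntityWorldMut` are the only supported form.
    commands.spawn_empty().queue(|mut entity: EntityWorldMut| {
        entity.insert(Name::new("spawned via entity command"));
    });
}
```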
## Open Questions
1. I believe we could merge `queue` and `queue_fallible` into a single
`queue` which accepts both fallible and infallible commands (via the
introduction of a `QueueCommand` trait). Is this desirable?
# Objective
Many instances of `clippy::too_many_arguments` linting happen to be on
systems - functions which we don't call manually, and thus there's not
much reason to worry about the argument count.
## Solution
Allow `clippy::too_many_arguments` globally, and remove all lint
attributes related to it.
# Objective
I never realized `clippy::type_complexity` was an allowed lint - I've
been assuming it'd generate a warning when performing my linting PRs.
## Solution
Removes any instances of `#[allow(clippy::type_complexity)]` and
`#[expect(clippy::type_complexity)]`
## Testing
`cargo clippy` ran without errors or warnings.
# Objective
Remove some outdated docs from 0.15 that mention a removed function.
## Solution
In `pbr_functions.wgsl`, I think it's fine to just remove the mention.
In `meshlet/asset.rs`, I think it would be nice to still have a note on
how texture samples should be done. Unfortunately, there isn't a nice
abstraction for it any more. Current workaround, for reference:
b386d08d0f/crates/bevy_pbr/src/render/pbr_fragment.wgsl (L184-L208)
For now, I've just removed the mention.
# Objective
The `get` function on [`InstanceInputUniformBuffer`] seems very
error-prone. This PR aims to fix that.
## Solution
Do a few checks to ensure the index is in bounds and that the `BDI` is
not removed.
Return `Option<BDI>` instead of `BDI`.
## Testing
- Did you test these changes? If so, how?
Added a test to verify that the instance buffer works correctly.
## Future Work
Performance decreases when using `.binary_search()`. However, this is
likely because [`InstanceInputUniformBuffer::get`] is never used for now;
only `get_unchecked` is.
## Migration Guide
`InstanceInputUniformBuffer::get` now returns `Option<BDI>` instead of
`BDI` to reduce panics. If you require the old functionality of
`InstanceInputUniformBuffer::get` consider using
`InstanceInputUniformBuffer::get_unchecked`.
---------
Co-authored-by: Tim Overbeek <oorbeck@gmail.com>
# Objective
- I'm compiling (parts of) bevy for an embedded platform with no 64-bit
atomic and ctrlc handler support. Some compilation errors came up. This
PR contains the fixes for those.
- Fix depth_bias casting in PBR material (Fixes #14169)
- Negative depth_bias values were cast to 0 before this PR
- An f32::INFINITY depth_bias value was cast to -1 before this PR
## Solutions
- Restrict 64bit atomic reflection to supported platforms
- Restrict ctrlc handler to supported platforms (linux, windows or macos
instead of "not wasm")
- The depth bias value (f32) is first cast to i32, then to u64, in order
to preserve negative values
## Testing
- This version compiles on a platform with no 64bit atomic support, and
no ctrlc support
- CtrlC handler still works on Linux and Windows (I can't test on Macos)
- depth_bias:
```rust
println!("{}",f32::INFINITY as u64 as i32); // Prints: -1 (old implementation)
println!("{}",f32::INFINITY as i32 as u64 as i32); // Prints: 2147483647 (expected, new implementation)
```
Also ran a modified version of 3d_scene example with the following
results:
RED cube depth_bias: -1000.0
BLUE cube depth_bias: 0.0

RED cube depth_bias: -INF
BLUE cube depth_bias: 0.0

RED cube depth_bias: INF (case reported in #14169)
BLUE cube depth_bias: 0.0
(I'm not completely sure what's going on with the shadows here; it seems
like depth_bias has some effect on those as well. If this is
unintentional, the issue was not introduced by this PR.)

Currently, our batchable binned items are stored in a hash table that
maps bin key, which includes the batch set key, to a list of entities.
Multidraw is handled by sorting the bin keys and accumulating adjacent
bins that can be multidrawn together (i.e. have the same batch set key)
into multidraw commands during `batch_and_prepare_binned_render_phase`.
This is reasonably efficient right now, but it will complicate future
work to retain indirect draw parameters from frame to frame. Consider
what must happen when we have retained indirect draw parameters and the
application adds a bin (i.e. a new mesh) that shares a batch set key
with some pre-existing meshes. (That is, the new mesh can be multidrawn
with the pre-existing meshes.) To be maximally efficient, our goal in
that scenario will be to update *only* the indirect draw parameters for
the batch set (i.e. multidraw command) containing the mesh that was
added, while leaving the others alone. That means that we have to
quickly locate all the bins that belong to the batch set being modified.
In the existing code, we would have to sort the list of bin keys so that
bins that can be multidrawn together become adjacent to one another in
the list. Then we would have to do a binary search through the sorted
list to find the location of the bin that was just added. Next, we would
have to widen our search to adjacent indexes that contain the same batch
set, doing expensive comparisons against the batch set key every time.
Finally, we would reallocate the indirect draw parameters and update the
stored pointers to the indirect draw parameters that the bins store.
By contrast, it'd be dramatically simpler if we simply changed the way
bins are stored to first map from batch set key (i.e. multidraw command)
to the bins (i.e. meshes) within that batch set key, and then from each
individual bin to the mesh instances. That way, the scenario above in
which we add a new mesh will be simpler to handle. First, we will look
up the batch set key corresponding to that mesh in the outer map to find
an inner map corresponding to the single multidraw command that will
draw that batch set. We will know how many meshes the multidraw command
is going to draw by the size of that inner map. Then we simply need to
reallocate the indirect draw parameters and update the pointers to those
parameters within the bins as necessary. There will be no need to do any
binary search or expensive batch set key comparison: only a single hash
lookup and an iteration over the inner map to update the pointers.
This patch implements the above technique. Because we don't have
retained bins yet, this PR provides no performance benefits. However, it
opens the door to maximally efficient updates when only a small number
of meshes change from frame to frame.
The main churn that this patch causes is that the *batch set key* (which
uniquely specifies a multidraw command) and *bin key* (which uniquely
specifies a mesh *within* that multidraw command) are now separate,
instead of the batch set key being embedded *within* the bin key.
In order to isolate potential regressions, I think that at least #16890,
#16836, and #16825 should land before this PR does.
## Migration Guide
* The *batch set key* is now separate from the *bin key* in
`BinnedPhaseItem`. The batch set key is used to collect multidrawable
meshes together. If you aren't using the multidraw feature, you can
safely set the batch set key to `()`.
Bump version after release
This PR has been auto-generated
---------
Co-authored-by: Bevy Auto Releaser <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
- Contributes to #11478
## Solution
- Made `bevy_utils::tracing` `doc(hidden)`
- Re-exported `tracing` from `bevy_log` for end-users
- Added `tracing` directly to crates that need it.
## Testing
- CI
---
## Migration Guide
If you were importing `tracing` via `bevy::utils::tracing`, instead use
`bevy::log::tracing`. Note that many items within `tracing` are also
directly re-exported from `bevy::log` as well, so you may only need
`bevy::log` for the most common items (e.g., `warn!`, `trace!`, etc.).
This also applies to the `log_once!` family of macros.
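A sketch of the updated imports (paths taken from the note above):
```rust
// Before: use bevy::utils::tracing::{info, warn};
use bevy::log::tracing; // full `tracing` re-export
use bevy::log::{info, warn}; // common macros re-exported directly

fn report_status(ok: bool) {
    if ok {
        info!("everything is fine");
    } else {
        warn!("something went wrong");
    }
    // Less common items can still be reached through the full re-export.
    tracing::trace!("finished reporting");
}
```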
## Notes
- While this doesn't reduce the line-count in `bevy_utils`, it further
decouples the internal crates from `bevy_utils`, making its eventual
removal more feasible in the future.
- I have just imported `tracing` as we do for all dependencies. However,
a workspace dependency may be more appropriate for version management.
Some hardware and driver combos, such as Intel Iris Xe, have low limits
on the number of samplers per shader, causing an overflow. With
first-class bindless arrays, `wgpu` should be able to work around this
limitation eventually, but for now we need to disable bindless materials
on those platforms.
This is an alternative to PR #17107 that calculates the precise number
of samplers needed and compares to the hardware sampler limit,
transparently falling back to non-bindless if the limit is exceeded.
Fixes #16988.
Derived `Default` for all public unit structs that already derive from
`Component`. This allows them to be used more easily as required
components.
To avoid clutter in tests/examples, only public components were
affected, but this could easily be expanded to affect all unit
components.
Fixes #17052.
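An illustrative sketch (both component names are made up): once a public unit marker derives `Default`, it can be listed as a required component without an explicit constructor.
```rust
use bevy::prelude::*;

// Public unit marker that now derives `Default`.
#[derive(Component, Default)]
pub struct Highlightable;

// Because `Highlightable: Default`, it can be required without a constructor.
#[derive(Component)]
#[require(Highlightable)]
pub struct SelectableMesh;
```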
# Objective
Improve DAG building for virtual geometry
## Solution
- Use METIS to group triangles into meshlets which lets us minimize
locked vertices which improves simplification, instead of using meshopt
which prioritizes culling efficiency. Also some other minor tweaks.
- Currently most meshlets have 126 triangles, and not 128. Fixing this
might involve calling METIS recursively ourselves to manually bisect the
graph, not sure. Not going to attempt to fix this in this PR.
## Testing
- Did you test these changes? If so, how?
- Tested on bunny.glb and cliff.glb
- Are there any parts that need more testing?
- No
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Download the new bunny asset, run the meshlet example.
---
## Showcase
New

Old

---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
Use the latest version of `typos` and fix the typos that it now detects
# Additional Info
By the way, `typos` has a "low priority typo suggestions issue" where we
can throw typos we find that `typos` doesn't catch.
(This link may go stale) https://github.com/crate-ci/typos/issues/1200
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/16556
- Closes https://github.com/bevyengine/bevy/issues/11807
## Solution
- Simplify custom projections by using a single source of truth -
`Projection`, removing all existing generic systems and types.
- Existing perspective and orthographic structs are no longer components
- I could dissolve these to simplify further, but keeping them around
was the fast way to implement this.
- Instead of generics, introduce a third variant, with a trait object.
- Do an object-safety dance with an intermediate trait to allow cloning
boxed camera projections. This is a normal Rust polymorphism papercut.
You can do this with a crate, but a manual impl is short and sweet.
## Testing
- Added a custom projection example
---
## Showcase
- Custom projections and projection handling has been simplified.
- Projection systems are no longer generic, with the potential for many
different projection components on the same camera.
- Instead `Projection` is now the single source of truth for camera
projections, and is the only projection component.
- Custom projections are still supported, and can be constructed with
`Projection::custom()`.
## Migration Guide
- `PerspectiveProjection` and `OrthographicProjection` are no longer
components. Use `Projection` instead.
- Custom projections should no longer be inserted as a component.
Instead, simply set the custom projection as a value of `Projection`
with `Projection::custom()`.
# Objective
Some sort calls and `Ord` impls are unnecessarily complex.
## Solution
Rewrite the "match on cmp, if equal do another cmp" as either a
comparison on tuples, or `Ordering::then_with`, depending on whether the
compare keys need construction.
`sort_by` -> `sort_by_key` when symmetrical. Do the same for
`min_by`/`max_by`.
Note that `total_cmp` can only work with `sort_by`, and not on tuples.
When sorting collected query results that contain
`Entity`/`MainEntity`/`RenderEntity` in their `QueryData`, with that
`Entity` in the sort key: switch from a stable to an unstable sort (all
queried entities are unique).
If key construction is not simple, switch to `sort_by_cached_key` when
possible.
Sorts that are only performed to discover the maximal element are
replaced by `max_by_key`.
Dedicated comparison functions and structs are removed where simple.
Derive `PartialOrd`/`Ord` when useful.
Misc. closure style inconsistencies.
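A small before/after sketch of the `then_with` rewrite (the `Item` type is made up; note the `total_cmp` caveat above):
```rust
use std::cmp::Ordering;

struct Item {
    layer: i32,
    depth: f32,
}

fn sort_items(items: &mut [Item]) {
    // Before: match on the first comparison and fall through to the second.
    items.sort_by(|a, b| match a.layer.cmp(&b.layer) {
        Ordering::Equal => a.depth.total_cmp(&b.depth),
        ordering => ordering,
    });

    // After: chain the comparisons. `total_cmp` still requires `sort_by`,
    // because a float can't be part of an `Ord` tuple key.
    items.sort_by(|a, b| a.layer.cmp(&b.layer).then_with(|| a.depth.total_cmp(&b.depth)));
}
```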
## Testing
- Existing tests.
This commit makes the following changes:
* `IndirectParametersBuffer` has been changed from a `BufferVec` to a
`RawBufferVec`. This won about 20us or so on Bistro by avoiding `encase`
overhead.
* The methods on the `GetFullBatchData` trait no longer have the
`entity` parameter, as it was unused.
* `PreprocessWorkItem`, which specifies a transform-and-cull operation,
now supplies the mesh instance uniform output index directly instead of
having the shader look it up from the indirect draw parameters.
Accordingly, the responsibility of writing the output index to the
indirect draw parameters has been moved from the CPU to the GPU. This is
in preparation for retained indirect instance draw commands, where the
mesh instance uniform output index may change from frame to frame, while
the indirect instance draw commands will be cached. We won't want the
CPU to have to upload the same indirect draw parameters again and again
if a batch didn't change from frame to frame.
* `batch_and_prepare_binned_render_phase` and
`batch_and_prepare_sorted_render_phase` now allocate indirect draw
commands for an entire batch set at a time when possible, instead of one
batch at a time. This change will allow us to retain the indirect draw
commands for whole batch sets.
* `GetFullBatchData::get_batch_indirect_parameters_index` has been
replaced with `GetFullBatchData::write_batch_indirect_parameters`, which
takes an offset and writes into it instead of allocating. This is
necessary in order to use the optimization mentioned in the previous
point.
* At the WGSL level, `IndirectParameters` has been factored out into
`mesh_preprocess_types.wgsl`. This is because we'll need a new compute
shader that zeroes out the instance counts in preparation for a new
frame. That shader will need to access `IndirectParameters`, so it was
moved to a separate file.
* Bins are no longer raw vectors but are instances of a separate type,
`RenderBin`. This is so that the bin can eventually contain its retained
batches.
# Objective
Fixes #16104
## Solution
I removed all instances of `:?` and put them back one by one where it
caused an error.
I removed some bevy_utils helper functions that were only used in 2
places and don't add value. See: #11478
## Testing
CI should catch the mistakes
## Migration Guide
`bevy::utils::{dbg,info,warn,error}` were removed. Use
`bevy::utils::tracing::{debug,info,warn,error}` instead.
---------
Co-authored-by: SpecificProtagonist <vincentjunge@posteo.net>
A previous PR, #14599, attempted to enable lightmaps in deferred mode,
but it still used the `OpaqueNoLightmap3dBinKey`, which meant that it
would be broken if multiple lightmaps were used. This commit fixes that
issue, and allows bindless lightmaps to work with deferred rendering as
well.
# Objective
- Running example `load_gltf` when not using bindless gives this error
```
ERROR bevy_render::render_resource::pipeline_cache: failed to process shader:
error: no definition in scope for identifier: 'slot'
┌─ crates/bevy_pbr/src/render/pbr_fragment.wgsl:153:13
│
153 │ slot,
│ ^^^^ unknown identifier
│
= no definition in scope for identifier: 'slot'
```
- since https://github.com/bevyengine/bevy/pull/16825
## Solution
- Set `slot` to the expected value when not bindless
- Also use it for `uv_b`
## Testing
- Run example `load_gltf` on a Mac or in wasm
This PR simply exposes Bevy PBR's
`TONEMAPPING_LUT_TEXTURE_BINDING_INDEX` and
`TONEMAPPING_LUT_SAMPLER_BINDING_INDEX`.
# Objective
Alongside #16932, this is the last required change to be able to replace
Bevy's built-in deferred lighting pass with a custom one based on the
original logic.
Fixes a crash when using deferred rendering but disabling the default
deferred lighting plugin.
# The Issue
The `ScreenSpaceReflectionsPlugin` references
`NodePbr::DeferredLightingPass`, which hasn't been added when
`PbrPlugin::add_default_deferred_lighting_plugin` is `false`.
This yields the following crash:
```
thread 'main' panicked at /Users/marius/Documents/dev/bevy/crates/bevy_render/src/render_graph/graph.rs:155:26:
InvalidNode(DeferredLightingPass)
stack backtrace:
0: rust_begin_unwind
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:665:5
1: core::panicking::panic_fmt
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:74:14
2: bevy_render::render_graph::graph::RenderGraph::add_node_edges
at /Users/marius/Documents/dev/bevy/crates/bevy_render/src/render_graph/graph.rs:155:26
3: <bevy_app::sub_app::SubApp as bevy_render::render_graph::app::RenderGraphApp>::add_render_graph_edges
at /Users/marius/Documents/dev/bevy/crates/bevy_render/src/render_graph/app.rs:66:13
4: <bevy_pbr::ssr::ScreenSpaceReflectionsPlugin as bevy_app::plugin::Plugin>::finish
at /Users/marius/Documents/dev/bevy/crates/bevy_pbr/src/ssr/mod.rs:234:9
5: bevy_app::app::App::finish
at /Users/marius/Documents/dev/bevy/crates/bevy_app/src/app.rs:255:13
6: bevy_winit::state::winit_runner
at /Users/marius/Documents/dev/bevy/crates/bevy_winit/src/state.rs:859:9
7: core::ops::function::FnOnce::call_once
at /Users/marius/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
8: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /Users/marius/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
9: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /Users/marius/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/alloc/src/boxed.rs:2454:9
10: bevy_app::app::App::run
at /Users/marius/Documents/dev/bevy/crates/bevy_app/src/app.rs:184:9
11: bevy_deferred_test::main
at ./src/main.rs:9:5
12: core::ops::function::FnOnce::call_once
at /Users/marius/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
```
### Minimal reproduction example:
```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::{DefaultOpaqueRendererMethod, PbrPlugin, ScreenSpaceReflections};
use bevy::prelude::*;
fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(PbrPlugin {
            add_default_deferred_lighting_plugin: false,
            ..default()
        }))
        .add_systems(Startup, setup)
        .insert_resource(DefaultOpaqueRendererMethod::deferred())
        .run();
}
/// set up a camera
fn setup(mut commands: Commands) {
    // camera
    commands.spawn((
        Camera3d::default(),
        Transform::from_xyz(-2.5, 4.5, 9.0).looking_at(Vec3::ZERO, Vec3::Y),
        DepthPrepass,
        DeferredPrepass,
        ScreenSpaceReflections::default(),
    ));
}
```
# The Fix
When no node under the default lighting node's label exists, this label
isn't added to the SSR's graph node edges. It's good to keep the
SSRPlugin enabled, this way, users can plug in their own lighting
system, which I have successfully done on top of this PR.
# Workarounds
A current workaround for this issue is to re-use Bevy's
`NodePbr::DeferredLightingPass` as the label for your own custom
lighting pass node.
# Objective
Fixes: #16578
## Solution
This is a patch fix; the proper fix requires a breaking change.
Added a `Panic` enum variant and used it as the system meta default.
Warn-once behavior can be enabled the same way disabling panics
(originally, disabling warns) is.
To fix an issue with the current architecture, where **all** combinator
system params get checked together,
combinator systems only check params of the first system.
This will result in old, panicking behavior on subsequent systems and
will be fixed in 0.16.
## Testing
Ran unit tests and `fallible_params` example.
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
Revert the retry queue for stuck meshlet groups that couldn't simplify
added in https://github.com/bevyengine/bevy/pull/15886.
It was a hack that didn't really work, that was intended to help solve
meshlets getting stuck and never getting simplified further. The actual
solution is a new DAG building algorithm that I have coming in a
followup PR. With that PR, there will be no need for the retry queue, as
meshlets will rarely ever get stuck (I checked, the code never gets
called). I split this off into its own PR for easier reviewing.
Meshlet IDs during building are back to being relative to the overall
list of meshlets across all LODs, instead of starting at 0 for the first
meshlet in the simplification queue for the current LOD, regardless of
how many meshlets there are in the asset total.
Not going to bother to regenerate the bunny asset for this PR.
This commit fixes the following regressions:
1. Transmission-specific calls to shader lighting functions didn't pass
the `enable_diffuse` parameter, breaking the `transmission` example.
2. The combination of bindless `StandardMaterial` and bindless lightmaps
caused us to blow past the 128 texture limit on M1/M2 chips in some
cases, in particular the `depth_of_field` example.
https://github.com/gfx-rs/wgpu/issues/3334 should fix this, but in the
meantime this patch reduces the number of bindless lightmaps from 16 to
4 in order to stay under the limit.
3. The renderer was crashing on startup on Adreno 610 chips. This PR
simply disables bindless on Adreno 610 and lower.
# Objective
`EntityHashMap` and `EntityHashSet` iterators do not implement
`EntitySetIterator`.
## Solution
Make them newtypes instead of aliases. The methods that create the
iterators can then produce their own newtypes that carry the `Hasher`
generic and implement `EntitySetIterator`. Functionality remains the
same otherwise.
There are some other small benefits, e.g. the removal of `with_hasher`
associated functions, and the ability to implement more traits
ourselves.
`MainEntityHashMap` and `MainEntityHashSet` are currently left as the
previous type aliases, because supporting general `TrustedEntityBorrow`
hashing is more complex. However, it can also be done.
## Testing
Pre-existing `EntityHashMap` tests.
## Migration Guide
Users of `with_hasher` and `with_capacity_and_hasher` on
`EntityHashMap`/`Set` must now use `new` and `with_capacity`
respectively.
If the non-newtyped versions are required, they can be obtained via
`Deref`, `DerefMut` or `into_inner` calls.
# Objective
When preparing `GpuImage`s, we currently discard the
`depth_or_array_layers` of the `Image`'s size by converting it into a
`UVec2`.
Fixes #16715.
## Solution
Change `GpuImage::size` to `Extent3d`, and just pass that through when
creating `GpuImage`s.
Also copy the `aspect_ratio` and `size` (now `size_2d`, for
disambiguation from the field) functions from `Image` to `GpuImage` for
ease of use with 2D textures.
I originally copied all size-related functions (like `width` and
`height`), but I think they are unnecessary considering how visible the
`size` field on `GpuImage` is compared to `Image`.
## Testing
Tested via `cargo r -p ci` for everything except docs, when generating
docs it keeps spitting out a ton of
```
error[E0554]: `#![feature]` may not be used on the stable release channel
--> crates/bevy_dylib/src/lib.rs:1:21
|
1 | #![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
```
Not sure why this is happening, but it also happens without my changes,
so it's almost certainly some strange issue specific to my machine.
## Migration Guide
- `GpuImage::size` is now an `Extent3d`. To easily get 2D size, use
`size_2d()`.
The only thing that was preventing `extract_meshes_for_gpu_building` and
`extract_mesh_materials` from running in parallel was the
`ResMut<RenderMeshMaterialIds>`. This lookup can be safely moved to the
`collect_meshes_for_gpu_building` phase, which runs after the extraction
phase.
This results in a small win on `many_cubes`. `extract_mesh_materials` is
currently nonretained, so it's still slow, but running it in parallel is
an easy win.
Before:

After:

Currently, `check_visibility` is parameterized over a query filter that
specifies the type of potentially-visible object. This has the
unfortunate side effect that we need a separate system,
`mark_view_visibility_as_changed_if_necessary`, to trigger view
visibility change detection. That system is quite slow because it must
iterate sequentially over all entities in the scene.
This PR moves the query filter from `check_visibility` to a new
component, `VisibilityClass`. `VisibilityClass` stores a list of type
IDs, each corresponding to one of the query filters we used to use.
Because `check_visibility` is no longer specialized to the query filter
at the type level, Bevy now only needs to invoke it once, leading to
better performance as `check_visibility` can do change detection on the
fly rather than delegating it to a separate system.
This commit also has ergonomic improvements, as there's no need for
applications that want to add their own custom renderable components to
add specializations of the `check_visibility` system to the schedule.
Instead, they only need to ensure that the `ViewVisibility` component is
properly kept up to date. The recommended way to do this, and the way
that's demonstrated in the `custom_phase_item` and
`specialized_mesh_pipeline` examples, is to make `ViewVisibility` a
required component and to add the type ID to it in a component add hook.
This patch does this for `Mesh3d`, `Mesh2d`, `Sprite`, `Light`, and
`Node`, which means that most app code doesn't need to change at all.
Note that, although this patch has a large impact on the performance of
visibility determination, it doesn't actually improve the end-to-end
frame time of `many_cubes`. That's because the render world was already
effectively hiding the latency from
`mark_view_visibility_as_changed_if_necessary`. This patch is, however,
necessary for *further* improvements to `many_cubes` performance.
`many_cubes` trace before:

`many_cubes` trace after:

## Migration Guide
* `check_visibility` no longer takes a `QueryFilter`, and there's no
need to add it manually to your app schedule anymore for custom
rendering items. Instead, entities with custom renderable components
should add the appropriate type IDs to `VisibilityClass`. See
`custom_phase_item` for an example.
This PR adds support for *mixed lighting* to Bevy, whereby some parts of
the scene are lightmapped, while others take part in real-time lighting.
(Here *real-time lighting* means lighting at runtime via the PBR shader,
as opposed to precomputed light using lightmaps.) It does so by adding a
new field, `affects_lightmapped_meshes` to `IrradianceVolume` and
`AmbientLight`, and a corresponding field
`affects_lightmapped_mesh_diffuse` to `DirectionalLight`, `PointLight`,
`SpotLight`, and `EnvironmentMapLight`. By default, this value is set to
true; when set to false, the light contributes nothing to the diffuse
irradiance component to meshes with lightmaps.
Note that specular light is unaffected. This is because the correct way
to bake specular lighting is *directional lightmaps*, which we have no
support for yet.
There are two general ways I expect this field to be used:
1. When diffuse indirect light is baked into lightmaps, irradiance
volumes and reflection probes shouldn't contribute any diffuse light to
the static geometry that has a lightmap. That's because the baking tool
should have already accounted for it, and in a higher-quality fashion,
as lightmaps typically offer a higher effective texture resolution than
the light probe does.
2. When direct diffuse light is baked into a lightmap, punctual lights
shouldn't contribute any diffuse light to static geometry with a
lightmap, to avoid double-counting. It may seem odd to bake *direct*
light into a lightmap, as opposed to indirect light. But there is a use
case: in a scene with many lights, avoiding light leaks requires shadow
mapping, which quickly becomes prohibitive when many lights are
involved. Baking lightmaps allows light leaks to be eliminated on static
geometry.
A new example, `mixed_lighting`, has been added. It demonstrates a sofa
(model from the [glTF Sample Assets]) that has been lightmapped offline
using [Bakery]. It has four modes:
1. In *baked* mode, all objects are locked in place, and all the diffuse
direct and indirect light has been calculated ahead of time. Note that
the bottom of the sphere has a red tint from the sofa, illustrating that
the baking tool captured indirect light for it.
2. In *mixed direct* mode, lightmaps capturing diffuse direct and
indirect light have been pre-calculated for the static objects, but the
dynamic sphere has real-time lighting. Note that, because the diffuse
lighting has been entirely pre-calculated for the scenery, the dynamic
sphere casts no shadow. In a real app, you would typically use real-time
lighting for the most important light so that dynamic objects can shadow
the scenery and relegate baked lighting to the less important lights for
which shadows aren't as important. Also note that there is no red tint
on the sphere, because there is no global illumination applied to it. In
an actual game, you could fix this problem by supplementing the
lightmapped objects with an irradiance volume.
3. In *mixed indirect* mode, all direct light is calculated in
real-time, and the static objects have pre-calculated indirect lighting.
This corresponds to the mode that most applications are expected to use.
Because direct light on the scenery is computed dynamically, shadows are
fully supported. As in mixed direct mode, there is no global
illumination on the sphere; in a real application, irradiance volumes
could be used to supplement the lightmaps.
4. In *real-time* mode, no lightmaps are used at all, and all punctual
lights are rendered in real-time. No global illumination exists.
In the example, you can click around to move the sphere, unless you're
in baked mode, in which case the sphere must be locked in place to be
lit correctly.
## Showcase
Baked mode:

Mixed direct mode:

Mixed indirect mode (default):

Real-time mode:

## Migration guide
* The `AmbientLight` resource, the `IrradianceVolume` component, and the
`EnvironmentMapLight` component now have `affects_lightmapped_meshes`
fields. If you don't need to use that field (for example, if you aren't
using lightmaps), you can safely set the field to true.
* `DirectionalLight`, `PointLight`, and `SpotLight` now have
`affects_lightmapped_mesh_diffuse` fields. If you don't need to use that
field (for example, if you aren't using lightmaps), you can safely set
the field to true.
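A minimal sketch of the second point (the field name comes from the migration note; everything else is defaulted):
```rust
use bevy::prelude::*;

fn spawn_baked_direct_light(mut commands: Commands) {
    // This light contributes no diffuse light to lightmapped meshes, avoiding
    // double-counting when direct light is already baked into the lightmap.
    commands.spawn(DirectionalLight {
        affects_lightmapped_mesh_diffuse: false,
        ..default()
    });
}
```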
[glTF Sample Assets]:
https://github.com/KhronosGroup/glTF-Sample-Assets/tree/main
[Bakery]:
https://geom.io/bakery/wiki/index.php?title=Bakery_-_GPU_Lightmapper
This commit allows Bevy to bind 16 lightmaps at a time, if the current
platform supports bindless textures. Naturally, if bindless textures
aren't supported, Bevy falls back to binding only a single lightmap at a
time. As lightmaps are usually heavily atlased, I doubt many scenes will
use more than 16 lightmap textures.
This has little performance impact now, but it's desirable for us to
reap the benefits of multidraw and bindless textures on scenes that use
lightmaps. Otherwise, we might have to break batches in order to switch
those lightmaps.
Additionally, this PR slightly reduces the cost of binning because it
makes the lightmap index in `Opaque3dBinKey` 32 bits instead of an
`AssetId`.
## Migration Guide
* The `Opaque3dBinKey::lightmap_image` field is now
`Opaque3dBinKey::lightmap_slab`, which is a lightweight identifier for
an entire binding array of lightmaps.
# Objective
We were waiting for 1.83 to address most of these, due to a bug with
`missing_docs` and `expect`. Relates to, but does not entirely complete,
#15059.
## Solution
- Upgrade to 1.83
- Switch `allow(missing_docs)` to `expect(missing_docs)`
- Remove a few now-unused `allow`s along the way, or convert to `expect`
I forgot to set `BINDLESS_SLOT_COUNT` in `ExtendedMaterial`'s
implementation of `AsBindGroup`, so it didn't actually become bindless.
In fact, it would usually crash with a shader/bind group layout
mismatch, because some parts of Bevy's renderer thought that the
resulting material was bindless while other parts didn't. This commit
corrects the situation.
I had to make `BINDLESS_SLOT_COUNT` a function instead of a constant
because the `ExtendedMaterial` version needs some logic. Unfortunately,
trait methods can't be `const fn`s, so it has to be a runtime function.
This patch replaces the undocumented `NoGpuCulling` component with a new
component, `NoIndirectDrawing`, effectively turning indirect drawing on
by default. Indirect mode is needed for the recently-landed multidraw
feature (#16427). Since multidraw is such a win for performance, when
that feature is supported the small performance tax that indirect mode
incurs is virtually always worth paying.
To ensure that custom drawing code such as that in the
`custom_shader_instancing` example continues to function, this commit
additionally makes GPU culling take the `NoFrustumCulling` component
into account.
This PR is an alternative to #16670 that doesn't break the
`custom_shader_instancing` example. **PR #16755 should land first in
order to avoid breaking deferred rendering, as multidraw currently
breaks it**.
## Migration Guide
* Indirect drawing (GPU culling) is now enabled by default, so the
`GpuCulling` component is no longer available. To disable indirect mode,
which may be useful with custom render nodes, add the new
`NoIndirectDrawing` component to your camera.
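A sketch of the opt-out (the component name comes from the migration note; the import path is an assumption):
```rust
use bevy::prelude::*;
use bevy::render::view::NoIndirectDrawing; // module path assumed

fn spawn_camera(mut commands: Commands) {
    // Opt this camera out of the now-default indirect drawing path, e.g. when
    // custom render nodes don't understand indirect draws.
    commands.spawn((Camera3d::default(), NoIndirectDrawing));
}
```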
This commit resolves most of the failures seen in #16670. It contains
two major fixes:
1. The prepass shaders weren't updated for bindless mode, so they were
accessing `material` as a single element instead of as an array. I added
the needed `BINDLESS` check.
2. If the mesh didn't support batch set keys (i.e. `get_batch_set_key()`
returns `None`), and multidraw was enabled, the batching logic would try
to multidraw all the meshes in a bin together instead of disabling
multidraw. This is because we checked whether the `Option<BatchSetKey>`
for the previous batch was equal to the `Option<BatchSetKey>` for the
next batch to determine whether objects could be multidrawn together,
which would return true if batch set keys were absent, causing an entire
bin to be multidrawn together. This patch fixes the logic so that
multidraw is only enabled if the batch set keys match *and are `Some`*.
Additionally, this commit adds batch key support for bins that use
`Opaque3dNoLightmapBinKey`, which in practice means prepasses.
Consequently, this patch enables multidraw for the prepass when GPU
culling is enabled.
When testing this patch, try adding `GpuCulling` to the camera in the
`deferred_rendering` and `ssr` examples. You can see that these examples
break without this patch and work properly with it.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Updating dependencies; adopted version of #15696. (Supersedes #15696.)
Long answer: hashbrown is no longer using ahash by default, meaning that
we can't use the default-hasher methods with ahash. So, we have to use
the longer-winded versions instead. This takes the opportunity to also
switch our default hasher as well, but without actually enabling the
default-hasher feature for hashbrown, meaning that we'll be able to
change our hasher more easily at the cost of all of these method calls
being obnoxious forever.
One large change from 0.15 is that `insert_unique_unchecked` is now
`unsafe`, and for cases where unsafe code was denied at the crate level,
I replaced it with `insert`.
## Migration Guide
`bevy_utils` has updated its version of `hashbrown` to 0.15 and now
defaults to `foldhash` instead of `ahash`. This means that if you've
hard-coded your hasher to `bevy_utils::AHasher` or separately used the
`ahash` crate in your code, you may need to switch to `foldhash` to
ensure that everything works like it does in Bevy.
This commit makes skinned meshes batchable on platforms other than WebGL
2. On supported platforms, it replaces the two uniform buffers used for
joint matrices with a pair of storage buffers containing all matrices
for all skinned meshes packed together. The indices into the buffer are
stored in the mesh uniform and mesh input uniform. The GPU mesh
preprocessing step copies the indices in if that step is enabled.
On the `many_foxes` demo, I observed a frame time decrease from 15.470ms
to 11.935ms. This is the result of reducing the `submit_graph_commands`
time from an average of 5.45ms to 0.489ms, an 11x speedup in that
portion of rendering.

This is what the profile looks like for `many_foxes` after these
changes.

---------
Co-authored-by: François Mockers <mockersf@gmail.com>
This commit makes `StandardMaterial` use bindless textures, as
implemented in PR #16368. Non-bindless mode, as used for example in
Metal and WebGL 2, remains fully supported via a plethora of `#ifdef
BINDLESS` preprocessor definitions.
Unfortunately, this PR introduces quite a bit of unsightliness into the
PBR shaders. This is a result of the fact that WGSL supports neither
passing binding arrays to functions nor passing individual *elements* of
binding arrays to functions, except directly to texture sample
functions. Thus we're unable to use the `sample_texture` abstraction
that helped abstract over the meshlet and non-meshlet paths. I don't
think there's anything we can do to help this other than to suggest
improvements to upstream Naga.
This patch makes shadows use multidraw when the camera they'll be drawn
to has the `GpuCulling` component. This results in a significant
reduction in drawcalls; Bistro Exterior drops to 3 drawcalls for each
shadow cascade.
Note that PR #16670 will remove the `GpuCulling` component, making
shadows automatically use multidraw. Beware of that when testing this
patch; before #16670 lands, you'll need to manually add `GpuCulling` to
your camera in order to see any performance benefits.
PR #15756 made us create temporary render entities for all visible
objects, even if they had no render world counterpart. This regressed
our `many_cubes` time from about 3.59 ms/frame to 4.66 ms/frame.
This commit changes that behavior to use `Entity::PLACEHOLDER` instead
of creating a temporary render entity. This improves our `many_cubes`
time from 5.66 ms/frame to 3.96 ms/frame, a 43% speedup.
I tested 3D, 2D gizmos, and UI and they seem to work.
See the following graph of `many_cubes` frame time (lower is better). PR
#15756 is the one in October.

This commit removes the logic that attempted to keep the
`MeshInputUniform` buffer contiguous. Not only was it slow and complex,
but it was also incorrect, which caused #16686 and #16690. I changed the
logic to simply maintain a free list of unused slots in the buffer and
preferentially fill them when pushing new mesh input uniforms.
Closes #16686.
Closes #16690.
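A generic sketch of the free-list strategy described above (types and names are illustrative, not the actual `MeshInputUniform` code):
```rust
/// A slot buffer that reuses holes left by removals instead of compacting.
struct SlotBuffer<T> {
    slots: Vec<Option<T>>,
    free: Vec<usize>,
}

impl<T> SlotBuffer<T> {
    /// Preferentially fills a previously freed slot; only grows when none exist.
    fn insert(&mut self, value: T) -> usize {
        if let Some(index) = self.free.pop() {
            self.slots[index] = Some(value);
            index
        } else {
            self.slots.push(Some(value));
            self.slots.len() - 1
        }
    }

    /// Frees a slot, remembering it for reuse by a later `insert`.
    fn remove(&mut self, index: usize) -> Option<T> {
        let value = self.slots[index].take();
        if value.is_some() {
            self.free.push(index);
        }
        value
    }
}
```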
# Objective
- A `Trigger` has multiple associated `Entity`s - the entity observing
the event, and the entity that was targeted by the event.
- The field `entity: Entity` encodes no semantic information about what
the entity is used for, you can already tell that it's an `Entity` by
the type signature!
## Solution
- Rename `trigger.entity()` to `trigger.target()`
---
## Changelog
- `Trigger`s are associated with multiple entities. `Trigger::entity()`
has been renamed to `Trigger::target()` to reflect the semantics of the
entity being returned.
## Migration Guide
- Rename `Trigger::entity()` to `Trigger::target()`.
- Rename `ObserverTrigger::entity` to `ObserverTrigger::target`
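A small observer sketch using the renamed accessor (the event and systems are made up):
```rust
use bevy::prelude::*;

#[derive(Event)]
struct Explode;

// Before: `trigger.entity()`; after: `trigger.target()` returns the entity
// the event was targeted at.
fn on_explode(trigger: Trigger<Explode>, mut commands: Commands) {
    commands.entity(trigger.target()).despawn();
}

fn plugin(app: &mut App) {
    app.add_observer(on_explode);
}
```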
# Objective
Fixes typos in bevy project, following suggestion in
https://github.com/bevyengine/bevy-website/pull/1912#pullrequestreview-2483499337
## Solution
I used https://github.com/crate-ci/typos to find them.
I included only the ones that feel undebatable to me, but I am not a
game engine expert, so maybe some of these terms are expected.
I left out the following typos:
- `reparametrize` => `reparameterize`: There are a lot of occurrences, I
believe this was expected
- `semicircles` => `hemicircles`: 2 occurrences, may mean something
specific in geometry
- `invertation` => `inversion`: may mean something specific
- `unparented` => `parentless`: may mean something specific
- `metalness` => `metallicity`: may mean something specific
## Testing
- Did you test these changes? If so, how? I did not test the changes,
most changes are related to raw text. I expect the others to be tested
by the CI.
- Are there any parts that need more testing? I do not think
- How can other people (reviewers) test your changes? Is there anything
specific they need to know? To me there is nothing to test
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
---
## Migration Guide
> This section is optional. If there are no breaking changes, you can
delete this section.
(kept in case I include the `reparameterize` change here)
- If this PR is a breaking change (relative to the last release of
Bevy), describe how a user might need to migrate their code to support
these changes
- Simply adding new functionality is not a breaking change.
- Fixing behavior that was definitely a bug, rather than a questionable
design choice is not a breaking change.
## Questions
- [x] Should I include the above typos? No
(https://github.com/bevyengine/bevy/pull/16702#issuecomment-2525271152)
- [ ] Should I add `typos` to the CI? (I will check how to configure it
properly)
This project looks awesome, I really enjoy reading the progress made,
thanks to everyone involved.
This commit adds support for *multidraw*, which is a feature that allows
multiple meshes to be drawn in a single drawcall. `wgpu` currently
implements multidraw on Vulkan, so this feature is only enabled there.
Multiple meshes can be drawn at once if they're in the same vertex and
index buffers and are otherwise placed in the same bin. (Thus, for
example, at present the materials and textures must be identical, but
see #16368.) Multidraw is a significant performance improvement during
the draw phase because it reduces the number of rebindings, as well as
the number of drawcalls.
This feature is currently only enabled when GPU culling is used: i.e.
when `GpuCulling` is present on a camera. Therefore, if you run for
example `scene_viewer`, you will not see any performance improvements,
because `scene_viewer` doesn't add the `GpuCulling` component to its
camera.
Additionally, the multidraw feature is only implemented for opaque 3D
meshes and not for shadows or 2D meshes. I plan to make GPU culling the
default and to extend the feature to shadows in the future. Also, in the
future I suspect that polyfilling multidraw on APIs that don't support
it will be fruitful, as even without driver-level support use of
multidraw allows us to avoid expensive `wgpu` rebindings.
# Objective
- Remove `derive_more`'s error derivation and replace it with
`thiserror`
## Solution
- Added `derive_more`'s `error` feature to `deny.toml` to prevent it
sneaking back in.
- Reverted to `thiserror` error derivation
## Notes
Merge conflicts were too numerous to revert the individual changes, so
this reversion was done manually. Please scrutinise carefully during
review.
# Objective
Volumetric fog was broken by #13746.
Looks like this particular shader just got missed. I don't see any other
instances of `unpack_offset_and_counts` in the codebase.
```
2024-12-06T03:18:42.297494Z ERROR bevy_render::render_resource::pipeline_cache: failed to process shader:
error: no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
┌─ crates/bevy_pbr/src/volumetric_fog/volumetric_fog.wgsl:312:29
│
312 │ let offset_and_counts = bevy_pbr::clustered_forward::unpack_offset_and_counts(cluster_index);
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ unknown identifier
│
= no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
```
## Solution
Use `unpack_clusterable_object_index_ranges` to get the indices for
point/spot lights.
## Testing
`cargo run --example volumetric_fog`
`cargo run --example fog_volumes`
`cargo run --example scrolling_fog`
The bindless PR (#16368) broke some examples:
* `specialized_mesh_pipeline` and `custom_shader_instancing` failed
because they expect to be able to render a mesh with no material, by
overriding enough of the render pipeline to be able to do so. This PR
fixes the issue by restoring the old behavior in which we extract meshes
even if they have no material.
* `texture_binding_array` broke because it doesn't implement
`AsBindGroup::unprepared_bind_group`. This was tricky to fix because
there's a very good reason why `texture_binding_array` doesn't implement
that method: there's no sensible way to do so with `wgpu`'s current
bindless API, due to its multiple levels of borrowed references. To fix
the example, I split `MaterialBindGroup` into
`MaterialBindlessBindGroup` and `MaterialNonBindlessBindGroup`, and
allow direct custom implementations of `AsBindGroup::as_bind_group` for
the latter type of bind groups. To opt in to the new behavior, return
the `AsBindGroupError::CreateBindGroupDirectly` error from your
`AsBindGroup::unprepared_bind_group` implementation, and Bevy will call
your custom `AsBindGroup::as_bind_group` method as before.
## Migration Guide
* Bevy will now unconditionally call
`AsBindGroup::unprepared_bind_group` for your materials, so you must no
longer panic in that function. Instead, return the new
`AsBindGroupError::CreateBindGroupDirectly` error, and Bevy will fall
back to calling `AsBindGroup::as_bind_group` as before.
This commit moves the front end of the rendering pipeline to a retained
model when GPU preprocessing is in use (i.e. by default, except in
constrained environments). `RenderMeshInstance` and `MeshUniformData`
are stored from frame to frame and are updated only for the entities
that changed state. This was rather tricky and required some careful
surgery to keep the data valid in the case of removals.
This patch is built on top of Bevy's change detection. Generally, this
worked, except that `ViewVisibility` isn't currently properly tracked.
Therefore, this commit adds proper change tracking for `ViewVisibility`.
Doing this required adding a new system that runs after all
`check_visibility` invocations, as no single `check_visibility`
invocation has enough global information to detect changes.
On the Bistro exterior scene, with all textures forced to opaque, this
patch improves steady-state `extract_meshes_for_gpu_building` from
93.8us to 34.5us and steady-state `collect_meshes_for_gpu_building` from
195.7us to 4.28us. Altogether this constitutes an improvement from 290us
to 38us, which is a 7.46x speedup.


This patch is only lightly tested and shouldn't land before 0.15 is
released anyway, so I'm releasing it as a draft.
This commit allows the Bevy renderer to use the clustering
infrastructure for light probes (reflection probes and irradiance
volumes) on platforms where at least 3 storage buffers are available. On
such platforms (the vast majority), we stop performing brute-force
searches of light probes for each fragment and instead only search the
light probes with bounding spheres that intersect the current cluster.
This should dramatically improve scalability of irradiance volumes and
reflection probes.
The primary platform that doesn't support 3 storage buffers is WebGL 2,
and we continue using a brute-force search of light probes on that
platform, as the UBO that stores per-cluster indices is too small to fit
the light probe counts. Note, however, that that platform also doesn't
support bindless textures (indeed, it would be very odd for a platform
to support bindless textures but not SSBOs), so we only support one of
each type of light probe per drawcall there in the first place.
Consequently, this isn't a performance problem, as the search will only
have one light probe to consider. (In fact, clustering would probably
end up being a performance loss.)
Known potential improvements include:
1. We currently cull based on a conservative bounding sphere test and
not based on the oriented bounding box (OBB) of the light probe. This is
improvable, but in the interests of simplicity, I opted to keep the
bounding sphere test for now. The OBB improvement can be a follow-up.
2. This patch doesn't change the fact that each fragment only takes a
single light probe into account. Typical light probe implementations
detect the case in which multiple light probes cover the current
fragment and perform some sort of weighted blend between them. As the
light probe fetch function presently returns only a single light probe,
implementing that feature would require more code restructuring, so I
left it out for now. It can be added as a follow-up.
3. Light probe implementations typically have a falloff range. Although
this is a wanted feature in Bevy, this particular commit also doesn't
implement that feature, as it's out of scope.
4. This commit doesn't raise the maximum number of light probes past its
current value of 8 for each type. This should be addressed later, but
would possibly require more bindings on platforms with storage buffers,
which would increase this patch's complexity. Even without raising the
limit, this patch should constitute a significant performance
improvement for scenes that get anywhere close to this limit. In the
interest of keeping this patch small, I opted to leave raising the limit
to a follow-up.
## Changelog
### Changed
* Light probes (reflection probes and irradiance volumes) are now
clustered on most platforms, improving performance when many light
probes are present.
---------
Co-authored-by: Benjamin Brienen <Benjamin.Brienen@outlook.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Currently, the prepass has no support for visibility ranges, so
artifacts appear when using dithering visibility ranges in conjunction
with a prepass. This patch fixes that problem.
Note that this patch changes the prepass to use sparse bind group
indices instead of sequential ones. I figured this is cleaner, because
it allows for greater sharing of WGSL code between the forward pipeline
and the prepass pipeline.
The `visibility_range` example has been updated to allow the prepass to
be toggled on and off.
# Objective
Make documentation of a component's required components more visible by
moving it to the type's docs
## Solution
Change `#[require]` from a derive macro helper to an attribute macro.
Disadvantages:
- this silences any unused code warnings on the component, as it is used
by the macro!
- need to import `require` if not using the ecs prelude (I have not
included this in the migration guide as Rust tooling already suggests
the fix)
---
## Showcase

---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
This patch adds the infrastructure necessary for Bevy to support
*bindless resources*, by adding a new `#[bindless]` attribute to
`AsBindGroup`.
Classically, only a single texture (or sampler, or buffer) can be
attached to each shader binding. This means that switching materials
requires breaking a batch and issuing a new drawcall, even if the mesh
is otherwise identical. This adds significant overhead not only in the
driver but also in `wgpu`, as switching bind groups increases the amount
of validation work that `wgpu` must do.
*Bindless resources* are the typical solution to this problem. Instead
of switching bindings between each texture, the renderer instead
supplies a large *array* of all textures in the scene up front, and the
material contains an index into that array. This pattern is repeated for
buffers and samplers as well. The renderer now no longer needs to switch
binding descriptor sets while drawing the scene.
Unfortunately, as things currently stand, this approach won't quite work
for Bevy. Two aspects of `wgpu` conspire to make this ideal approach
unacceptably slow:
1. In the DX12 backend, all binding arrays (bindless resources) must
have a constant size declared in the shader, and all textures in an
array must be bound to actual textures. Changing the size requires a
recompile.
2. Changing even one texture incurs revalidation of all textures, a
process that takes time that's linear in the total size of the binding
array.
This means that declaring a large array of textures big enough to
encompass the entire scene is presently unacceptably slow. For example,
if you declare 4096 textures, then `wgpu` will have to revalidate all
4096 textures if even a single one changes. This process can take
multiple frames.
To work around this problem, this PR groups bindless resources into
small *slabs* and maintains a free list for each. The size of each slab
for the bindless arrays associated with a material is specified via the
`#[bindless(N)]` attribute. For instance, consider the following
declaration:
```rust
#[derive(AsBindGroup)]
#[bindless(16)]
struct MyMaterial {
    #[buffer(0)]
    color: Vec4,
    #[texture(1)]
    #[sampler(2)]
    diffuse: Handle<Image>,
}
```
The `#[bindless(N)]` attribute specifies that, if bindless arrays are
supported on the current platform, each resource becomes a binding array
of N instances of that resource. So, for `MyMaterial` above, the `color`
attribute is exposed to the shader as `binding_array<vec4<f32>, 16>`,
the `diffuse` texture is exposed to the shader as
`binding_array<texture_2d<f32>, 16>`, and the `diffuse` sampler is
exposed to the shader as `binding_array<sampler, 16>`. Inside the
material's vertex and fragment shaders, the applicable index is
available via the `material_bind_group_slot` field of the `Mesh`
structure. So, for instance, you can access the current color like so:
```wgsl
// `uniform` binding arrays are a non-sequitur, so `uniform` is automatically promoted
// to `storage` in bindless mode.
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
    ...
}
```
Note that portable shader code can't guarantee that the current platform
supports bindless textures. Indeed, bindless mode is only available in
Vulkan and DX12. The `BINDLESS` shader definition is available for your
use to determine whether you're on a bindless platform or not. Thus a
portable version of the shader above would look like:
```wgsl
#ifdef BINDLESS
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
#else // BINDLESS
@group(2) @binding(0) var<uniform> material_color: Color;
#endif // BINDLESS
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
#ifdef BINDLESS
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
#else // BINDLESS
    let color = material_color;
#endif // BINDLESS
    ...
}
```
Importantly, this PR *doesn't* update `StandardMaterial` to be bindless.
So, for example, `scene_viewer` will currently not run any faster. I
intend to update `StandardMaterial` to use bindless mode in a follow-up
patch.
A new example, `shaders/shader_material_bindless`, has been added to
demonstrate how to use this new feature.
Here's a Tracy profile of `submit_graph_commands` of this patch and an
additional patch (not submitted yet) that makes `StandardMaterial` use
bindless. Red is those patches; yellow is `main`. The scene was Bistro
Exterior with a hack that forces all textures to opaque. You can see a
1.47x mean speedup.

## Migration Guide
* `RenderAssets::prepare_asset` now takes an `AssetId` parameter.
* Bin keys now have Bevy-specific material bind group indices instead of
`wgpu` material bind group IDs, as part of the bindless change. Use the
new `MaterialBindGroupAllocator` to map from bind group index to bind
group ID.
# Objective
- Fixes #16078
## Solution
- Rename things to clarify that we _want_ unclipped depth for
directional light shadow views, and need some way of disabling the GPU's
builtin depth clipping
- Use DEPTH_CLIP_CONTROL instead of the fragment shader emulation on
supported platforms (see the sketch after this list)
- Pass only the clip position depth instead of the whole clip position
between vertex->fragment shader (no idea if this helps performance or
not, compiler might optimize it anyways)
- Meshlets
- HW raster always uses DEPTH_CLIP_CONTROL since it targets a more
limited set of platforms
- SW raster was not handling DEPTH_CLAMP_ORTHO correctly; it ended up
pretty much doing nothing.
- This PR made me realize that SW raster technically should have depth
clipping for all views that are not directional light shadows, but I
decided not to bother writing it. I'm not sure that it ever matters in
practice. If proven otherwise, I can add it.
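A minimal sketch of the platform check mentioned in the solution above, using the `wgpu` types Bevy re-exports (the real pipeline code is more involved):
```rust
use bevy::render::render_resource::PrimitiveState;
use bevy::render::renderer::RenderDevice;
use bevy::render::settings::WgpuFeatures;

// Hedged sketch: when the device exposes DEPTH_CLIP_CONTROL, disable the GPU's
// built-in depth clipping on the shadow-view pipeline; otherwise fall back to
// the fragment-shader emulation (UNCLIPPED_DEPTH_ORTHO_EMULATION).
fn shadow_primitive_state(render_device: &RenderDevice) -> PrimitiveState {
    let depth_clip_control_supported = render_device
        .features()
        .contains(WgpuFeatures::DEPTH_CLIP_CONTROL);
    PrimitiveState {
        unclipped_depth: depth_clip_control_supported,
        ..Default::default()
    }
}
```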
## Testing
- Did you test these changes? If so, how?
- Lighting example. Both opaque (no fragment shader) and alpha masked
geometry (fragment shader emulation) are working with
depth_clip_control, and both work when it's turned off. Also tested
meshlet example.
- Are there any parts that need more testing?
- Performance. I can't figure out a good test scene.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Toggle depth_clip_control_supported in prepass/mod.rs line 323 to turn
this PR on or off.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- Native
---
## Migration Guide
- `MeshPipelineKey::DEPTH_CLAMP_ORTHO` is now
`MeshPipelineKey::UNCLIPPED_DEPTH_ORTHO`
- The `DEPTH_CLAMP_ORTHO` shaderdef has been renamed to
`UNCLIPPED_DEPTH_ORTHO_EMULATION`
- `clip_position_unclamped: vec4<f32>` is now `unclipped_depth: f32`
I didn't mean to make this item private, fixing it for the 0.15 release
to be consistent with 0.14.
(maintainers: please make sure this gets merged into the 0.15 release
branch as well as main)
# Objective
PCSS still has some fundamental issues (#16155). We should resolve them
before "releasing" the feature.
## Solution
1. Rename the already-optional `pbr_pcss` cargo feature to
`experimental_pbr_pcss` to better communicate its state to developers.
2. Adjust the description of the `experimental_pbr_pcss` cargo feature
to better communicate its state to developers.
3. Gate PCSS-related light component fields behind that cargo feature,
to prevent surfacing them to developers by default.
# Objective
_If I understand it correctly_, we were checking mesh visibility, as
well as re-rendering point and spot light shadow maps for each view.
This makes it so that M views and N lights produce M x N complexity.
This PR aims to fix that, as well as introduce a stress test for this
specific scenario.
## Solution
- Keep track of what lights have already had mesh visibility calculated
and do not calculate it again;
- Reuse shadow depth textures and attachments across all views, and only
render shadow maps for the _first_ time a light is encountered on a
view (see the sketch after this list);
- Directional lights remain unaltered, since their shadow map cascades
are view-dependent;
- Add a new `many_cameras_lights` stress test example to verify the
solution
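A simplified sketch of the reuse bookkeeping from the solution above (hypothetical types, not the actual `bevy_pbr` implementation):
```rust
use bevy::prelude::*;
use bevy::utils::HashSet;

/// Hedged sketch: remember which lights already had their shadow maps rendered
/// this frame so later views reuse the existing depth textures and attachments.
#[derive(Resource, Default)]
struct LightsWithRenderedShadowMaps(HashSet<Entity>);

fn should_render_shadow_map(
    rendered: &mut LightsWithRenderedShadowMaps,
    light: Entity,
) -> bool {
    // `insert` returns true only the first time this light is seen this frame.
    rendered.0.insert(light)
}
```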
## Showcase
110% speed up on the stress test
83% reduction of memory usage in stress test
### Before (5.35 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 25 57"
src="https://github.com/user-attachments/assets/136b0785-e9a4-44df-9a22-f99cc465e126">
### After (11.34 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 24 35"
src="https://github.com/user-attachments/assets/b8dd858f-5e19-467f-8344-2b46ca039630">
## Testing
- Did you test these changes? If so, how?
- On my game project where I have two cameras, and many shadow casting
lights I managed to get pretty much double the FPS.
- Also included a stress test, see the comparison above
- Are there any parts that need more testing?
- Yes, I would like help verifying that this fix is indeed correct, and
that we were really re-rendering the shadow maps by mistake and it's
indeed okay to not do that
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the `many_cameras_lights` example
- On the `main` branch, cherry pick the commit with the example (`git
cherry-pick --no-commit 1ed4ace01`) and run it
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- macOS
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Fixes #15940
## Solution
Remove the `pub use` and fix the compile errors.
Make `bevy_image` available as `bevy::image`.
## Testing
Feature Frenzy would be good here! Maybe I'll learn how to use it if I
have some time this weekend, or maybe a reviewer can use it.
## Migration Guide
Use `bevy_image` instead of `bevy_render::texture` items.
---------
Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- wgpu 0.20 made workgroup vars stop being zero-init by default. This
broke some applications (cough foresight cough) and now we work around
it. wgpu exposes a compilation option that zero-initializes workgroup
memory by default, but bevy does not expose it.
## Solution
- expose the compilation option wgpu gives us
## Testing
- ran examples: 3d_scene, compute_shader_game_of_life, gpu_readback,
lines, specialized_mesh_pipeline. they all work
- confirmed fix for our own problems
---
## Migration Guide
- Add `zero_initialize_workgroup_memory: false,` to your
`ComputePipelineDescriptor` or `RenderPipelineDescriptor` structs to
preserve the 0.14 behavior, or add `zero_initialize_workgroup_memory:
true,` to restore the Bevy 0.13 behavior.
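A minimal sketch of the migration, assuming only the new field described above:
```rust
use bevy::render::render_resource::ComputePipelineDescriptor;

// Hedged sketch: given a descriptor you already construct elsewhere, opt back
// into the Bevy 0.13 behavior by zero-initializing workgroup memory.
fn restore_zero_init(mut desc: ComputePipelineDescriptor) -> ComputePipelineDescriptor {
    desc.zero_initialize_workgroup_memory = true;
    desc
}
```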
# Objective
- Fix a formatting issue in "mesh_view_binding.wgsl".
_Note: since the naga-oil preprocessor matches the whole line when it finds an
"#endif", this change only matters for external formatting tools and
consistency._
## Solution
Trivial change.
Add '//' before the closing comment of the "#endif"
# Objective
GPU-based mesh uniform construction in the `GpuPreprocessNode` is
currently in `Core3d`. The node iterates over all views and schedules the
uniform construction for each, so:
- when there are multiple 3d cameras, it runs multiple times on each
view
- if a view wants to render meshes but doesn't use the `Core3d` graph,
the camera must run later than at least one `Core3d`-based camera (or
add the node to its own graph, duplicating the work)
- If views want to share mesh uniforms there is no way to avoid running
the preprocessing for every view
## Solution
- move the node to the top level of the rendergraph, before the camera
driver node
- make the `PreprocessBindGroup` `clone`able, and add a
`SkipGpuPreprocessing` component to allow opting out per view
# Objective
- Choose LOD based on normal simplification error in addition to
position error
- Update meshoptimizer to 0.22, which has a bunch of simplifier
improvements
## Testing
- Did you test these changes? If so, how?
- Visualize normals, and compare LOD changes before and after. Normals
no longer visibly change as the LOD cut changes.
- Are there any parts that need more testing?
- No
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the meshlet example in this PR and on main and move around to
change the LOD cut. Before running each example, in
meshlet_mesh_material.wgsl, replace `let color = vec3(rand_f(&rng),
rand_f(&rng), rand_f(&rng));` with `let color =
(vertex_output.world_normal + 1.0) / 2.0;`. Make sure to download the
appropriate bunny asset for each branch!
# Objective
Order independent transparency can filter fragment writes based on the
alpha value and it is currently hard-coded to anything higher than 0.0.
By making that value configurable, users can optimize fragment writes,
potentially reducing the number of layers needed and improving
performance in favor of some transparency quality.
## Solution
This PR adds `alpha_threshold` to the
OrderIndependentTransparencySettings component and uses the struct to
configure a corresponding shader uniform. This uniform is then used
instead of the hard-coded value.
To configure OIT with a custom alpha threshold, use:
```rust
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3d::default(),
        OrderIndependentTransparencySettings {
            layer_count: 8,
            alpha_threshold: 0.2,
        },
    ));
}
```
## Testing
I tested this change using the included OIT example, as well as with two
additional projects.
## Migration Guide
If you previously explicitly initialized
OrderIndependentTransparencySettings with your own `layer_count`, you
will now have to add either a `..default()` statement or an explicit
`alpha_threshold` value:
```rust
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3d::default(),
        OrderIndependentTransparencySettings {
            layer_count: 16,
            ..default()
        },
    ));
}
```
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
The two additional linear texture samplers that PCSS added caused us to
blow past the limit on Apple Silicon macOS and WebGL. To fix the issue,
this commit gates PCSS behind a new `pbr_pcss` cargo feature and disables
it when that feature isn't enabled.
Closes #15345.
Closes #15525.
Closes #15821.
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
The PCSS PR #13497 increased the size of clusterable objects from 64
bytes to 80 bytes but didn't decrease the UBO size to compensate, so we
blew past the 16kB limit on WebGL 2. This commit fixes the issue by
lowering the maximum number of clusterable objects to 204, which puts us
under the 16kB limit again.
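(As a sanity check on the arithmetic: 204 clusterable objects at 80 bytes each occupy 16,320 bytes, just under the 16,384-byte limit, while 205 would require 16,400 bytes.)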
Closes #15998.
# Objective
- Make the meshlet fill cluster buffers pass slightly faster
- Address https://github.com/bevyengine/bevy/issues/15920 for meshlets
- Added PreviousGlobalTransform as a required meshlet component to avoid
extra archetype moves, slightly alleviating
https://github.com/bevyengine/bevy/issues/14681 for meshlets
- Enforce that MeshletPlugin::cluster_buffer_slots is not greater than
2^25 (glitches will occur otherwise). Technically this field controls
post-lod/culling cluster count, and the issue is on pre-lod/culling
cluster count, but it's still valid now, and in the future this will be
more true.
Needs to be merged after https://github.com/bevyengine/bevy/pull/15846
and https://github.com/bevyengine/bevy/pull/15886
## Solution
- Old pass dispatched a thread per cluster, and did a binary search over
the instances to find which instance the cluster belongs to, and what
meshlet index within the instance it is.
- New pass dispatches a workgroup per instance, and has the workgroup
loop over all meshlets in the instance in order to write out the cluster
data.
- Use a push constant instead of arrayLength to fix the linked bug
- Remap 1d->2d dispatch for software raster only if actually needed to
save on spawning excess workgroups
## Testing
- Did you test these changes? If so, how?
- Ran the meshlet example, and an example with 1041 instances of 32217
meshlets per instance. Profiled the second scene with nsight, went from
0.55ms -> 0.40ms. Small savings. We're pretty much VRAM bandwidth bound
at this point.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the meshlet example
## Changelog (non-meshlets)
- PreviousGlobalTransform now implements the Default trait
Take a bunch more improvements from @zeux's nanite.cpp code.
* Use position-only vertices (discard other attributes) to determine
meshlet connectivity for grouping
* Rather than using the lock borders flag when simplifying meshlet
groups, provide the locked vertices ourselves. The lock borders flag
locks the entire border of the meshlet group, but really we only want to
lock the edges between meshlet groups - outwards facing edges are fine
to unlock. This gives a really significant increase to the DAG quality.
* Add back stuck meshlets (group has only a single meshlet,
simplification failed) to the simplification queue to allow them to get
used later on and have another attempt at simplifying
* Target 8 meshlets per group instead of 4 (second biggest improvement
after manual locks)
* Provide a seed to metis for deterministic meshlet building
* Misc other improvements
We can remove the usage of unsafe after the next upstream meshopt
release, but for now we need to use the ffi function directly. I'll do
another round of improvements later, mainly attribute-aware
simplification and using spatial weights for meshlet grouping.
Need to merge https://github.com/bevyengine/bevy/pull/15846 first.
# Objective
- I made a mistake in #15902, specifically [this
diff](e2faedb99c)
-- the `point_light_count` variable is used for all point lights, not
just shadow mapped ones, so I cannot add `.min(max_texture_cubes)`
there. (Despite `spot_light_count` having `.min(..)`)
It may have broken code like this (where `index` is index of
`point_light` vec):
9930df83ed/crates/bevy_pbr/src/render/light.rs (L848-L850)
and also causes panic here:
9930df83ed/crates/bevy_pbr/src/render/light.rs (L1173-L1174)
## Solution
- Adds `.min(max_texture_cubes)` directly to the loop where texture
views for point lights are created.
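A simplified sketch of the distinction the solution relies on (not the actual `bevy_pbr` code):
```rust
// Hedged sketch: clamp only the number of cube shadow-map views that get
// created, while `point_light_count` itself stays unclamped so the rest of the
// lighting code still sees every point light.
fn cube_views_to_create(point_light_count: usize, max_texture_cubes: usize) -> usize {
    point_light_count.min(max_texture_cubes)
}
```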
## Testing
- `lighting` example (with the directional light removed; original
example doesn't crash as only 1 directional-or-spot light in total is
shadow-mapped on webgl) no longer crashes on webgl
# Objective
1. Prevent weird glitches with stray pixels scattered around the scene

2. Prevent weird glitchy full-screen triangles that pop up and destroy
perf (SW rasterizing huge triangles is slow)

## Solution
1. Use floating point math in the SW rasterizer bounding box calculation
to handle negative vertices, and add backface culling
2. Force hardware raster for clusters that clip the near plane, and let
the hardware rasterizer handle the clipping
I also adjusted the SW rasterizer threshold to < 64 pixels (little bit
better perf in my test scene, but still need to do a more comprehensive
test), and enabled backface culling for the hardware raster pipeline.
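For reference, a hedged sketch of the screen-space backface test; the sign convention depends on the winding order and coordinate conventions in use:
```rust
// Hedged sketch: a screen-space triangle whose signed area is non-positive is
// backfacing under a counter-clockwise front-face convention and can be
// skipped by the software rasterizer.
fn is_backfacing(a: [f32; 2], b: [f32; 2], c: [f32; 2]) -> bool {
    let signed_area = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]);
    signed_area <= 0.0
}
```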
## Testing
- Did you test these changes? If so, how?
- Yes, on an example scene. Issues no longer occur.
- Are there any parts that need more testing?
- No.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the meshlet example.
# Objective
Bevy seems to want to standardize on "American English" spellings. Not
sure if this is laid out anywhere in writing, but see also #15947.
While perusing the docs for `typos`, I noticed that it has a `locale`
config option and tried it out.
## Solution
Switch to `en-us` locale in the `typos` config and run `typos -w`
## Migration Guide
The following methods or fields have been renamed from `*dependants*` to
`*dependents*`.
- `ProcessorAssetInfo::dependants`
- `ProcessorAssetInfos::add_dependant`
- `ProcessorAssetInfos::non_existent_dependants`
- `AssetInfo::dependants_waiting_on_load`
- `AssetInfo::dependants_waiting_on_recursive_dep_load`
- `AssetInfos::loader_dependants`
- `AssetInfos::remove_dependants_and_labels`
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/15871
(Camera is done in #15946)
## Solution
- Do the same as #15904 for other extraction systems
- Added missing `SyncComponentPlugin` for DOF, TAA, and SSAO
(According to the
[documentation](https://dev-docs.bevyengine.org/bevy/render/sync_component/struct.SyncComponentPlugin.html),
this plugin "needs to be added for manual extraction implementations."
We may need to check this is done.)
## Testing
Modified the examples locally to add toggles where they didn't exist.
- [x] DOF - toggling DOF component and perspective in `depth_of_field`
example
- [x] TAA - toggling `Camera.is_active` and TAA component
- [x] clusters - not entirely sure, toggling `Camera.is_active` in
`many_lights` example (no crash/glitch even without this PR)
- [x] previous_view - toggling `Camera.is_active` in `skybox` (no
crash/glitch even without this PR)
- [x] lights - toggling `Visibility` of `DirectionalLight` in `lighting`
example
- [x] SSAO - toggling `Camera.is_active` and SSAO component in `ssao`
example
- [x] default UI camera view - toggling `Camera.is_active` (nop without
#15946 because UI defaults to some camera even if `DefaultCameraView` is
not there)
- [x] volumetric fog - toggling existence of volumetric light. Looks
like optimization, no change in behavior/visuals
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/13552
## Solution
- Thanks for the guidance from @DGriffin91, the current solution is to
transmit the light_map through the emissive channel to avoid increasing
the bandwidth of deferred shading.
- <del>Store lightmap sample result into G-Buffer and pass them into the
`Deferred Lighting Pipeline`, therefore we can get the correct indirect
lighting via the `apply_pbr_lighting` function.</del>
- <del>The original G-Buffer lacks storage for lightmap data, therefore
a new buffer is added. We can only use Rgba16Uint here due to the
32-byte limit on the render targets.</del>
## Testing
- Need to test all the examples that contain a prepass, with both the
forward and deferred rendering modes.
- I have tested the ones below.
- `lightmaps` (adjust the code based on the issue and check the
rendering result)
- `transmission` (it contains a prepass)
- `ssr` (it also uses the G-Buffer)
- `meshlet` (forward and deferred)
- `pbr`
## Showcase
By updating the `lightmaps` example to use deferred rendering, this pull
request enables correct rendering of the Cornell Box.
```diff
diff --git a/examples/3d/lightmaps.rs b/examples/3d/lightmaps.rs
index 564a3162b..11a748fba 100644
--- a/examples/3d/lightmaps.rs
+++ b/examples/3d/lightmaps.rs
@@ -1,12 +1,14 @@
//! Rendering a scene with baked lightmaps.
-use bevy::pbr::Lightmap;
+use bevy::core_pipeline::prepass::DeferredPrepass;
+use bevy::pbr::{DefaultOpaqueRendererMethod, Lightmap};
use bevy::prelude::*;
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.insert_resource(AmbientLight::NONE)
+ .insert_resource(DefaultOpaqueRendererMethod::deferred())
.add_systems(Startup, setup)
.add_systems(Update, add_lightmaps_to_meshes)
.run();
@@ -19,10 +21,12 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
..default()
});
- commands.spawn(Camera3dBundle {
- transform: Transform::from_xyz(-278.0, 273.0, 800.0),
- ..default()
- });
+ commands
+ .spawn(Camera3dBundle {
+ transform: Transform::from_xyz(-278.0, 273.0, 800.0),
+ ..default()
+ })
+ .insert(DeferredPrepass);
}
fn add_lightmaps_to_meshes(
```
<img width="1280" alt="image"
src="https://github.com/user-attachments/assets/17fd3367-61cc-4c23-b956-e7cfc751af3c">
## Emissive Issue
**The emissive light object appears incorrectly rendered because the
alpha channel of emission is set to 1 in deferred rendering and 0 in
forward rendering, leading to different emissive light result. Could
this be a bug?**
```wgsl
// pbr_deferred_functions.wgsl - pbr_input_from_deferred_gbuffer
let emissive = rgb9e5::rgb9e5_to_vec3_(gbuffer.g);
if ((pbr.material.flags & STANDARD_MATERIAL_FLAGS_UNLIT_BIT) != 0u) {
    pbr.material.base_color = vec4(emissive, 1.0);
    pbr.material.emissive = vec4(vec3(0.0), 1.0);
} else {
    pbr.material.base_color = vec4(pow(base_rough.rgb, vec3(2.2)), 1.0);
    pbr.material.emissive = vec4(emissive, 1.0);
}
// pbr_functions.wgsl - apply_pbr_lighting
emissive_light = emissive_light * mix(1.0, view_bindings::view.exposure, emissive.a);
```
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
# Objective
Closes #15799.
Many rendering people and maintainers are in favor of reverting default
mesh materials added in #15524, especially as the migration to required
components is already large and heavily breaking.
## Solution
Revert default mesh materials, and adjust docs accordingly.
- Remove `extract_default_materials`
- Remove `clear_material_instances`, and move the logic back into
`extract_mesh_materials`
- Remove `HasMaterial2d` and `HasMaterial3d`
- Change default material handles back to pink instead of white
- 2D uses `Color::srgb(1.0, 0.0, 1.0)`, while 3D uses `Color::srgb(1.0,
0.0, 0.5)`. Not sure if this is intended.
There is now no indication at all about missing materials for `Mesh2d`
and `Mesh3d`. Having a mesh without a material renders nothing.
## Testing
I ran `2d_shapes`, `mesh2d_manual`, and `3d_shapes`, with and without
mesh material components.
# Objective
- Fixes #15897
## Solution
- Despawn light view entities when they go unused or when the
corresponding view is not alive.
## Testing
- `scene_viewer` example no longer prints "The preprocessing index
buffer wasn't present" warning
- modified an example to try toggling shadows for all kinds of light:
https://gist.github.com/akimakinai/ddb0357191f5052b654370699d2314cf
# Objective
Another clippy-lint fix: the goal is that `ci lints` (when run on nightly)
actually displays the problems that a contributor caused, and not a bunch
of existing stuff in the repo.
## Solution
This fixes all but the `clippy::needless_lifetimes` lint, which will
result in substantially more fixes and be in other PR(s). I also
explicitly allow `non_local_definitions` since it is [not working
correctly, but will be
fixed](https://github.com/rust-lang/rust/issues/131643).
A few things were manually fixed: for example, some places had an
explicitly defined `div_ceil` function that was used, which is no longer
needed since this function is stable on unsigned integers. Also, empty
lines in doc comments were handled individually.
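For example, a hand-rolled helper can now defer to the standard-library method (stable for unsigned integers since Rust 1.73):
```rust
// `u32::div_ceil` from the standard library replaces the bespoke helper.
fn workgroup_count(total_invocations: u32, workgroup_size: u32) -> u32 {
    total_invocations.div_ceil(workgroup_size)
}
```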
## Testing
I ran `cargo clippy --workspace --all-targets --all-features --fix
--allow-staged` with the `clippy::needless_lifetimes` lint marked as
`allow` in `Cargo.toml` to avoid fixing that too. It now passes with all
but the listed lint.
# Objective
#15320 is a particularly painful breaking change, and the new
`RenderEntity` in particular is very noisy, with a lot of `let entity =
entity.id()` spam.
## Solution
Implement `WorldQuery`, `QueryData` and `ReadOnlyQueryData` for
`RenderEntity` and `WorldEntity`.
These work the same as the `Entity` impls from a user-facing
perspective: they simply return an owned (copied) `Entity` identifier.
This dramatically reduces noise and eases migration.
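A hedged sketch of what this looks like in an extraction system; `MyMarker` and `MyExtracted` are hypothetical components used only for illustration:
```rust
use bevy::prelude::*;
use bevy::render::sync_world::RenderEntity;
use bevy::render::Extract;

#[derive(Component)]
struct MyMarker;

#[derive(Component)]
struct MyExtracted;

// With the new `QueryData` impl, querying `RenderEntity` yields a plain,
// copied `Entity`, so there is no `let entity = entity.id()` boilerplate.
fn extract_my_things(
    mut commands: Commands,
    query: Extract<Query<RenderEntity, With<MyMarker>>>,
) {
    for render_entity in &query {
        commands.entity(render_entity).insert(MyExtracted);
    }
}
```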
Under the hood, these impls defer to the implementations for `&T` for
everything other than the "call .id() for the user" bit, as they involve
read-only access to component data. Doing it this way (as opposed to
implementing a custom fetch, as tried in the first commit) dramatically
reduces the maintenance risk of complex unsafe code outside of
`bevy_ecs`.
To make this easier (and encourage users to do this themselves!), I've
made `ReadFetch` and `WriteFetch` slightly more public: they're no
longer `doc(hidden)`. This is a good change, since trying to vendor the
logic is much worse than just deferring to the existing tested impls.
## Testing
I've run a handful of rendering examples (breakout, alien_cake_addict,
auto_exposure, fog_volumes, box_shadow) and nothing broke.
## Follow-up
We should lint for the uses of `&RenderEntity` and `&MainEntity` in
queries: this is just less nice for no reason.
---------
Co-authored-by: Trashtalk217 <trashtalk217@gmail.com>
# Objective
In the Render World, there are a number of collections that are derived
from Main World entities and are used to drive rendering. The most
notable are:
- `VisibleEntities`, which is generated in the `check_visibility` system
and contains visible entities for a view.
- `ExtractedInstances`, which maps entity ids to asset ids.
In the old model, these collections were trivially kept in sync -- any
extracted phase item could look itself up because the render entity id
was guaranteed to always match the corresponding main world id.
After #15320, this became much more complicated, and was leading to a
number of subtle bugs in the Render World. The main rendering systems,
i.e. `queue_material_meshes` and `queue_material2d_meshes`, follow a
similar pattern:
```rust
for visible_entity in visible_entities.iter::<With<Mesh2d>>() {
    let Some(mesh_instance) = render_mesh_instances.get_mut(visible_entity) else {
        continue;
    };
    // Look some more stuff up and specialize the pipeline...
    let bin_key = Opaque2dBinKey {
        pipeline: pipeline_id,
        draw_function: draw_opaque_2d,
        asset_id: mesh_instance.mesh_asset_id.into(),
        material_bind_group_id: material_2d.get_bind_group_id().0,
    };
    opaque_phase.add(
        bin_key,
        *visible_entity,
        BinnedRenderPhaseType::mesh(mesh_instance.automatic_batching),
    );
}
```
In this case, `visible_entities` and `render_mesh_instances` are both
collections that are created and keyed by Main World entity ids, and so
this lookup happens to work by coincidence. However, there is a major
unintentional bug here: namely, because `visible_entities` is a
collection of Main World ids, the phase item being queued is created
with a Main World id rather than its correct Render World id.
This happens to not break mesh rendering because the render commands
used for drawing meshes do not access the `ItemQuery` parameter, but
demonstrates the confusion that is now possible: our UI phase items are
correctly being queued with Render World ids while our meshes aren't.
Additionally, this makes it very easy and error prone to use the wrong
entity id to look up things like assets. For example, if instead we
ignored visibility checks and queued our meshes via a query, we'd have
to be extra careful to use `&MainEntity` instead of the natural
`Entity`.
## Solution
Make all collections that are derived from Main World data use
`MainEntity` as their key, to ensure type safety and avoid accidentally
looking up data with the wrong entity id:
```rust
pub type MainEntityHashMap<V> = hashbrown::HashMap<MainEntity, V, EntityHash>;
```
Additionally, we make all `PhaseItem` be able to provide both their Main
and Render World ids, to allow render phase implementors maximum
flexibility as to what id should be used to look up data.
You can think of this like tracking at the type level whether something
in the Render World should use its "primary key", i.e. entity id, or
needs to use a foreign key, i.e. `MainEntity`.
## Testing
##### TODO:
This will require extensive testing to make sure things didn't break!
Additionally, some extraction logic has become more complicated and
needs to be checked for regressions.
## Migration Guide
With the advent of the retained render world, collections that contain
references to `Entity` that are extracted into the render world have
been changed to contain `MainEntity` in order to prevent errors where a
render world entity id is used to look up an item by accident. Custom
rendering code may need to be changed to query for `&MainEntity` in
order to look up the correct item from such a collection. Additionally,
users who implement their own extraction logic for collections of main
world entities should strongly consider extracting into a different
collection that uses `MainEntity` as a key.
Additionally, render phases now require specifying both the `Entity` and
`MainEntity` for a given `PhaseItem`. Custom render phases should ensure
`MainEntity` is available when queuing a phase item.
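A hedged sketch of the lookup pattern described above; `MyInstances` is a hypothetical `MainEntity`-keyed resource used only for illustration:
```rust
use bevy::prelude::*;
use bevy::render::sync_world::MainEntity;
use bevy::utils::HashMap;

#[derive(Resource, Default)]
struct MyInstances(HashMap<MainEntity, u32>);

// Render-world system: the render entity is the "primary key" used for the
// phase item, while `MainEntity` is the foreign key into extracted data.
fn queue_my_items(items: Query<(Entity, &MainEntity)>, instances: Res<MyInstances>) {
    for (render_entity, main_entity) in &items {
        let Some(instance) = instances.0.get(main_entity) else {
            continue;
        };
        // A real implementation would queue a phase item here, using
        // `render_entity` as the render-world id and `*main_entity` as the key.
        info!("queueing {render_entity} with extracted instance {instance}");
    }
}
```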
# Objective
- Closes #15716
- Closes #15718
## Solution
- Replace `Handle<MeshletMesh>` with a new `MeshletMesh3d` component
- As expected there were some random things that needed fixing:
- A couple of tests were storing handles just to prevent them from being
dropped, I believe, which seems to have been unnecessary in some cases.
- The `SpriteBundle` still had a `Handle<Image>` field. I've removed
this.
- Tests in `bevy_sprite` incorrectly added a `Handle<Image>` field
outside of the `Sprite` component.
- A few examples were still inserting `Handle`s, switched those to their
corresponding wrappers.
- 2 examples that were still querying for `Handle<Image>` were changed
to query `Sprite`
## Testing
- I've verified that the changed example work now
## Migration Guide
`Handle` can no longer be used as a `Component`. All existing Bevy types
using this pattern have been wrapped in their own semantically
meaningful type. You should do the same for any custom `Handle`
components your project needs.
The `Handle<MeshletMesh>` component is now `MeshletMesh3d`.
The `WithMeshletMesh` type alias has been removed. Use
`With<MeshletMesh3d>` instead.
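A hedged sketch of the migration; the module path and asset path shown here are assumptions and may differ in your project:
```rust
use bevy::pbr::experimental::meshlet::{MeshletMesh, MeshletMesh3d};
use bevy::prelude::*;

// Previously, spawning the bare `Handle<MeshletMesh>` was enough because
// `Handle` was a `Component`; now the handle is wrapped in `MeshletMesh3d`.
fn spawn_meshlet_mesh(mut commands: Commands, asset_server: Res<AssetServer>) {
    // Hypothetical asset path, purely for illustration.
    let handle: Handle<MeshletMesh> = asset_server.load("models/bunny.meshlet_mesh");
    commands.spawn((MeshletMesh3d(handle), Transform::default()));
}
```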
# Objective
- Another step towards #15716
- Remove trait implementations that are dependent on `Handle<T>` being a
`Component`
## Solution
- Remove unused `ExtractComponent` trait implementation for `Handle<T>`
- Remove unused `ExtractInstance` trait implementation for `AssetId`
- Although the `ExtractInstance` trait wasn't used, the `AssetId`s were
being stored inside of `ExtractedInstances` which has an
`ExtractInstance` trait bound on its contents.
I've upgraded the `RenderMaterialInstances` type alias to be its own
resource, identical to `ExtractedInstances<AssetId<M>>` to get around
that with minimal breakage.
## Testing
Tested `many_cubes`, rendering did not explode
# Objective
- Closes #15752
Calling the functions `App::observe` and `World::observe` doesn't make
sense because you're not "observing" the `App` or `World`, you're adding
an observer that listens for an event that occurs *within* the `World`.
We should rename them to better fit this.
## Solution
Renames:
- `App::observe` -> `App::add_observer`
- `World::observe` -> `World::add_observer`
- `Commands::observe` -> `Commands::add_observer`
- `EntityWorldMut::observe_entity` -> `EntityWorldMut::observe`
(Note this isn't a breaking change as the original rename was introduced
earlier this cycle.)
## Testing
Reusing current tests.
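A hedged sketch of the renamed API in use; the `Explode` event is hypothetical:
```rust
use bevy::prelude::*;

#[derive(Event)]
struct Explode;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Previously `App::observe`; the observer itself is unchanged.
        .add_observer(|trigger: Trigger<Explode>| {
            info!("Explode triggered on {}", trigger.entity());
        })
        .run();
}
```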