# Objective
Fixes: #16578
## Solution
This is a patch fix; the proper fix requires a breaking change.
Added a `Panic` enum variant and use it as the system meta default.
Warn-once behavior can be enabled the same way panicking (originally,
warning) is disabled.
To work around an issue with the current architecture, where **all**
combinator system params get checked together, combinator systems now
only check the params of the first system. This results in the old,
panicking behavior on subsequent systems; it will be fixed in 0.16.
## Testing
Ran unit tests and `fallible_params` example.
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
This commit fixes the following regressions:
1. Transmission-specific calls to shader lighting functions didn't pass
the `enable_diffuse` parameter, breaking the `transmission` example.
2. The combination of bindless `StandardMaterial` and bindless lightmaps
caused us to blow past the 128 texture limit on M1/M2 chips in some
cases, in particular the `depth_of_field` example.
https://github.com/gfx-rs/wgpu/issues/3334 should fix this, but in the
meantime this patch reduces the number of bindless lightmaps from 16 to
4 in order to stay under the limit.
3. The renderer was crashing on startup on Adreno 610 chips. This PR
simply disables bindless on Adreno 610 and lower.
# Objective
`EntityHashMap` and `EntityHashSet` iterators do not implement
`EntitySetIterator`.
## Solution
Make them newtypes instead of aliases. The methods that create the
iterators can then produce their own newtypes that carry the `Hasher`
generic and implement `EntitySetIterator`. Functionality remains the
same otherwise.
There are some other small benefits, e.g. the removal of `with_hasher`
associated functions, and the ability to implement more traits
ourselves.
`MainEntityHashMap` and `MainEntityHashSet` are currently left as the
previous type aliases, because supporting general `TrustedEntityBorrow`
hashing is more complex. However, it can also be done.
## Testing
Pre-existing `EntityHashMap` tests.
## Migration Guide
Users of `with_hasher` and `with_capacity_and_hasher` on
`EntityHashMap`/`Set` must now use `new` and `with_capacity`
respectively.
If the non-newtyped versions are required, they can be obtained via
`Deref`, `DerefMut` or `into_inner` calls.
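As a quick sketch of the migration (the `bevy::ecs::entity` paths and the value type are illustrative):
```rust
use bevy::ecs::entity::{Entity, EntityHashMap};

// Before: EntityHashMap::with_capacity_and_hasher(16, EntityHash)
// After: the newtype chooses the hasher itself.
fn build_map() -> EntityHashMap<u32> {
    let mut map = EntityHashMap::with_capacity(16);
    map.insert(Entity::PLACEHOLDER, 42); // plain HashMap methods via DerefMut
    map
}
```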
# Objective
When preparing `GpuImage`s, we currently discard the
`depth_or_array_layers` of the `Image`'s size by converting it into a
`UVec2`.
Fixes #16715.
## Solution
Change `GpuImage::size` to `Extent3d`, and just pass that through when
creating `GpuImage`s.
Also copy the `aspect_ratio` and `size` (now `size_2d` for
disambiguation from the field) functions from `Image` to `GpuImage` for
ease of use with 2D textures.
I originally copied all size-related functions (like `width` and
`height`), but I think they are unnecessary considering how visible the
`size` field on `GpuImage` is compared to `Image`.
## Testing
Tested via `cargo r -p ci` for everything except docs; when generating
docs it keeps spitting out a ton of
```
error[E0554]: `#![feature]` may not be used on the stable release channel
--> crates/bevy_dylib/src/lib.rs:1:21
|
1 | #![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
```
Not sure why this is happening, but it also happens without my changes,
so it's almost certainly some strange issue specific to my machine.
## Migration Guide
- `GpuImage::size` is now an `Extent3d`. To easily get 2D size, use
`size_2d()`.
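For illustration, a minimal sketch of reading the new field (the `GpuImage` import path is assumed):
```rust
use bevy::render::texture::GpuImage;

// Sketch: the size field is now an Extent3d, and size_2d() gives a UVec2.
fn describe(gpu_image: &GpuImage) -> String {
    let size_2d = gpu_image.size_2d();
    let layers = gpu_image.size.depth_or_array_layers; // no longer discarded
    format!("{}x{}, {} layer(s)", size_2d.x, size_2d.y, layers)
}
```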
The only thing that was preventing `extract_meshes_for_gpu_building` and
`extract_mesh_materials` from running in parallel was the
`ResMut<RenderMeshMaterialIds>`. This lookup can be safely moved to the
`collect_meshes_for_gpu_building` phase, which runs after the extraction
phase.
This results in a small win on `many_cubes`. `extract_mesh_materials` is
currently nonretained, so it's still slow, but running it in parallel is
an easy win.
This PR adds support for *mixed lighting* to Bevy, whereby some parts of
the scene are lightmapped, while others take part in real-time lighting.
(Here *real-time lighting* means lighting at runtime via the PBR shader,
as opposed to precomputed light using lightmaps.) It does so by adding a
new field, `affects_lightmapped_meshes` to `IrradianceVolume` and
`AmbientLight`, and a corresponding field
`affects_lightmapped_mesh_diffuse` to `DirectionalLight`, `PointLight`,
`SpotLight`, and `EnvironmentMapLight`. By default, this value is set to
true; when set to false, the light contributes nothing to the diffuse
irradiance component of meshes with lightmaps.
Note that specular light is unaffected. This is because the correct way
to bake specular lighting is *directional lightmaps*, which we have no
support for yet.
There are two general ways I expect this field to be used:
1. When diffuse indirect light is baked into lightmaps, irradiance
volumes and reflection probes shouldn't contribute any diffuse light to
the static geometry that has a lightmap. That's because the baking tool
should have already accounted for it, and in a higher-quality fashion,
as lightmaps typically offer a higher effective texture resolution than
the light probe does.
2. When direct diffuse light is baked into a lightmap, punctual lights
shouldn't contribute any diffuse light to static geometry with a
lightmap, to avoid double-counting. It may seem odd to bake *direct*
light into a lightmap, as opposed to indirect light. But there is a use
case: in a scene with many lights, avoiding light leaks requires shadow
mapping, which quickly becomes prohibitive when many lights are
involved. Baking lightmaps allows light leaks to be eliminated on static
geometry.
A new example, `mixed_lighting`, has been added. It demonstrates a sofa
(model from the [glTF Sample Assets]) that has been lightmapped offline
using [Bakery]. It has four modes:
1. In *baked* mode, all objects are locked in place, and all the diffuse
direct and indirect light has been calculated ahead of time. Note that
the bottom of the sphere has a red tint from the sofa, illustrating that
the baking tool captured indirect light for it.
2. In *mixed direct* mode, lightmaps capturing diffuse direct and
indirect light have been pre-calculated for the static objects, but the
dynamic sphere has real-time lighting. Note that, because the diffuse
lighting has been entirely pre-calculated for the scenery, the dynamic
sphere casts no shadow. In a real app, you would typically use real-time
lighting for the most important light so that dynamic objects can shadow
the scenery and relegate baked lighting to the less important lights for
which shadows aren't as important. Also note that there is no red tint
on the sphere, because there is no global illumination applied to it. In
an actual game, you could fix this problem by supplementing the
lightmapped objects with an irradiance volume.
3. In *mixed indirect* mode, all direct light is calculated in
real-time, and the static objects have pre-calculated indirect lighting.
This corresponds to the mode that most applications are expected to use.
Because direct light on the scenery is computed dynamically, shadows are
fully supported. As in mixed direct mode, there is no global
illumination on the sphere; in a real application, irradiance volumes
could be used to supplement the lightmaps.
4. In *real-time* mode, no lightmaps are used at all, and all punctual
lights are rendered in real-time. No global illumination exists.
In the example, you can click around to move the sphere, unless you're
in baked mode, in which case the sphere must be locked in place to be
lit correctly.
## Migration guide
* The `AmbientLight` resource, the `IrradianceVolume` component, and the
`EnvironmentMapLight` component now have `affects_lightmapped_meshes`
fields. If you don't need to use that field (for example, if you aren't
using lightmaps), you can safely set the field to true.
* `DirectionalLight`, `PointLight`, and `SpotLight` now have
`affects_lightmapped_mesh_diffuse` fields. If you don't need to use that
field (for example, if you aren't using lightmaps), you can safely set
the field to true.
[glTF Sample Assets]:
https://github.com/KhronosGroup/glTF-Sample-Assets/tree/main
[Bakery]:
https://geom.io/bakery/wiki/index.php?title=Bakery_-_GPU_Lightmapper
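For example, a hedged sketch of the second use case above (only the new field comes from this PR; everything else is defaulted):
```rust
use bevy::prelude::*;

// Sketch: a light whose diffuse contribution is baked into lightmaps,
// so it should not double-count on lightmapped meshes.
fn setup(mut commands: Commands) {
    commands.spawn(DirectionalLight {
        affects_lightmapped_mesh_diffuse: false,
        ..default()
    });
}
```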
This commit allows Bevy to bind 16 lightmaps at a time, if the current
platform supports bindless textures. Naturally, if bindless textures
aren't supported, Bevy falls back to binding only a single lightmap at a
time. As lightmaps are usually heavily atlased, I doubt many scenes will
use more than 16 lightmap textures.
This has little performance impact now, but it's desirable for us to
reap the benefits of multidraw and bindless textures on scenes that use
lightmaps. Otherwise, we might have to break batches in order to switch
those lightmaps.
Additionally, this PR slightly reduces the cost of binning because it
makes the lightmap index in `Opaque3dBinKey` 32 bits instead of an
`AssetId`.
## Migration Guide
* The `Opaque3dBinKey::lightmap_image` field is now
`Opaque3dBinKey::lightmap_slab`, which is a lightweight identifier for
an entire binding array of lightmaps.
This patch replaces the undocumented `NoGpuCulling` component with a new
component, `NoIndirectDrawing`, effectively turning indirect drawing on
by default. Indirect mode is needed for the recently-landed multidraw
feature (#16427). Since multidraw is such a win for performance, when
that feature is supported the small performance tax that indirect mode
incurs is virtually always worth paying.
To ensure that custom drawing code such as that in the
`custom_shader_instancing` example continues to function, this commit
additionally makes GPU culling take the `NoFrustumCulling` component
into account.
This PR is an alternative to #16670 that doesn't break the
`custom_shader_instancing` example. **PR #16755 should land first in
order to avoid breaking deferred rendering, as multidraw currently
breaks it**.
## Migration Guide
* Indirect drawing (GPU culling) is now enabled by default, so the
`GpuCulling` component is no longer available. To disable indirect mode,
which may be useful with custom render nodes, add the new
`NoIndirectDrawing` component to your camera.
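A minimal sketch of the opt-out (the module path for `NoIndirectDrawing` is assumed):
```rust
use bevy::prelude::*;
use bevy::render::view::NoIndirectDrawing; // path assumed

// Sketch: disable indirect drawing for a camera that feeds custom
// render nodes which don't understand indirect draws.
fn setup(mut commands: Commands) {
    commands.spawn((Camera3d::default(), NoIndirectDrawing));
}
```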
This commit resolves most of the failures seen in #16670. It contains
two major fixes:
1. The prepass shaders weren't updated for bindless mode, so they were
accessing `material` as a single element instead of as an array. I added
the needed `BINDLESS` check.
2. If the mesh didn't support batch set keys (i.e. `get_batch_set_key()`
returns `None`), and multidraw was enabled, the batching logic would try
to multidraw all the meshes in a bin together instead of disabling
multidraw. This is because we checked whether the `Option<BatchSetKey>`
for the previous batch was equal to the `Option<BatchSetKey>` for the
next batch to determine whether objects could be multidrawn together,
which would return true if batch set keys were absent, causing an entire
bin to be multidrawn together. This patch fixes the logic so that
multidraw is only enabled if the batch set keys match *and are `Some`*.
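In sketch form, the corrected comparison looks something like this (illustrative, not the literal batching code):
```rust
// Multidraw is allowed only when both batch set keys are present and equal;
// two `None`s no longer count as a match.
fn can_multidraw<K: PartialEq>(prev: &Option<K>, next: &Option<K>) -> bool {
    matches!((prev, next), (Some(a), Some(b)) if a == b)
}
```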
Additionally, this commit adds batch key support for bins that use
`Opaque3dNoLightmapBinKey`, which in practice means prepasses.
Consequently, this patch enables multidraw for the prepass when GPU
culling is enabled.
When testing this patch, try adding `GpuCulling` to the camera in the
`deferred_rendering` and `ssr` examples. You can see that these examples
break without this patch and work properly with it.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Updating dependencies; adopted version of #15696. (Supersedes #15696.)
Long answer: hashbrown is no longer using ahash by default, meaning that
we can't use the default-hasher methods with `ahash`. So, we have to use
the longer-winded versions instead. This takes the opportunity to also
switch our default hasher as well, but without actually enabling the
default-hasher feature for hashbrown, meaning that we'll be able to
change our hasher more easily at the cost of all of these method calls
being obnoxious forever.
One large change from 0.15 is that `insert_unique_unchecked` is now
`unsafe`, and for cases where unsafe code was denied at the crate level,
I replaced it with `insert`.
## Migration Guide
`bevy_utils` has updated its version of `hashbrown` to 0.15 and now
defaults to `foldhash` instead of `ahash`. This means that if you've
hard-coded your hasher to `bevy_utils::AHasher` or separately used the
`ahash` crate in your code, you may need to switch to `foldhash` to
ensure that everything works like it does in Bevy.
This commit makes skinned meshes batchable on platforms other than WebGL
2. On supported platforms, it replaces the two uniform buffers used for
joint matrices with a pair of storage buffers containing all matrices
for all skinned meshes packed together. The indices into the buffer are
stored in the mesh uniform and mesh input uniform. The GPU mesh
preprocessing step copies the indices in if that step is enabled.
On the `many_foxes` demo, I observed a frame time decrease from 15.470ms
to 11.935ms. This is the result of reducing the `submit_graph_commands`
time from an average of 5.45ms to 0.489ms, an 11x speedup in that
portion of rendering.
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
This commit makes `StandardMaterial` use bindless textures, as
implemented in PR #16368. Non-bindless mode, as used for example in
Metal and WebGL 2, remains fully supported via a plethora of `#ifdef
BINDLESS` preprocessor definitions.
Unfortunately, this PR introduces quite a bit of unsightliness into the
PBR shaders. This is a result of the fact that WGSL supports neither
passing binding arrays to functions nor passing individual *elements* of
binding arrays to functions, except directly to texture sample
functions. Thus we're unable to use the `sample_texture` abstraction
that helped abstract over the meshlet and non-meshlet paths. I don't
think there's anything we can do to help this other than to suggest
improvements to upstream Naga.
This patch makes shadows use multidraw when the camera they'll be drawn
to has the `GpuCulling` component. This results in a significant
reduction in drawcalls; Bistro Exterior drops to 3 drawcalls for each
shadow cascade.
Note that PR #16670 will remove the `GpuCulling` component, making
shadows automatically use multidraw. Beware of that when testing this
patch; before #16670 lands, you'll need to manually add `GpuCulling` to
your camera in order to see any performance benefits.
PR #15756 made us create temporary render entities for all visible
objects, even if they had no render world counterpart. This regressed
our `many_cubes` time from about 3.59 ms/frame to 4.66 ms/frame.
This commit changes that behavior to use `Entity::PLACEHOLDER` instead
of creating a temporary render entity. This improves our `many_cubes`
time from 5.66 ms/frame to 3.96 ms/frame, a 43% speedup.
I tested 3D, 2D gizmos, and UI and they seem to work.
See the graph of `many_cubes` frame time attached to the PR (lower is
better); PR #15756 is the one in October.
This commit removes the logic that attempted to keep the
`MeshInputUniform` buffer contiguous. Not only was it slow and complex,
but it was also incorrect, which caused #16686 and #16690. I changed the
logic to simply maintain a free list of unused slots in the buffer and
preferentially fill them when pushing new mesh input uniforms.
Closes #16686.
Closes #16690.
# Objective
- A `Trigger` has multiple associated `Entity`s - the entity observing
the event, and the entity that was targeted by the event.
- The field `entity: Entity` encodes no semantic information about what
the entity is used for; you can already tell that it's an `Entity` from
the type signature!
## Solution
- Rename `trigger.entity()` to `trigger.target()`
---
## Changelog
- `Trigger`s are associated with multiple entities. `Trigger::entity()`
has been renamed to `Trigger::target()` to reflect the semantics of the
entity being returned.
## Migration Guide
- Rename `Trigger::entity()` to `Trigger::target()`.
- Rename `ObserverTrigger::entity` to `ObserverTrigger::target`
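Concretely (with a hypothetical `Damage` event for illustration):
```rust
use bevy::prelude::*;

#[derive(Event)]
struct Damage; // hypothetical event

fn on_damage(trigger: Trigger<Damage>) {
    // Before: let target = trigger.entity();
    let target = trigger.target();
    info!("{target:?} was damaged");
}
```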
This commit adds support for *multidraw*, which is a feature that allows
multiple meshes to be drawn in a single drawcall. `wgpu` currently
implements multidraw on Vulkan, so this feature is only enabled there.
Multiple meshes can be drawn at once if they're in the same vertex and
index buffers and are otherwise placed in the same bin. (Thus, for
example, at present the materials and textures must be identical, but
see #16368.) Multidraw is a significant performance improvement during
the draw phase because it reduces the number of rebindings, as well as
the number of drawcalls.
This feature is currently only enabled when GPU culling is used: i.e.
when `GpuCulling` is present on a camera. Therefore, if you run for
example `scene_viewer`, you will not see any performance improvements,
because `scene_viewer` doesn't add the `GpuCulling` component to its
camera.
Additionally, the multidraw feature is only implemented for opaque 3D
meshes and not for shadows or 2D meshes. I plan to make GPU culling the
default and to extend the feature to shadows in the future. Also, in the
future I suspect that polyfilling multidraw on APIs that don't support
it will be fruitful, as even without driver-level support use of
multidraw allows us to avoid expensive `wgpu` rebindings.
The bindless PR (#16368) broke some examples:
* `specialized_mesh_pipeline` and `custom_shader_instancing` failed
because they expect to be able to render a mesh with no material, by
overriding enough of the render pipeline to be able to do so. This PR
fixes the issue by restoring the old behavior in which we extract meshes
even if they have no material.
* `texture_binding_array` broke because it doesn't implement
`AsBindGroup::unprepared_bind_group`. This was tricky to fix because
there's a very good reason why `texture_binding_array` doesn't implement
that method: there's no sensible way to do so with `wgpu`'s current
bindless API, due to its multiple levels of borrowed references. To fix
the example, I split `MaterialBindGroup` into
`MaterialBindlessBindGroup` and `MaterialNonBindlessBindGroup`, and
allow direct custom implementations of `AsBindGroup::as_bind_group` for
the latter type of bind groups. To opt in to the new behavior, return
the `AsBindGroupError::CreateBindGroupDirectly` error from your
`AsBindGroup::unprepared_bind_group` implementation, and Bevy will call
your custom `AsBindGroup::as_bind_group` method as before.
## Migration Guide
* Bevy will now unconditionally call
`AsBindGroup::unprepared_bind_group` for your materials, so you must no
longer panic in that function. Instead, return the new
`AsBindGroupError::CreateBindGroupDirectly` error, and Bevy will fall
back to calling `AsBindGroup::as_bind_group` as before.
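A sketch of the opt-in (the error variant name is per this PR; the real method has more parameters than shown):
```rust
use bevy::render::render_resource::AsBindGroupError;

// Inside your AsBindGroup implementation, the body of
// `unprepared_bind_group` can simply signal the direct path:
fn unprepared_bind_group_body<T>() -> Result<T, AsBindGroupError> {
    // Bevy will fall back to calling your custom `as_bind_group`.
    Err(AsBindGroupError::CreateBindGroupDirectly)
}
```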
This commit moves the front end of the rendering pipeline to a retained
model when GPU preprocessing is in use (i.e. by default, except in
constrained environments). `RenderMeshInstance` and `MeshUniformData`
are stored from frame to frame and are updated only for the entities
that changed state. This was rather tricky and requires some careful
surgery to keep the data valid in the case of removals.
This patch is built on top of Bevy's change detection. Generally, this
worked, except that `ViewVisibility` isn't currently properly tracked.
Therefore, this commit adds proper change tracking for `ViewVisibility`.
Doing this required adding a new system that runs after all
`check_visibility` invocations, as no single `check_visibility`
invocation has enough global information to detect changes.
On the Bistro exterior scene, with all textures forced to opaque, this
patch improves steady-state `extract_meshes_for_gpu_building` from
93.8us to 34.5us and steady-state `collect_meshes_for_gpu_building` from
195.7us to 4.28us. Altogether this constitutes an improvement from 290us
to 38us, which is a 7.46x speedup.
This patch is only lightly tested and shouldn't land before 0.15 is
released anyway, so I'm releasing it as a draft.
This commit allows the Bevy renderer to use the clustering
infrastructure for light probes (reflection probes and irradiance
volumes) on platforms where at least 3 storage buffers are available. On
such platforms (the vast majority), we stop performing brute-force
searches of light probes for each fragment and instead only search the
light probes with bounding spheres that intersect the current cluster.
This should dramatically improve scalability of irradiance volumes and
reflection probes.
The primary platform that doesn't support 3 storage buffers is WebGL 2,
and we continue using a brute-force search of light probes on that
platform, as the UBO that stores per-cluster indices is too small to fit
the light probe counts. Note, however, that that platform also doesn't
support bindless textures (indeed, it would be very odd for a platform
to support bindless textures but not SSBOs), so we only support one of
each type of light probe per drawcall there in the first place.
Consequently, this isn't a performance problem, as the search will only
have one light probe to consider. (In fact, clustering would probably
end up being a performance loss.)
Known potential improvements include:
1. We currently cull based on a conservative bounding sphere test and
not based on the oriented bounding box (OBB) of the light probe. This is
improvable, but in the interests of simplicity, I opted to keep the
bounding sphere test for now. The OBB improvement can be a follow-up.
2. This patch doesn't change the fact that each fragment only takes a
single light probe into account. Typical light probe implementations
detect the case in which multiple light probes cover the current
fragment and perform some sort of weighted blend between them. As the
light probe fetch function presently returns only a single light probe,
implementing that feature would require more code restructuring, so I
left it out for now. It can be added as a follow-up.
3. Light probe implementations typically have a falloff range. Although
this is a wanted feature in Bevy, this particular commit also doesn't
implement that feature, as it's out of scope.
4. This commit doesn't raise the maximum number of light probes past its
current value of 8 for each type. This should be addressed later, but
would possibly require more bindings on platforms with storage buffers,
which would increase this patch's complexity. Even without raising the
limit, this patch should constitute a significant performance
improvement for scenes that get anywhere close to this limit. In the
interest of keeping this patch small, I opted to leave raising the limit
to a follow-up.
## Changelog
### Changed
* Light probes (reflection probes and irradiance volumes) are now
clustered on most platforms, improving performance when many light
probes are present.
---------
Co-authored-by: Benjamin Brienen <Benjamin.Brienen@outlook.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Currently, the prepass has no support for visibility ranges, so
artifacts appear when using dithering visibility ranges in conjunction
with a prepass. This patch fixes that problem.
Note that this patch changes the prepass to use sparse bind group
indices instead of sequential ones. I figured this is cleaner, because
it allows for greater sharing of WGSL code between the forward pipeline
and the prepass pipeline.
The `visibility_range` example has been updated to allow the prepass to
be toggled on and off.
This patch adds the infrastructure necessary for Bevy to support
*bindless resources*, by adding a new `#[bindless]` attribute to
`AsBindGroup`.
Classically, only a single texture (or sampler, or buffer) can be
attached to each shader binding. This means that switching materials
requires breaking a batch and issuing a new drawcall, even if the mesh
is otherwise identical. This adds significant overhead not only in the
driver but also in `wgpu`, as switching bind groups increases the amount
of validation work that `wgpu` must do.
*Bindless resources* are the typical solution to this problem. Instead
of switching bindings between each texture, the renderer instead
supplies a large *array* of all textures in the scene up front, and the
material contains an index into that array. This pattern is repeated for
buffers and samplers as well. The renderer now no longer needs to switch
binding descriptor sets while drawing the scene.
Unfortunately, as things currently stand, this approach won't quite work
for Bevy. Two aspects of `wgpu` conspire to make this ideal approach
unacceptably slow:
1. In the DX12 backend, all binding arrays (bindless resources) must
have a constant size declared in the shader, and all textures in an
array must be bound to actual textures. Changing the size requires a
recompile.
2. Changing even one texture incurs revalidation of all textures, a
process that takes time that's linear in the total size of the binding
array.
This means that declaring a large array of textures big enough to
encompass the entire scene is presently unacceptably slow. For example,
if you declare 4096 textures, then `wgpu` will have to revalidate all
4096 textures if even a single one changes. This process can take
multiple frames.
To work around this problem, this PR groups bindless resources into
small *slabs* and maintains a free list for each. The size of each slab
for the bindless arrays associated with a material is specified via the
`#[bindless(N)]` attribute. For instance, consider the following
declaration:
```rust
#[derive(AsBindGroup)]
#[bindless(16)]
struct MyMaterial {
    #[buffer(0)]
    color: Vec4,
    #[texture(1)]
    #[sampler(2)]
    diffuse: Handle<Image>,
}
```
The `#[bindless(N)]` attribute specifies that, if bindless arrays are
supported on the current platform, each resource becomes a binding array
of N instances of that resource. So, for `MyMaterial` above, the `color`
attribute is exposed to the shader as `binding_array<vec4<f32>, 16>`,
the `diffuse` texture is exposed to the shader as
`binding_array<texture_2d<f32>, 16>`, and the `diffuse` sampler is
exposed to the shader as `binding_array<sampler, 16>`. Inside the
material's vertex and fragment shaders, the applicable index is
available via the `material_bind_group_slot` field of the `Mesh`
structure. So, for instance, you can access the current color like so:
```wgsl
// `uniform` binding arrays are a non-sequitur, so `uniform` is automatically promoted
// to `storage` in bindless mode.
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 16>;
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
    ...
}
```
Note that portable shader code can't guarantee that the current platform
supports bindless textures. Indeed, bindless mode is only available in
Vulkan and DX12. The `BINDLESS` shader definition is available for your
use to determine whether you're on a bindless platform or not. Thus a
portable version of the shader above would look like:
```wgsl
#ifdef BINDLESS
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 16>;
#else // BINDLESS
@group(2) @binding(0) var<uniform> material_color: Color;
#endif // BINDLESS
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
#ifdef BINDLESS
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
#else // BINDLESS
    let color = material_color;
#endif // BINDLESS
    ...
}
```
Importantly, this PR *doesn't* update `StandardMaterial` to be bindless.
So, for example, `scene_viewer` will currently not run any faster. I
intend to update `StandardMaterial` to use bindless mode in a follow-up
patch.
A new example, `shaders/shader_material_bindless`, has been added to
demonstrate how to use this new feature.
Here's a Tracy profile of `submit_graph_commands` of this patch and an
additional patch (not submitted yet) that makes `StandardMaterial` use
bindless. Red is those patches; yellow is `main`. The scene was Bistro
Exterior with a hack that forces all textures to opaque. You can see a
1.47x mean speedup.
## Migration Guide
* `RenderAssets::prepare_asset` now takes an `AssetId` parameter.
* Bin keys now have Bevy-specific material bind group indices instead of
`wgpu` material bind group IDs, as part of the bindless change. Use the
new `MaterialBindGroupAllocator` to map from bind group index to bind
group ID.
# Objective
- Fixes #16078
## Solution
- Rename things to clarify that we _want_ unclipped depth for
directional light shadow views, and need some way of disabling the GPU's
builtin depth clipping
- Use DEPTH_CLIP_CONTROL instead of the fragment shader emulation on
supported platforms
- Pass only the clip position depth instead of the whole clip position
between the vertex and fragment shaders (no idea if this helps
performance or not; the compiler might optimize it anyway)
- Meshlets:
  - HW raster always uses DEPTH_CLIP_CONTROL since it targets a more
limited set of platforms
  - SW raster was not handling DEPTH_CLAMP_ORTHO correctly; it ended up
pretty much doing nothing
  - This PR made me realize that SW raster technically should have depth
clipping for all views that are not directional light shadows, but I
decided not to bother writing it. I'm not sure that it ever matters in
practice. If proven otherwise, I can add it.
## Testing
- Did you test these changes? If so, how?
  - Lighting example. Both opaque (no fragment shader) and alpha masked
geometry (fragment shader emulation) are working with
depth_clip_control, and both work when it's turned off. Also tested the
meshlet example.
- Are there any parts that need more testing?
  - Performance. I can't figure out a good test scene.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
  - Toggle depth_clip_control_supported in prepass/mod.rs line 323 to
turn this PR on or off.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
  - Native
---
## Migration Guide
- `MeshPipelineKey::DEPTH_CLAMP_ORTHO` is now
`MeshPipelineKey::UNCLIPPED_DEPTH_ORTHO`
- The `DEPTH_CLAMP_ORTHO` shaderdef has been renamed to
`UNCLIPPED_DEPTH_ORTHO_EMULATION`
- `clip_position_unclamped: vec4<f32>` is now `unclipped_depth: f32`
# Objective
PCSS still has some fundamental issues (#16155). We should resolve them
before "releasing" the feature.
## Solution
1. Rename the already-optional `pbr_pcss` cargo feature to
`experimental_pbr_pcss` to better communicate its state to developers.
2. Adjust the description of the `experimental_pbr_pcss` cargo feature
to better communicate its state to developers.
3. Gate PCSS-related light component fields behind that cargo feature,
to prevent surfacing them to developers by default.
# Objective
_If I understand it correctly_, we were checking mesh visibility, as
well as re-rendering point and spot light shadow maps for each view.
This makes it so that M views and N lights produce M x N complexity.
This PR aims to fix that, as well as introduce a stress test for this
specific scenario.
## Solution
- Keep track of what lights have already had mesh visibility calculated
and do not calculate it again;
- Reuse shadow depth textures and attachments across all views, and only
render shadow maps for the _first_ time a light is encountered on a
view;
- Directional lights remain unaltered, since their shadow map cascades
are view-dependent;
- Add a new `many_cameras_lights` stress test example to verify the
solution
## Showcase
- 110% speedup on the stress test
- 83% reduction of memory usage in the stress test
### Before (5.35 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 25 57"
src="https://github.com/user-attachments/assets/136b0785-e9a4-44df-9a22-f99cc465e126">
### After (11.34 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 24 35"
src="https://github.com/user-attachments/assets/b8dd858f-5e19-467f-8344-2b46ca039630">
## Testing
- Did you test these changes? If so, how?
  - On my game project where I have two cameras and many shadow casting
lights, I managed to get pretty much double the FPS.
  - Also included a stress test, see the comparison above
- Are there any parts that need more testing?
  - Yes, I would like help verifying that this fix is indeed correct, and
that we were really re-rendering the shadow maps by mistake and it's
indeed okay to not do that
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
  - Run the `many_cameras_lights` example
  - On the `main` branch, cherry pick the commit with the example (`git
cherry-pick --no-commit 1ed4ace01`) and run it
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
  - macOS
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Fixes #15940
## Solution
Remove the `pub use` and fix the compile errors.
Make `bevy_image` available as `bevy::image`.
## Testing
Feature Frenzy would be good here! Maybe I'll learn how to use it if I
have some time this weekend, or maybe a reviewer can use it.
## Migration Guide
Use `bevy_image` instead of `bevy_render::texture` items.
---------
Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- wgpu 0.20 made workgroup vars stop being zero-initialized by default.
This broke some applications (cough foresight cough) and now we work
around it. wgpu exposes a compilation option that zero-initializes
workgroup memory by default, but Bevy does not expose it.
## Solution
- expose the compilation option wgpu gives us
## Testing
- Ran examples: 3d_scene, compute_shader_game_of_life, gpu_readback,
lines, specialized_mesh_pipeline. They all work.
- Confirmed the fix for our own problems.
---
## Migration Guide
- Add `zero_initialize_workgroup_memory: false,` to
`ComputePipelineDescriptor` or `RenderPipelineDescriptor` structs to
preserve 0.14 functionality; add `zero_initialize_workgroup_memory:
true,` to restore Bevy 0.13 functionality.
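A hedged sketch of the migration (import path assumed; `RenderPipelineDescriptor` works the same way):
```rust
use bevy::render::render_resource::ComputePipelineDescriptor;

// Sketch: flip the new field on an existing descriptor to restore the
// Bevy 0.13 zero-initialization behavior.
fn with_zero_init(desc: ComputePipelineDescriptor) -> ComputePipelineDescriptor {
    ComputePipelineDescriptor {
        zero_initialize_workgroup_memory: true,
        ..desc
    }
}
```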
# Objective
- Fix a formatting issue in "mesh_view_binding.wgsl".
_Note: since the naga-oil preprocessor matches the whole line when
finding an "#endif", this only matters for external formatting tools and
consistency._
## Solution
Trivial change: add '//' before the closing comment of the "#endif".
# Objective
GPU-based mesh uniform construction in the `GpuPreprocessNode` is
currently in `Core3d`. The node iterates all views and schedules the
uniform construction for each, so:
- when there are multiple 3d cameras, it runs multiple times on each
view
- if a view wants to render meshes but doesn't use the `Core3d` graph,
the camera must run later than at least one `Core3d`-based camera (or
add the node to its own graph, duplicating the work)
- if views want to share mesh uniforms, there is no way to avoid running
the preprocessing for every view
## Solution
- move the node to the top level of the rendergraph, before the camera
driver node
- make the `PreprocessBindGroup` `clone`able, and add a
`SkipGpuPreprocessing` component to allow opting out per view
# Objective
Order independent transparency can filter fragment writes based on the
alpha value; the threshold is currently hard-coded to write anything
higher than 0.0. By making that value configurable, users can optimize
fragment writes, potentially reducing the number of layers needed and
improving performance at the cost of some transparency quality.
## Solution
This PR adds `alpha_threshold` to the
`OrderIndependentTransparencySettings` component and uses the struct to
configure a corresponding shader uniform. This uniform is then used
instead of the hard-coded value.
To configure OIT with a custom alpha threshold, use:
```rust
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3d::default(),
        OrderIndependentTransparencySettings {
            layer_count: 8,
            alpha_threshold: 0.2,
        },
    ));
}
```
## Testing
I tested this change using the included OIT example, as well as with two
additional projects.
## Migration Guide
If you previously explicitly initialized
`OrderIndependentTransparencySettings` with your own `layer_count`, you
will now have to add either a `..default()` statement or an explicit
`alpha_threshold` value:
```rust
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3d::default(),
        OrderIndependentTransparencySettings {
            layer_count: 16,
            ..default()
        },
    ));
}
```
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
The two additional linear texture samplers that PCSS added caused us to
blow past the sampler limit on Apple Silicon macOS and WebGL. To fix the
issue, this commit gates PCSS behind a new `pbr_pcss` cargo feature,
disabling it when the feature is not present.
Closes #15345.
Closes #15525.
Closes #15821.
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
The PCSS PR #13497 increased the size of clusterable objects from 64
bytes to 80 bytes but didn't decrease the maximum number of clusterable
objects to compensate, so the UBO containing them blew past the 16kB
limit on WebGL 2. This commit fixes the issue by lowering the maximum
number of clusterable objects to 204, which puts us under the 16kB limit
again.
Closes #15998.
# Objective
- I made a mistake in #15902, specifically [this
diff](e2faedb99c)
-- the `point_light_count` variable is used for all point lights, not
just shadow mapped ones, so I cannot add `.min(max_texture_cubes)`
there. (Despite `spot_light_count` having `.min(..)`)
It may have broken code like this (where `index` is an index into the
`point_light` vec):
9930df83ed/crates/bevy_pbr/src/render/light.rs (L848-L850)
and also causes a panic here:
9930df83ed/crates/bevy_pbr/src/render/light.rs (L1173-L1174)
## Solution
- Adds `.min(max_texture_cubes)` directly to the loop where texture
views for point lights are created.
## Testing
- The `lighting` example (with the directional light removed; the
original example doesn't crash, as only 1 directional-or-spot light in
total is shadow-mapped on WebGL) no longer crashes on WebGL
# Objective
Bevy seems to want to standardize on "American English" spellings. Not
sure if this is laid out anywhere in writing, but see also #15947.
While perusing the docs for `typos`, I noticed that it has a `locale`
config option and tried it out.
## Solution
Switch to `en-us` locale in the `typos` config and run `typos -w`
## Migration Guide
The following methods or fields have been renamed from `*dependants*` to
`*dependents*`.
- `ProcessorAssetInfo::dependants`
- `ProcessorAssetInfos::add_dependant`
- `ProcessorAssetInfos::non_existent_dependants`
- `AssetInfo::dependants_waiting_on_load`
- `AssetInfo::dependants_waiting_on_recursive_dep_load`
- `AssetInfos::loader_dependants`
- `AssetInfos::remove_dependants_and_labels`
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/15871
(Camera is done in #15946)
## Solution
- Do the same as #15904 for other extraction systems
- Added missing `SyncComponentPlugin` for DOF, TAA, and SSAO
(According to the
[documentation](https://dev-docs.bevyengine.org/bevy/render/sync_component/struct.SyncComponentPlugin.html),
this plugin "needs to be added for manual extraction implementations."
We may need to check this is done.)
## Testing
Modified examples locally to add toggles where they didn't exist.
- [x] DOF - toggling DOF component and perspective in `depth_of_field`
example
- [x] TAA - toggling `Camera.is_active` and TAA component
- [x] clusters - not entirely sure, toggling `Camera.is_active` in
`many_lights` example (no crash/glitch even without this PR)
- [x] previous_view - toggling `Camera.is_active` in `skybox` (no
crash/glitch even without this PR)
- [x] lights - toggling `Visibility` of `DirectionalLight` in `lighting`
example
- [x] SSAO - toggling `Camera.is_active` and SSAO component in `ssao`
example
- [x] default UI camera view - toggling `Camera.is_active` (nop without
#15946 because UI defaults to some camera even if `DefaultCameraView` is
not there)
- [x] volumetric fog - toggling existence of volumetric light. Looks
like optimization, no change in behavior/visuals
# Objective
- Fixes#15897
## Solution
- Despawn light view entities when they go unused or when the
corresponding view is not alive.
## Testing
- `scene_viewer` example no longer prints "The preprocessing index
buffer wasn't present" warning
- modified an example to try toggling shadows for all kinds of light:
https://gist.github.com/akimakinai/ddb0357191f5052b654370699d2314cf
# Objective
#15320 is a particularly painful breaking change, and the new
`RenderEntity` in particular is very noisy, with a lot of `let entity =
entity.id()` spam.
## Solution
Implement `WorldQuery`, `QueryData` and `ReadOnlyQueryData` for
`RenderEntity` and `MainEntity`.
These work the same as the `Entity` impls from a user-facing
perspective: they simply return an owned (copied) `Entity` identifier.
This dramatically reduces noise and eases migration.
Under the hood, these impls defer to the implementations for `&T` for
everything other than the "call .id() for the user" bit, as they involve
read-only access to component data. Doing it this way (as opposed to
implementing a custom fetch, as tried in the first commit) dramatically
reduces the maintenance risk of complex unsafe code outside of
`bevy_ecs`.
To make this easier (and encourage users to do this themselves!), I've
made `ReadFetch` and `WriteFetch` slightly more public: they're no
longer `doc(hidden)`. This is a good change, since trying to vendor the
logic is much worse than just deferring to the existing tested impls.
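As a sketch of the user-facing result (module path for `RenderEntity` assumed):
```rust
use bevy::prelude::*;
use bevy::render::sync_world::RenderEntity; // path assumed

// Sketch: `RenderEntity` as query data yields an owned `Entity` directly,
// removing the `let entity = entity.id()` boilerplate from extraction code.
fn extract_things(query: Query<(RenderEntity, &GlobalTransform)>) {
    for (render_entity, _transform) in &query {
        let _: Entity = render_entity; // already a plain Entity
    }
}
```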
## Testing
I've run a handful of rendering examples (breakout, alien_cake_addict,
auto_exposure, fog_volumes, box_shadow) and nothing broke.
## Follow-up
We should lint for the uses of `&RenderEntity` and `&MainEntity` in
queries: this is just less nice for no reason.
---------
Co-authored-by: Trashtalk217 <trashtalk217@gmail.com>
# Objective
In the Render World, there are a number of collections that are derived
from Main World entities and are used to drive rendering. The most
notable are:
- `VisibleEntities`, which is generated in the `check_visibility` system
and contains visible entities for a view.
- `ExtractedInstances`, which maps entity ids to asset ids.
In the old model, these collections were trivially kept in sync -- any
extracted phase item could look itself up because the render entity id
was guaranteed to always match the corresponding main world id.
After #15320, this became much more complicated, and was leading to a
number of subtle bugs in the Render World. The main rendering systems,
i.e. `queue_material_meshes` and `queue_material2d_meshes`, follow a
similar pattern:
```rust
for visible_entity in visible_entities.iter::<With<Mesh2d>>() {
    let Some(mesh_instance) = render_mesh_instances.get_mut(visible_entity) else {
        continue;
    };
    // Look some more stuff up and specialize the pipeline...
    let bin_key = Opaque2dBinKey {
        pipeline: pipeline_id,
        draw_function: draw_opaque_2d,
        asset_id: mesh_instance.mesh_asset_id.into(),
        material_bind_group_id: material_2d.get_bind_group_id().0,
    };
    opaque_phase.add(
        bin_key,
        *visible_entity,
        BinnedRenderPhaseType::mesh(mesh_instance.automatic_batching),
    );
}
```
In this case, `visible_entities` and `render_mesh_instances` are both
collections that are created and keyed by Main World entity ids, and so
this lookup happens to work by coincidence. However, there is a major
unintentional bug here: namely, because `visible_entities` is a
collection of Main World ids, the phase item being queued is created
with a Main World id rather than its correct Render World id.
This happens to not break mesh rendering because the render commands
used for drawing meshes do not access the `ItemQuery` parameter, but
demonstrates the confusion that is now possible: our UI phase items are
correctly being queued with Render World ids while our meshes aren't.
Additionally, this makes it very easy and error prone to use the wrong
entity id to look up things like assets. For example, if instead we
ignored visibility checks and queued our meshes via a query, we'd have
to be extra careful to use `&MainEntity` instead of the natural
`Entity`.
## Solution
Make all collections that are derived from Main World data use
`MainEntity` as their key, to ensure type safety and avoid accidentally
looking up data with the wrong entity id:
```rust
pub type MainEntityHashMap<V> = hashbrown::HashMap<MainEntity, V, EntityHash>;
```
Additionally, we make all `PhaseItem` be able to provide both their Main
and Render World ids, to allow render phase implementors maximum
flexibility as to what id should be used to look up data.
You can think of this like tracking at the type level whether something
in the Render World should use its "primary key", i.e. entity id, or
needs to use a foreign key, i.e. `MainEntity`.
## Testing
##### TODO:
This will require extensive testing to make sure things didn't break!
Additionally, some extraction logic has become more complicated and
needs to be checked for regressions.
## Migration Guide
With the advent of the retained render world, collections that contain
references to `Entity` that are extracted into the render world have
been changed to contain `MainEntity` in order to prevent errors where a
render world entity id is used to look up an item by accident. Custom
rendering code may need to be changed to query for `&MainEntity` in
order to look up the correct item from such a collection. Additionally,
users who implement their own extraction logic for collections of main
world entities should strongly consider extracting into a different
collection that uses `MainEntity` as a key.
Additionally, render phases now require specifying both the `Entity` and
`MainEntity` for a given `PhaseItem`. Custom render phases should ensure
`MainEntity` is available when queuing a phase item.
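As a hedged sketch of the pattern this migration describes (the resource and data types here are hypothetical; `sync_world` paths assumed):
```rust
use bevy::prelude::*;
use bevy::render::sync_world::{MainEntity, MainEntityHashMap}; // paths assumed

#[derive(Resource, Default, Deref)]
struct MyExtractedData(MainEntityHashMap<u32>); // hypothetical collection

// Sketch: the render world id identifies the phase item; the main world
// id is the foreign key into extracted collections.
fn queue_my_items(query: Query<(Entity, &MainEntity)>, data: Res<MyExtractedData>) {
    for (render_entity, main_entity) in &query {
        if let Some(_value) = data.get(main_entity) {
            // ...queue a phase item using `render_entity` here...
        }
    }
}
```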
# Objective
Fixes#15560
Fixes (most of) #15570
Currently a lot of examples (and presumably some user code) depend on
toggling certain render features by adding/removing a single component
to an entity, e.g. `SpotLight` to toggle a light. Because of the
retained render world this no longer works: Extract will add any new
components, but when it is removed the entity persists unchanged in the
render world.
## Solution
Add `SyncComponentPlugin<C: Component>` that registers
`SyncToRenderWorld` as a required component for `C`, and adds a
component hook that will clear all components from the render world
entity when `C` is removed. We add this plugin to
`ExtractComponentPlugin` which fixes most instances of the problem. For
custom extraction logic we can manually add `SyncComponentPlugin` for
that component.
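For instance, a minimal sketch (hypothetical component; plugin construction via `default()` assumed):
```rust
use bevy::prelude::*;
use bevy::render::sync_component::SyncComponentPlugin;

#[derive(Component)]
struct MyCustomExtracted; // component with hand-written extraction

// Sketch: register the sync plugin so that removing the component also
// clears the corresponding render world entity.
fn build(app: &mut App) {
    app.add_plugins(SyncComponentPlugin::<MyCustomExtracted>::default());
}
```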
We also rename `WorldSyncPlugin` to `SyncWorldPlugin`, starting a naming
convention in line with the `Extract` plugins.
In this PR I also fixed a bunch of breakage related to the retained
render world, stemming from old code that assumed that `Entity` would be
the same in both worlds.
I found that using the `RenderEntity` wrapper instead of `Entity` in
data structures when referring to render world entities makes intent
much clearer, so I propose we make this an official pattern.
## Testing
Run examples like
```
cargo run --features pbr_multi_layer_material_textures --example clearcoat
cargo run --example volumetric_fog
```
and see that they work, and that toggles work correctly. But really we
should test every single example, as we might not even have caught all
the breakage yet.
---
## Migration Guide
The retained render world notes should be updated to explain this edge
case and `SyncComponentPlugin`
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Trashtalk217 <trashtalk217@gmail.com>
# Objective
- Prepare for streaming by storing vertex data per-meshlet, rather than
per-mesh (this means duplicating vertices per-meshlet)
- Compress vertex data to reduce the cost of this
## Solution
The important parts are in from_mesh.rs, the changes to the Meshlet type
in asset.rs, and the changes in meshlet_bindings.wgsl. Everything else
is pretty secondary/boilerplate/straightforward changes.
- Positions are quantized in centimeters with a user-provided power-of-2
factor (ideally auto-determined, but that's a TODO for the future),
encoded as an offset relative to the minimum value within the meshlet,
and then stored as a packed list of bits using the minimum number of
bits needed for each vertex position channel for that meshlet:
  - Quantize positions (lossily; throws away precision that's not
needed, leading to fewer bits in the bitstream encoding)
  - Get the min/max quantized value of each X/Y/Z channel of the
quantized positions within a meshlet
  - Encode values relative to the min value of the meshlet, i.e. convert
from [min, max] to [0, max - min]
  - The new max value in the meshlet is (max - min), which only takes N
bits, so we only need N bits to store each channel within the meshlet
(lossless)
  - We can store the min value and the fact that it takes N bits per
channel in the meshlet metadata, and reconstruct the position from the
bitstream (see the sketch after this list)
- Normals are octahedral encoded and then snorm2x16 packed and stored as
a single u32
  - It would be better to implement the precise variant of octahedral
encoding for extra precision (no extra decode cost), but I decided to
keep it simple for now and leave that as a follow-up
  - I tried a quantizing and bitstream encoding scheme like I did for
positions, but struggled to get it smaller. Decided to go with this for
simplicity for now
- UVs are uncompressed and take a full 64 bits per vertex, which is
expensive
  - In the future this should be improved
- Tangents, as of the previous PR, are not explicitly stored and are
instead derived from screen space gradients
- While I'm here, split up MeshletMeshSaverLoader into two separate
types
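As a rough illustration of the per-channel scheme sketched above (illustrative only, not the actual from_mesh.rs code):
```rust
/// Minimal sketch of the per-channel, per-meshlet position encoding.
fn encode_channel(quantized: &[i32]) -> (i32, u32, Vec<u32>) {
    let min = *quantized.iter().min().unwrap();
    let max = *quantized.iter().max().unwrap();
    // Values shifted into [0, max - min] need only `bits` bits each (lossless).
    let bits = 32 - ((max - min) as u32).leading_zeros();
    let offsets = quantized.iter().map(|v| (v - min) as u32).collect();
    (min, bits, offsets) // `min` and `bits` go in the meshlet metadata
}
```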
Other future changes include implementing a smaller encoding of triangle
data (3 u8 indices = 24 bits per triangle currently), and more
disk-oriented compression schemes.
References:
* "A Deep Dive into UE5's Nanite Virtualized Geometry"
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf#page=128
(also available on youtube)
* "Towards Practical Meshlet Compression"
https://arxiv.org/pdf/2404.06359
* "Vertex quantization in Omniforce Game Engine"
https://daniilvinn.github.io/2024/05/04/omniforce-vertex-quantization.html
## Testing
- Did you test these changes? If so, how?
  - Converted the Stanford bunny, and rendered it with a debug material
showing normals, and confirmed that it's identical to what's on main.
EDIT: See additional testing in the comments below.
- Are there any parts that need more testing?
  - Could use some more size comparisons on various meshes, and testing
different quantization factors. Not sure if 4 is a good default. EDIT:
See additional testing in the comments below.
  - Also did not test runtime performance of the shaders. EDIT: See
additional testing in the comments below.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
  - Use my unholy script, replacing the meshlet example:
https://paste.rs/7xQHk.rs (must make MeshletMesh fields pub instead of
pub(crate), must add lz4_flex as a dev-dependency, must compile with the
meshlet and meshlet_processor features; the mesh must have only
positions, normals, and UVs, no vertex colors or tangents)
---
## Migration Guide
- TBD by JMS55 at the end of the release
The previous fixes were breaking pretty much everything on main due to
naga-oil complaining about the OIT shader not being loaded, since
apparently webgl is a default feature. This fix is a bit messier, but
properly warns the user and is probably what we should have gone for in
the first place.
# Objective
- Alpha blending can easily fail in many situations and requires sorting
on the CPU
## Solution
- Implement order independent transparency (OIT) as an alternative to
alpha blending
- The implementation uses 2 passes:
  - The first pass records all the fragment colors and positions to a
buffer that is the size of N layers * the render target resolution.
  - The second pass sorts the fragments, blends them, and draws them to
the screen. It also currently does manual depth testing because early-z
fails in too many cases in the first pass.
## Testing
- We've been using this implementation at foresight in production for
many months now and we haven't had any issues related to OIT.
---
## Future work
- Add an example showing how to use OIT for a custom material
- Next step would be to implement a per-pixel linked list to reduce
memory use
- I'd also like to investigate using a BinnedRenderPhase instead of a
SortedRenderPhase. If it works, it would make the transparent pass
significantly faster.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
Currently, it's possible for the `collect_meshes_for_gpu_building`
system to run after `set_mesh_motion_vector_flags`. This will cause
those motion vector flags to be overwritten, which will cause the shader
to ignore the motion vectors for skinned meshes, which will cause
graphical artifacts.
This patch corrects the issue by forcing `set_mesh_motion_vector_flags`
to run after `collect_meshes_for_gpu_building`.
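In sketch form, the added constraint is a simple `.after()` edge (the schedule placement here is illustrative; the real systems live in the render app):
```rust
use bevy::prelude::*;

fn collect_meshes_for_gpu_building() {} // stand-ins for the real systems
fn set_mesh_motion_vector_flags() {}

// Sketch: force the flag-setting system to run after collection so the
// motion vector flags aren't overwritten.
fn build(app: &mut App) {
    app.add_systems(
        Update, // illustrative; not the actual schedule
        set_mesh_motion_vector_flags.after(collect_meshes_for_gpu_building),
    );
}
```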
# Objective
After merging the retained render world (#15320), we now have a good way
of creating a link between worlds (*HIYAA intensifies*). This means that
`get_or_spawn` is no longer necessary for that purpose. `Entity` should
be opaque, as the warning above `get_or_spawn` says. This is also part
of #15459.
I'm deprecating `get_or_spawn_batch` in a different PR in order to keep
the PR small in size.
## Solution
Deprecate `get_or_spawn` and replace it with `get_entity` in most
contexts. If it's possible to query `&RenderEntity`, then the entity is
synced and `render_entity.id()` is initialized in the render world.
## Migration Guide
If you are given an `Entity` and you want to do something with it, use
`Commands.entity(...)` or `World.entity(...)`. If instead you want to
spawn something use `Commands.spawn(...)` or `World.spawn(...)`. If you
are not sure if an entity exists, you can always use `get_entity` and
match on the `Option<...>` that is returned.
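For example (with a hypothetical `Marker` component; `get_entity` returning an `Option` as described above):
```rust
use bevy::prelude::*;

#[derive(Component)]
struct Marker; // hypothetical component

// Sketch: replacing a former `get_or_spawn` call site.
fn tag_if_alive(commands: &mut Commands, entity: Entity) {
    // Before: commands.get_or_spawn(entity).insert(Marker);
    if let Some(mut entity_commands) = commands.get_entity(entity) {
        entity_commands.insert(Marker);
    }
}
```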
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Fixes #15525
The deferred and mesh pipelines tonemapping LUT bindings were
accidentally out of sync, breaking deferred rendering.
As noted in the issue, it's still broken on wasm due to hitting a texture
limit.
## Solution
Add constants for these instead of hardcoding them.
## Testing
Test with `cargo run --example deferred_rendering` and see it works, run
the same on main and see it crash.
Early implementation. I still have to fix the documentation and consider
writing a small migration guide.
Questions left to answer:
* [x] should thickness be an overridable constant?
* [x] is there a better way to implement `Eq`/`Hash` for `SSAOMethod`?
* [x] do we want to keep the linear sampler for the depth texture?
* [x] is there a better way to separate the logic than preprocessor
macros?
## Migration guide
SSAO algorithm was changed from GTAO to VBAO (visibility bitmasks). A
new field, `constant_object_thickness`, was added to
`ScreenSpaceAmbientOcclusion`. `ScreenSpaceAmbientOcclusion` also lost
its `Eq` and `Hash` implementations.
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
As discussed in #15521
- Partial revert of #14897, reverting the change to the methods to
consume `self`
- The `insert_if` method is kept
The migration guide of #14897 should be removed.
Closes #15521
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
A big step in the migration to required components: meshes and
materials!
## Solution
As per the [selected
proposal](https://hackmd.io/@bevy/required_components/%2Fj9-PnF-2QKK0on1KQ29UWQ):
- Deprecate `MaterialMesh2dBundle`, `MaterialMeshBundle`, and
`PbrBundle`.
- Add `Mesh2d` and `Mesh3d` components, which wrap a `Handle<Mesh>`.
- Add `MeshMaterial2d<M: Material2d>` and `MeshMaterial3d<M: Material>`,
which wrap a `Handle<M>`.
- Meshes *without* a mesh material should be rendered with a default
material. The existence of a material is determined by
`HasMaterial2d`/`HasMaterial3d`, which is required by
`MeshMaterial2d`/`MeshMaterial3d`. This gets around problems with the
generics.
Previously:
```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```
Now:
```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```
If the mesh material is missing, previously nothing was rendered. Now,
it renders a white default `ColorMaterial` in 2D and a
`StandardMaterial` in 3D (this can be overridden).
(Screenshots: only every other entity has a material.)
Why white? This is still open for discussion, but I think white makes
sense for a *default* material, while *invalid* asset handles pointing
to nothing should have something like a pink material to indicate that
something is broken (I don't handle that in this PR yet). This is kind
of a mix of Godot and Unity: Godot just renders a white material for
non-existent materials, while Unity renders nothing when no materials
exist, but renders pink for invalid materials. I can also change the
default material to pink if that is preferable though.
## Testing
I ran some 2D and 3D examples to test if anything changed visually. I
have not tested all examples or features yet however. If anyone wants to
test more extensively, it would be appreciated!
## Implementation Notes
- The relationship between `bevy_render` and `bevy_pbr` is weird here.
`bevy_render` needs `Mesh3d` for its own systems, but `bevy_pbr` has all
of the material logic, and `bevy_render` doesn't depend on it. I feel
like the two crates should be refactored in some way, but I think that's
out of scope for this PR.
- I didn't migrate meshlets to required components yet. That can
probably be done in a follow-up, as this is already a huge PR.
- It is becoming increasingly clear to me that we really, *really* want
to disallow raw asset handles as components. They caused me a *ton* of
headache here already, and it took me a long time to find every place
that queried for them or inserted them directly on entities, since there
were no compiler errors for it. If we don't remove the `Component`
derive, I expect raw asset handles to be a *huge* footgun for users as
we transition to wrapper components, especially as handles as components
have been the norm so far. I personally consider this to be a blocker
for 0.15: we need to migrate to wrapper components for asset handles
everywhere, and remove the `Component` derive. Also see
https://github.com/bevyengine/bevy/issues/14124.
---
## Migration Guide
Asset handles for meshes and mesh materials must now be wrapped in the
`Mesh2d` and `MeshMaterial2d` or `Mesh3d` and `MeshMaterial3d`
components for 2D and 3D respectively. Raw handles as components no
longer render meshes.
Additionally, `MaterialMesh2dBundle`, `MaterialMeshBundle`, and
`PbrBundle` have been deprecated. Instead, use the mesh and material
components directly.
Previously:
```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```
Now:
```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```
If the mesh material is missing, a white default material is now used.
Previously, nothing was rendered if the material was missing.
The `WithMesh2d` and `WithMesh3d` query filter type aliases have also
been removed. Simply use `With<Mesh2d>` or `With<Mesh3d>`.
---------
Co-authored-by: Tim Blackbird <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
- Adopted from #14449
- Still fixes #12144.
## Migration Guide
The retained render world is a complex change: migrating might take one
of a few different forms depending on the patterns you're using.
For every example, we specify in which world the code is run. Most of
the changes affect render world code, so for the average Bevy user who's
using Bevy's high-level rendering APIs, these changes are unlikely to
affect your code.
### Spawning entities in the render world
Previously, if you spawned an entity with `world.spawn(...)`,
`commands.spawn(...)` or some other method in the rendering world, it
would be despawned at the end of each frame. In 0.15, this is no longer
the case and so your old code could leak entities. This can be mitigated
by either re-architecting your code to no longer continuously spawn
entities (like you're used to in the main world), or by adding the
`bevy_render::world_sync::TemporaryRenderEntity` component to the entity
you're spawning. Entities tagged with `TemporaryRenderEntity` will be
removed at the end of each frame (like before).
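A minimal sketch of the second option (`FrameDebugOverlay` is a
hypothetical render-world component):
```rust
use bevy::prelude::*;
use bevy::render::world_sync::TemporaryRenderEntity;

#[derive(Component)]
struct FrameDebugOverlay; // hypothetical per-frame render data

// Runs in the render world: the spawned entity is removed at frame end.
fn spawn_overlay(mut commands: Commands) {
    commands.spawn((FrameDebugOverlay, TemporaryRenderEntity));
}
```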
### Extract components with `ExtractComponentPlugin`
```rust
// main world
app.add_plugins(ExtractComponentPlugin::<ComponentToExtract>::default());
```
`ExtractComponentPlugin` has been changed to only work with synced
entities. Entities are automatically synced if `ComponentToExtract` is
added to them. However, entities are not "unsynced" when a given
`ComponentToExtract` is removed, because an entity may have multiple
components to extract; unsyncing would cause the other components to no
longer be extracted.
So be careful when only removing extracted components from entities in
the render world, because it might leave an entity behind in the render
world. The solution here is to avoid only removing extracted components
and instead despawn the entire entity.
### Manual extraction using `Extract<Query<(Entity, ...)>>`
```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(Entity, &Clusters, &Camera)>>,
) {
    for (entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(entity).insert(...);
    }
}
```
One of the primary consequences of the retained rendering world is that
there's no longer a one-to-one mapping from entity IDs in the main world
to entity IDs in the render world. Unlike in Bevy 0.14, Entity 42 in the
main world doesn't necessarily map to entity 42 in the render world.
Previous code which called `get_or_spawn(main_world_entity)` in the
render world will no longer work correctly (`Extract<Query<(Entity,
...)>>` returns main world entities). Instead, you should use `&RenderEntity` and
`render_entity.id()` to get the correct entity in the render world. Note
that this entity does need to be synced first in order to have a
`RenderEntity`.
When performing manual extraction, this syncing won't happen automatically
(as it does with `ExtractComponentPlugin`), so add a `SyncToRenderWorld`
marker component to the entities you want to extract.
This results in the following code:
```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(&RenderEntity, &Clusters, &Camera)>>,
) {
    for (render_entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(render_entity.id()).insert(...);
    }
}

// in main world, when spawning
world.spawn((Clusters::default(), Camera::default(), SyncToRenderWorld));
```
### Looking up `Entity` ids in the render world
As previously stated, there's now no correspondence between main world
and render world `Entity` identifiers.
Querying for `Entity` in the render world will return the `Entity` id in
the render world: query for `MainEntity` (and use its `id()` method) to
get the corresponding entity in the main world.
This is also a good way to tell the difference between synced and
unsynced entities in the render world, because unsynced entities won't
have a `MainEntity` component.
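A sketch of that lookup (the import path for `MainEntity` is an
assumption; it ships alongside the world-sync types):
```rust
use bevy::prelude::*;
use bevy::render::world_sync::MainEntity; // path is an assumption

// Runs in the render world; unsynced entities won't match this query.
fn log_synced(query: Query<(Entity, &MainEntity)>) {
    for (render_entity, main_entity) in &query {
        // `render_entity` is the render-world id; `main_entity.id()` is the
        // corresponding main-world id.
        info!("render {render_entity:?} <- main {:?}", main_entity.id());
    }
}
```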
---------
Co-authored-by: re0312 <re0312@outlook.com>
Co-authored-by: re0312 <45868716+re0312@users.noreply.github.com>
Co-authored-by: Periwink <charlesbour@gmail.com>
Co-authored-by: Anselmo Sampietro <ans.samp@gmail.com>
Co-authored-by: Emerson Coskey <56370779+ecoskey@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com>
* Save 16 bytes per vertex by calculating tangents in the shader at
runtime, rather than storing them in the vertex data.
* Based on https://jcgt.org/published/0009/03/04,
https://www.jeremyong.com/graphics/2023/12/16/surface-gradient-bump-mapping.
* Fixed visbuffer resolve to use the updated algorithm that flips ddy
correctly
* Added some more docs about meshlet material limitations, and some
TODOs about transforming UV coordinates for the future.
For testing, add a normal map to the bunnies with `StandardMaterial` as
below, and then test on both main and this PR (make sure to download the
correct bunny for each). Results should be mostly identical.
```rust
normal_map_texture: Some(asset_server.load_with_settings(
    "textures/BlueNoise-Normal.png",
    |settings: &mut ImageLoaderSettings| settings.is_srgb = false,
)),
```
# Objective
- Fixes #6370
- Closes #6581
## Solution
- Added the following lints to the workspace:
- `std_instead_of_core`
- `std_instead_of_alloc`
- `alloc_instead_of_core`
- Used `cargo +nightly fmt` with [item level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Item%5C%3A)
to split all `use` statements into single items.
- Used `cargo clippy --workspace --all-targets --all-features --fix
--allow-dirty` to _attempt_ to resolve the new linting issues, and
intervened where the lint was unable to resolve the issue automatically
(usually due to needing an `extern crate alloc;` statement in a crate
root).
- Manually removed certain uses of `std` where negative feature gating
prevented `--all-features` from finding the offending uses.
- Used `cargo +nightly fmt` with [crate level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Crate%5C%3A)
to re-merge all `use` statements matching Bevy's previous styling.
- Manually fixed cases where the `fmt` tool could not re-merge `use`
statements due to conditional compilation attributes.
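As a concrete example, this is roughly the kind of change the lints push
for, and it compiles in an ordinary crate:
```rust
extern crate alloc;

use alloc::vec::Vec;           // instead of std::vec::Vec
use core::marker::PhantomData; // instead of std::marker::PhantomData

fn main() {
    let _marker: PhantomData<u32> = PhantomData;
    let _values: Vec<u8> = Vec::new();
}
```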
## Testing
- Ran CI locally
## Migration Guide
The MSRV is now 1.81. Please update to this version or higher.
## Notes
- This is a _massive_ change to try and push through, which is why I've
outlined the semi-automatic steps I used to create this PR, in case this
fails and someone else tries again in the future.
- Making this change has no impact on user code, but does mean Bevy
contributors will be warned to use `core` and `alloc` instead of `std`
where possible.
- This lint is a critical first step towards investigating `no_std`
options for Bevy.
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
[*Percentage-closer soft shadows*] are a technique from 2004 that allow
shadows to become blurrier farther from the objects that cast them. It
works by introducing a *blocker search* step that runs before the normal
shadow map sampling. The blocker search step detects the difference
between the depth of the fragment being rasterized and the depth of the
nearby samples in the depth buffer. Larger depth differences result in a
larger penumbra and therefore a blurrier shadow.
To enable PCSS, fill in the `soft_shadow_size` value in
`DirectionalLight`, `PointLight`, or `SpotLight`, as appropriate. This
shadow size value represents the size of the light and should be tuned
as appropriate for your scene. Higher values result in a wider penumbra
(i.e. blurrier shadows).
When using PCSS, temporal shadow maps
(`ShadowFilteringMethod::Temporal`) are recommended. If you don't use
`ShadowFilteringMethod::Temporal` and instead use
`ShadowFilteringMethod::Gaussian`, Bevy will use the same technique as
`Temporal`, but the result won't vary over time. This produces a rather
noisy result. Doing better would likely require downsampling the shadow
map, which would be complex and slower (and would require PR #13003 to
land first).
In addition to PCSS, this commit makes the near Z plane for the shadow
map configurable on a per-light basis. Previously, it had been hardcoded
to 0.1 meters. This change was necessary to make the point light shadow
map in the example look reasonable, as otherwise the shadows appeared
far too aliased.
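A minimal sketch of the two settings described above (this assumes
`soft_shadow_size` is an `Option<f32>`, and uses the bundle-style spawn):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn(DirectionalLightBundle {
        directional_light: DirectionalLight {
            shadows_enabled: true,
            // Light size: larger values give a wider, blurrier penumbra.
            soft_shadow_size: Some(3.0),
            // Previously hardcoded to 0.1 meters; now per-light.
            shadow_map_near_z: 0.5,
            ..default()
        },
        ..default()
    });
}
```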
A new example, `pcss`, has been added. It demonstrates the
percentage-closer soft shadow technique with directional lights, point
lights, spot lights, non-temporal operation, and temporal operation. The
assets are my original work.
Both temporal and non-temporal shadows are rather noisy in the example,
and, as mentioned before, this is unavoidable without downsampling the
depth buffer, which we can't do yet. Note also that the shadows don't
look particularly great for point lights; the example simply isn't an
ideal scene for them. Nevertheless, I felt that the benefits of the
ability to do a side-by-side comparison of directional and point lights
outweighed the unsightliness of the point light shadows in that example,
so I kept the point light feature in.
Fixes #3631.
[*Percentage-closer soft shadows*]:
https://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf
## Changelog
### Added
* Percentage-closer soft shadows (PCSS) are now supported, allowing
shadows to become blurrier as they stretch away from objects. To use
them, set the `soft_shadow_size` field in `DirectionalLight`,
`PointLight`, or `SpotLight`, as applicable.
* The near Z value for shadow maps is now customizable via the
`shadow_map_near_z` field in `DirectionalLight`, `PointLight`, and
`SpotLight`.
## Screenshots
(Screenshots: PCSS off vs. PCSS on.)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
- Fixes#15236
## Solution
- Use bevy_math::ops instead of std floating point operations.
## Testing
- Did you test these changes? If so, how?
Unit tests and `cargo run -p ci -- test`
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
Execute `cargo run -p ci -- test` on Windows.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
Windows
## Migration Guide
- Not a breaking change
- Projects should use `bevy_math::ops` where applicable
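For example, a sketch of the pattern this encourages:
```rust
use bevy_math::ops;

// bevy_math::ops wraps the floating-point functions so results are
// consistent across platforms (e.g. on Windows, per the testing notes).
fn wrap_angle(theta: f32) -> f32 {
    ops::atan2(ops::sin(theta), ops::cos(theta))
}
```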
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
The names of numerous rendering components in Bevy are inconsistent and
a bit confusing. Relevant names include:
- `AutoExposureSettings`
- `AutoExposureSettingsUniform`
- `BloomSettings`
- `BloomUniform` (no `Settings`)
- `BloomPrefilterSettings`
- `ChromaticAberration` (no `Settings`)
- `ContrastAdaptiveSharpeningSettings`
- `DepthOfFieldSettings`
- `DepthOfFieldUniform` (no `Settings`)
- `FogSettings`
- `SmaaSettings`, `Fxaa`, `TemporalAntiAliasSettings` (really
inconsistent??)
- `ScreenSpaceAmbientOcclusionSettings`
- `ScreenSpaceReflectionsSettings`
- `VolumetricFogSettings`
Firstly, there's a lot of inconsistency between `Foo`/`FooSettings` and
`FooUniform`/`FooSettingsUniform` and whether names are abbreviated or
not.
Secondly, the `Settings` post-fix seems unnecessary and a bit confusing
semantically, since it makes it seem like the component is mostly just
auxiliary configuration instead of the core *thing* that actually
enables the feature. This will be an even bigger problem once bundles
like `TemporalAntiAliasBundle` are deprecated in favor of required
components, as users will expect a component named `TemporalAntiAlias`
(or similar), not `TemporalAntiAliasSettings`.
## Solution
Drop the `Settings` post-fix from the component names, and change some
names to be more consistent.
- `AutoExposure`
- `AutoExposureUniform`
- `Bloom`
- `BloomUniform`
- `BloomPrefilter`
- `ChromaticAberration`
- `ContrastAdaptiveSharpening`
- `DepthOfField`
- `DepthOfFieldUniform`
- `DistanceFog`
- `Smaa`, `Fxaa`, `TemporalAntiAliasing` (note: we might want to change
to `Taa`, see "Discussion")
- `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflections`
- `VolumetricFog`
I kept the old names as deprecated type aliases to make migration a bit
less painful for users. We should remove them after the next release.
(And let me know if I should just... not add them at all)
I also added some very basic docs for a few types where they were
missing, like on `Fxaa` and `DepthOfField`.
## Discussion
- `TemporalAntiAliasing` is still inconsistent with `Smaa` and `Fxaa`.
Consensus [on
Discord](https://discord.com/channels/691052431525675048/743663924229963868/1280601167209955431)
seemed to be that renaming to `Taa` would probably be fine, but I think
it's a bit more controversial, and it would've required renaming a lot
of related types like `TemporalAntiAliasNode`,
`TemporalAntiAliasBundle`, and `TemporalAntiAliasPlugin`, so I think
it's better to leave to a follow-up.
- I think `Fog` should probably have a more specific name like
`DistanceFog` considering it seems to be distinct from `VolumetricFog`.
~~This should probably be done in a follow-up though, so I just removed
the `Settings` post-fix for now.~~ (done)
---
## Migration Guide
Many rendering components have been renamed for improved consistency and
clarity.
- `AutoExposureSettings` → `AutoExposure`
- `BloomSettings` → `Bloom`
- `BloomPrefilterSettings` → `BloomPrefilter`
- `ContrastAdaptiveSharpeningSettings` → `ContrastAdaptiveSharpening`
- `DepthOfFieldSettings` → `DepthOfField`
- `FogSettings` → `DistanceFog`
- `SmaaSettings` → `Smaa`
- `TemporalAntiAliasSettings` → `TemporalAntiAliasing`
- `ScreenSpaceAmbientOcclusionSettings` → `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflectionsSettings` → `ScreenSpaceReflections`
- `VolumetricFogSettings` → `VolumetricFog`
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Make the choice of diffuse indirect lighting explicit, instead of relying
on numerical conditions like `all(indirect_light == vec3(0.0f))`, which
may lead to unwanted light leakage.
## Solution
Use an explicit `found_diffuse_indirect` condition to indicate whether an
indirect lighting source was found.
## Testing
I have tested the examples `lightmaps`, `irradiance_volumes` and
`reflection_probes`; there are no visual changes. For further testing,
consider a "cave" scene with lightmaps and irradiance volumes. In the
cave there are some purely dark occluded areas; those dark areas will
sample the irradiance volume, and that can easily leak light.
# Objective
- Fixes #14974
## Solution
- Replace all* instances of `NonZero*` with `NonZero<*>`
## Testing
- CI passed locally.
---
## Notes
Within the `bevy_reflect` implementations for `std` types,
`impl_reflect_value!()` will continue to use the type aliases instead,
as it inappropriately parses the concrete type parameter as a generic
argument. If the `ZeroablePrimitive` trait was stable, or the macro
could be modified to accept a finite list of types, then we could fully
migrate.
# Objective
Fixes #14782
## Solution
Enable the lint and fix all resulting warnings (`--fix`). Also tried to
figure out the false positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
can already be merged, resulting in fewer files touched and less
potential for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import of it and adapt
those cases? At the current stage it's just a plain adoption of the
suggestions in order to have a base to discuss.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
# Objective
- It's possible to have errors in a draw command, but these errors are
ignored
## Solution
- Return a result with the error
## Changelog
- Renamed `RenderCommandResult::Failure` to `RenderCommandResult::Skip`
- Added a `reason` string parameter to `RenderCommandResult::Failure`
## Migration Guide
If you were using `RenderCommandResult::Failure` to just ignore an error
and retry later, use `RenderCommandResult::Skip` instead.
This wasn't intentional, but this PR should also help with
https://github.com/bevyengine/bevy/issues/12660 since we can turn a few
unwraps into error messages now.
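A self-contained sketch of the distinction (the enum here is a local
mirror of the shape described in the changelog, not the real import):
```rust
// Skip means "nothing to draw here"; Failure now carries a reason
// instead of being silently ignored.
enum RenderCommandResult {
    Success,
    Skip,
    Failure(&'static str),
}

fn draw(vertex_buffer: Option<&[u8]>, bind_group_ready: bool) -> RenderCommandResult {
    let Some(_buffer) = vertex_buffer else {
        return RenderCommandResult::Skip; // retry later; not an error
    };
    if !bind_group_ready {
        return RenderCommandResult::Failure("bind group not ready");
    }
    RenderCommandResult::Success
}
```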
---------
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
The "uberbuffers" PR #14257 caused some examples to fail intermittently
for different reasons:
1. `morph_targets` could fail because vertex displacements for morph
targets are keyed off the vertex index. With buffer packing, the vertex
index can vary based on the position in the buffer, which caused the
morph targets to be potentially incorrect. The solution is to include
the first vertex index with the `MeshUniform` (and `MeshInputUniform` if
GPU preprocessing is in use), so that the shader can calculate the true
vertex index before performing the morph operation. This results in
wasted space in `MeshUniform`, which is unfortunate, but we'll soon be
filling in the padding with the ID of the material when bindless
textures land, so this had to happen sooner or later anyhow.
Including the vertex index in the `MeshInputUniform` caused an ordering
problem. The `MeshInputUniform` was created during the extraction phase,
before the allocations occurred, so the extraction logic didn't know
where the mesh vertex data was going to end up. The solution is to move
the `MeshInputUniform` creation (the `collect_meshes_for_gpu_building`
system) to after the allocations phase. This should be better for
parallelism anyhow, because it allows the extraction phase to finish
quicker. It's also something we'll have to do for bindless in any event.
2. The `lines` and `fog_volumes` examples could fail because their
custom drawing nodes weren't updated to supply the vertex and index
offsets in their `draw_indexed` and `draw` calls. This commit fixes this
oversight.
Fixes #14366.
Switches `Msaa` from being a globally configured resource to a per
camera view component.
Closes #7194
# Objective
Allow individual views to describe their own MSAA settings. For example,
when rendering to different windows or to different parts of the same
view.
## Solution
Make `Msaa` a component that is required on all camera bundles.
## Testing
Ran a variety of examples to ensure that nothing broke.
TODO:
- [ ] Make sure Android still works per previous comment in
`extract_windows`.
---
## Migration Guide
`Msaa` is no longer configured as a global resource, and should be
specified on each spawned camera if a non-default setting is desired.
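A sketch of the migration (overriding the default on one camera entity):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Msaa now lives on the camera entity; override it per view.
    commands.spawn(Camera3dBundle::default()).insert(Msaa::Off);
}
```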
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Fixes: https://github.com/bevyengine/bevy/issues/14036
## Solution
- Add a world space transformation for the environment sample direction.
## Testing
- I have tested the new `transform` field using the newly added
`rotate_environment_map` example.
https://github.com/user-attachments/assets/2de77c65-14bc-48ee-b76a-fb4e9782dbdb
## Migration Guide
- Since we have added a new field to the `EnvironmentMapLight` struct,
users will need to include `..default()` or set a rotation value in their
initialization code.
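A sketch of what initialization looks like after this change (asset
paths and the intensity value are hypothetical):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle::default(),
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
            intensity: 900.0,
            // The new rotation field is covered by `..default()` if unused.
            ..default()
        },
    ));
}
```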
This commit uses the [`offset-allocator`] crate to combine vertex and
index arrays from different meshes into single buffers. Since the
primary source of `wgpu` overhead is from validation and synchronization
when switching buffers, this significantly improves Bevy's rendering
performance on many scenes.
This patch is a more flexible version of #13218, which also used slabs.
Unlike #13218, which used slabs of a fixed size, this commit implements
slabs that start small and can grow. In addition to reducing memory
usage, supporting slab growth reduces the number of vertex and index
buffer switches that need to happen during rendering, leading to
improved performance. To prevent pathological fragmentation behavior,
slabs are capped to a maximum size, and mesh arrays that are too large
get their own dedicated slabs.
As an additional improvement over #13218, this commit allows the
application to customize all allocator heuristics. The
`MeshAllocatorSettings` resource contains values that adjust the minimum
and maximum slab sizes, the cutoff point at which meshes get their own
dedicated slabs, and the rate at which slabs grow. Hopefully-sensible
defaults have been chosen for each value.
Unfortunately, WebGL 2 doesn't support the *base vertex* feature, which
is necessary to pack vertex arrays from different meshes into the same
buffer. `wgpu` represents this restriction as the downlevel flag
`BASE_VERTEX`. This patch detects that bit and ensures that all vertex
buffers get dedicated slabs on that platform. Even on WebGL 2, though,
we can combine all *index* arrays into single buffers to reduce buffer
changes, and we do so.
The following measurements are on Bistro:
Overall frame time improves from 8.74 ms to 5.53 ms (1.58x speedup),
render system time from 6.57 ms to 3.54 ms (1.86x), and opaque pass time
from 4.64 ms to 2.33 ms (1.99x).
## Migration Guide
### Changed
* Vertex and index buffers for meshes may now be packed alongside other
buffers, for performance.
* `GpuMesh` has been renamed to `RenderMesh`, to reflect the fact that
it no longer directly stores handles to GPU objects.
* Because meshes no longer have their own vertex and index buffers, the
responsibility for the buffers has moved from `GpuMesh` (now called
`RenderMesh`) to the `MeshAllocator` resource. To access the vertex data
for a mesh, use `MeshAllocator::mesh_vertex_slice`. To access the index
data for a mesh, use `MeshAllocator::mesh_index_slice`.
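A sketch of the new access pattern (the import path and the exact shape
of the returned slice type are assumptions):
```rust
use bevy::prelude::*;
use bevy::render::mesh::allocator::MeshAllocator; // path is an assumption

// Render-world system: mesh buffers are now slices of shared slabs.
fn inspect_mesh_buffers(allocator: Res<MeshAllocator>, meshes: Query<&Handle<Mesh>>) {
    for mesh_handle in &meshes {
        if let Some(_slice) = allocator.mesh_vertex_slice(&mesh_handle.id()) {
            // `_slice` names the shared buffer plus the range this mesh occupies.
        }
    }
}
```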
[`offset-allocator`]: https://github.com/pcwalton/offset-allocator
# Objective
- After #13894, I noticed the performance of `many_lights` dropped from
120+ FPS to 60+. I reviewed the PR but couldn't identify any mistakes.
After profiling, I discovered that `HashMap::clone` was very slow when
it's not empty, causing `extract_light` to increase from 3ms to 8ms.
- Lighting only checks visibility for 3D meshes. We don't need to
maintain a `TypeIdMap` for this, as it not only impacts performance
negatively but also reduces ergonomics.
## Solution
- Use `VisibleMeshEntities` for lighting visibility checking.
## Performance
`cargo run --release --example many_lights --features bevy/trace_tracy`
Tracy spans: `system{name="bevy_pbr::light::check_point_light_mesh_visibility"}`
and `system{name="bevy_pbr::render::light::extract_lights"}`.
## Migration Guide
`SpotLightBundle`, `CascadesVisibleEntities`, and `CubemapVisibleEntities`
now use `VisibleMeshEntities` instead of `VisibleEntities`.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Bevy currently has a lot of invalid intra-doc links, let's fix them!
- Also make CI test them, to avoid future regressions.
- Helps with #1983 (but doesn't fix it, as there could still be explicit
links to docs.rs that are broken)
## Solution
- Make `cargo r -p ci -- doc-check` check fail on warnings (could also
be changed to just some specific lints)
- Manually fix all the warnings (note that in some cases it was unclear
to me what the fix should have been, I'll try to highlight them in a
self-review)
# Objective
- Standard Material is starting to run out of samplers (currently uses
13 with no additional features enabled; I think in 0.13 it was 12).
- This change adds a new feature switch, modelled on the other ones
which add features to Standard Material, to turn off the new anisotropy
feature by default.
## Solution
- feature + texture define
## Testing
- Anisotropy example still works fine
- Other samples work fine
- Standard Material now takes 12 samplers by default on my Mac instead
of 13
## Migration Guide
- Add the `pbr_anisotropy_texture` feature if you are using that texture
in any standard materials.
---------
Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
# Objective
- Fixes #14059
- `morphed_skinned_mesh_layout` is the same as
`morphed_skinned_motion_mesh_layout` but shouldn't have the skin/morph
data from the previous frame, as that data is only used for motion vectors
## Solution
- Remove the extra entries
## Testing
- Ran with the glTF file reproducing #14059; it works
As reported in #14004, many third-party plugins, such as Hanabi, enqueue
entities that don't have meshes into render phases. However, the
introduction of indirect mode added a dependency on mesh-specific data,
breaking this workflow. This is because GPU preprocessing requires that
the render phases manage indirect draw parameters, which don't apply to
objects that aren't meshes. The existing code skips over binned entities
that don't have indirect draw parameters, which causes the rendering to
be skipped for such objects.
To support this workflow, this commit adds a new field,
`non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple
list of (bin key, entity) pairs. After drawing batchable and unbatchable
objects, the non-mesh items are drawn one after another. Bevy itself
doesn't enqueue any items into this list; it exists solely for the
application and/or plugins to use.
Additionally, this commit switches the asset ID in the standard bin keys
to be an untyped asset ID rather than that of a mesh. This allows more
flexibility, allowing bins to be keyed off any type of asset.
This patch adds a new example, `custom_phase_item`, which simultaneously
serves to demonstrate how to use this new feature and to act as a
regression test so this doesn't break again.
Fixes #14004.
## Changelog
### Added
* `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins
to add custom items to.
# Objective
- Fixes #13728
## Solution
- Add a new feature, `smaa_luts`. If enabled, it also enables `ktx2` and
`zstd`. If not, it doesn't load the files but uses placeholders instead.
- Add all the needed resources in the same places that the systems that
use them are added.
# Objective
- Mikktspace requires that we normalize world normals/tangents _before_
interpolation across vertices, and then do _not_ normalize after. I had
it backwards.
- We do not (are not supposed to?) need a second set of barycentrics for
motion vectors. If you think about the typical raster pipeline, in the
vertex shader we calculate previous_world_position, and then it gets
interpolated using the current triangle's barycentrics.
## Solution
- Fix normal/tangent processing
- Reuse barycentrics for motion vector calculations
- Not implementing this for 0.14, but long term I aim to remove explicit
vertex tangents and calculate them in the shader on the fly.
## Testing
- I tested out some of the normal maps we have in repo. Didn't seem to
make a difference, but mikktspace is all about correctness across
various baking tools. I probably just didn't have any of the ones that
would cause it to break.
- Didn't test motion vectors as there's a known bug with the depth
buffer and meshlets that I'm waiting on the render graph rewrite to fix.
# Objective
- If the fog is disabled it still generates a useless branch which can
hurt performance
## Solution
- Make the flag a shader_def instead
## Testing
- I tested enabling/disabling fog works as expected per-material in the
fog example
- I also tested that scenes that don't add the FogSettings resource
still work correctly
## Review notes
I'm not sure how to handle the removed material flag. Right now I just
commented it out and added a note to reuse it instead of creating a new
one.
# Objective
One thing missing from the new `Color` implementation in 0.14 is the
ability to easily convert to a u8 representation of the RGB color.
(Note: this is a redo of PR https://github.com/bevyengine/bevy/pull/13739,
as I needed to move the source branch.)
## Solution
I have added `to_u8_array` and `to_u8_array_no_alpha` to a new trait
called `ColorToPacked`, to mirror the f32 conversions in
`ColorToComponents`, and implemented the new trait for `Srgba` and
`LinearRgba`.
To go with those I also added matching `from_u8...` functions and
converted a couple of cases that used ad-hoc implementations of that
conversion to use these.
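A sketch of the resulting API, as described above (exact rounding of the
byte values is assumed):
```rust
use bevy::color::{ColorToPacked, Srgba};

fn demo() {
    let color = Srgba::new(1.0, 0.5, 0.25, 1.0);
    let rgba: [u8; 4] = color.to_u8_array(); // e.g. [255, 128, 64, 255]
    let rgb: [u8; 3] = color.to_u8_array_no_alpha();
    let _restored = Srgba::from_u8_array(rgba);
    let _ = rgb;
}
```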
After discussion on Discord of the experience of using the API, I renamed
`Color::linear` to `Color::to_linear`, as without that it looks like a
constructor (like `Color::rgb`).
I also added `to_srgba`, which is the other commonly converted-to type of
color (for UI and 2D), to match `to_linear`.
Removed a redundant implementation of `to_f32_array` for `LinearRgba`,
as it is also supplied in `ColorToComponents` (I'm surprised that's
allowed?).
## Testing
Ran all tests and manually tested.
Added `to_and_from_u8` to `linear_rgba::tests`.
## Changelog
The visible change is that `Color::linear` becomes `Color::to_linear`.
---------
Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
# Objective
- apply_normal_mapping was changed to use TBN but the pbr_prepass was
not updated for that change
## Solution
- Update the pbr_prepass to correctly apply normal mapping
We want to use the clustering infrastructure for light probes and decals
as well, not just point lights. This patch builds on top of #13640 and
performs the rename.
To make this series easier to review, this patch makes no code changes.
Only identifiers and comments are modified.
## Migration Guide
* In the PBR shaders, `point_lights` is now known as
`clusterable_objects`, `PointLight` is now known as `ClusterableObject`,
and `cluster_light_index_lists` is now known as
`clusterable_object_index_lists`.
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.
[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.
Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.
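A sketch of the two parameters on a material (the rotation is assumed to
be in radians):
```rust
use bevy::prelude::*;

fn brushed_metal(mut materials: ResMut<Assets<StandardMaterial>>) {
    materials.add(StandardMaterial {
        metallic: 1.0,
        perceptual_roughness: 0.4,
        anisotropy_strength: 0.8, // 0..=1: how stretched the highlight gets
        anisotropy_rotation: 0.0, // direction relative to the mesh tangent
        ..default()
    });
}
```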
In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.
The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.
A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.
[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel
[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering
[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md
[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/
## Changelog
### Added
* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.
## Screenshots
(Screenshots: with vs. without anisotropy.)
# Objective
- Fixes #10909
- Fixes #8492
## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.
---
## Changelog
- Renamed matrices across the engine to follow an `x_from_y` naming,
making the space conversion more obvious.
## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
As a prerequisite for decals and clustering of light probes, we want
clustering to operate on objects other than lights. (Currently, it only
operates on point and spot lights.) This necessitates a large
refactoring, so I'm splitting it up into small steps.
The first such step is to separate clustering from lighting by moving
clustering-related types and functions out of lighting and into their
own module subtree within the `bevy_pbr` crate. (Ultimately, we may want
to move it to `bevy_render`, but that requires more work and can be a
followup.)
No code changes have been made other than adjusting import lists and
moving code. This is to make this code easy to review. Ultimately, I
want to rename "light" to "clusterable object" in most cases, but doing
that at the same time as moving the code would make reviewing harder. So
instead I'm moving the code first and will follow this up with renaming.
## Migration Guide
* Clustering-related types and functions (e.g.
`assign_lights_to_clusters`) have moved under `bevy_pbr::cluster`, in
preparation for the ability to cluster objects other than lights.
This is a revamped equivalent to #9902, though it shares none of the
code. It handles all special cases that I've tested correctly.
The overall technique consists of double-buffering the joint matrix and
morph weights buffers, as most of the previous attempts to solve this
problem did. The process is generally straightforward. Note that, to
avoid regressing the ability of mesh extraction, skin extraction, and
morph target extraction to run in parallel, I had to add a new system to
rendering, `set_mesh_motion_vector_flags`. The comment there explains
the details; it generally runs very quickly.
I've tested this with modified versions of the `animated_fox`,
`morph_targets`, and `many_foxes` examples that add TAA, and the patch
works. To avoid bloating those examples, I didn't add switches for TAA
to them.
Addresses points (1) and (2) of #8423.
## Changelog
### Fixed
* Motion vectors, and therefore TAA, are now supported for meshes with
skins and/or morph targets.
Fixes #13118.
If you use `Sprite` or `Mesh2d` and create a `Camera` with
* `hdr: false`
* any tonemapper
you would get
```
wgpu error: Validation Error
Caused by:
In Device::create_render_pipeline
note: label = `sprite_pipeline`
Error matching ShaderStages(FRAGMENT) shader requirements against the pipeline
Shader global ResourceBinding { group: 0, binding: 19 } is not available in the pipeline layout
Binding is missing from the pipeline layout
```
Because of missing tonemapping LUT bindings
## Solution
Add missing bindings for tonemapping LUT's to `SpritePipeline` &
`Mesh2dPipeline`
## Testing
I checked that
* `tonemapping`
* `color_grading`
* `sprite_animations`
* `2d_shapes`
* `meshlet`
* `deferred_rendering`
examples are still working
I checked the 2D cases with this code:
```rust
use bevy::{
    color::palettes::css::PURPLE, core_pipeline::tonemapping::Tonemapping, prelude::*,
    sprite::MaterialMesh2dBundle,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, toggle_tonemapping_method)
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<ColorMaterial>>,
    asset_server: Res<AssetServer>,
) {
    commands.spawn(Camera2dBundle {
        camera: Camera {
            hdr: false,
            ..default()
        },
        tonemapping: Tonemapping::BlenderFilmic,
        ..default()
    });
    commands.spawn(MaterialMesh2dBundle {
        mesh: meshes.add(Rectangle::default()).into(),
        transform: Transform::default().with_scale(Vec3::splat(128.)),
        material: materials.add(Color::from(PURPLE)),
        ..default()
    });
    commands.spawn(SpriteBundle {
        texture: asset_server.load("asd.png"),
        ..default()
    });
}

fn toggle_tonemapping_method(
    keys: Res<ButtonInput<KeyCode>>,
    mut tonemapping: Query<&mut Tonemapping>,
) {
    let mut method = tonemapping.single_mut();
    if keys.just_pressed(KeyCode::Digit1) {
        *method = Tonemapping::None;
    } else if keys.just_pressed(KeyCode::Digit2) {
        *method = Tonemapping::Reinhard;
    } else if keys.just_pressed(KeyCode::Digit3) {
        *method = Tonemapping::ReinhardLuminance;
    } else if keys.just_pressed(KeyCode::Digit4) {
        *method = Tonemapping::AcesFitted;
    } else if keys.just_pressed(KeyCode::Digit5) {
        *method = Tonemapping::AgX;
    } else if keys.just_pressed(KeyCode::Digit6) {
        *method = Tonemapping::SomewhatBoringDisplayTransform;
    } else if keys.just_pressed(KeyCode::Digit7) {
        *method = Tonemapping::TonyMcMapface;
    } else if keys.just_pressed(KeyCode::Digit8) {
        *method = Tonemapping::BlenderFilmic;
    }
}
```
---
## Changelog
Fixed the bug that led to a crash when using any tonemapper without HDR
for rendering sprites and 2D meshes.
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).
For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).
To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
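Putting that together, a sketch of a camera with SSR enabled (import
paths are assumptions):
```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsBundle; // path is an assumption
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Bundles ScreenSpaceReflectionsSettings with sensible defaults.
        ScreenSpaceReflectionsBundle::default(),
        DepthPrepass,
        DeferredPrepass, // SSR currently requires the deferred renderer
    ));
}
```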
A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).
## Changelog
### Added
* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflectionsSettings` component to a camera.
Deferred rendering must be enabled for the reflections to appear.
This commit makes us stop using the render world ECS for
`BinnedRenderPhase` and `SortedRenderPhase` and instead use resources
with `EntityHashMap`s inside. There are three reasons to do this:
1. We can use `clear()` to clear out the render phase collections
instead of recreating the components from scratch, allowing us to reuse
allocations.
2. This is a prerequisite for retained bins, because components can't be
retained from frame to frame in the render world, but resources can.
3. We want to move away from storing anything in components in the
render world ECS, and this is a step in that direction.
This patch results in a small performance benefit, due to point (1)
above.
## Changelog
### Changed
* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources.
## Migration Guide
* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources. Instead of querying for the
components, look the camera entity up in the
`ViewBinnedRenderPhases`/`ViewSortedRenderPhases` tables.
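A sketch of the new lookup (the phase item type and query shape are
assumptions):
```rust
use bevy::core_pipeline::core_3d::Opaque3d;
use bevy::prelude::*;
use bevy::render::render_phase::ViewBinnedRenderPhases;
use bevy::render::view::ExtractedView;

// Render-world system: phases are now looked up by camera entity.
fn inspect_phases(
    phases: Res<ViewBinnedRenderPhases<Opaque3d>>,
    views: Query<Entity, With<ExtractedView>>,
) {
    for view_entity in &views {
        if let Some(_phase) = phases.get(&view_entity) {
            // Previously this was a BinnedRenderPhase component on the view.
        }
    }
}
```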
Commit 3f5a090b1b added a reference to
`STANDARD_MATERIAL_FLAGS_BASE_COLOR_UV_BIT`, a nonexistent identifier,
in the alpha discard portion of the prepass shader. Moreover, the logic
didn't make sense to me. I think the code was trying to choose between
the two UV sets depending on which is present, so I made it do that.
I noticed this when trying Bistro with #13277. I'm not sure why this
issue didn't manifest itself before, but it's clearly a bug, so here's a
fix. We should probably merge this before 0.14.
# Objective
- The volumetric fog PR originally needed to be modified to use
`.view_layouts` but that was changed in another PR. The merge with main
still kept those around.
## Solution
- Remove them because they aren't necessary
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.
To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
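A sketch of the setup described above (import paths are assumptions):
```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight}; // paths assumed
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Fog is configured on the camera...
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));
    // ...and only shadow-mapped lights tagged VolumetricLight scatter into it.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```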
The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.
Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:
1. Raymarching at a lower resolution.
2. A post-processing blur (especially useful when combined with (1)).
3. Supporting point lights and spot lights.
4. Supporting lights with no shadow maps.
5. Supporting irradiance volumes and reflection probes.
6. Voxel components that reuse the volumetric fog code to create voxel
shapes.
7. *Horizon: Zero Dawn*-style clouds.
These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.
A new example, `volumetric_fog`, has been added to demonstrate the
effect.
## Changelog
### Added
* A new component, `VolumetricFogSettings`, is available, to allow for a
more physically-accurate, but more resource-intensive, form of fog.
* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFogSettings`. Notably, this
allows such lights to emit light shafts/god rays.
[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/
[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
# Objective
Remove the limit on `RenderLayers` by using a growable mask backed by
`SmallVec`.
Changes adopted from @UkoeHB's initial PR here
https://github.com/bevyengine/bevy/pull/12502 that contained additional
changes related to propagating render layers.
## Solution
The main thing needed to unblock this is removing `RenderLayers` from
our shader code. This primarily affects `DirectionalLight`. We are now
computing a `skip` field on the CPU that is then used to skip the light
in the shader.
## Testing
Checked a variety of examples and did a quick benchmark on `many_cubes`.
There were some existing problems identified during the development of
the original pr (see:
https://discord.com/channels/691052431525675048/1220477928605749340/1221190112939872347).
This PR shouldn't change any existing behavior besides removing the
layer limit (sans the comment in migration about `all` layers no longer
being possible).
---
## Changelog
Removed the limit on `RenderLayers` by using a growable bitset that only
allocates when layers greater than 64 are used.
## Migration Guide
- `RenderLayers::all()` no longer exists. Entities expecting to be
visible on all layers, e.g. lights, should compute the active layers
that are in use.
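A sketch of what the growable mask enables (layers above 64 were
previously unrepresentable):
```rust
use bevy::render::view::RenderLayers;

fn demo() {
    // The mask only allocates once a layer above 64 is used.
    let layers = RenderLayers::from_layers(&[0, 1, 100]);
    assert!(layers.intersects(&RenderLayers::layer(100)));
}
```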
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>