Commit Graph

9 Commits

Gino Valente
9b32e09551
bevy_reflect: Add clone registrations project-wide (#18307)
# Objective

Now that #13432 has been merged, it's important we update our reflected
types to properly opt into this feature. If we do not, then this could
cause issues for users downstream who want to make use of
reflection-based cloning.

## Solution

This PR is broken into 4 commits:

1. Add `#[reflect(Clone)]` on all types marked `#[reflect(opaque)]` that
are also `Clone`. This is mandatory as these types would otherwise cause
the cloning operation to fail for any type that contains them at any
depth.
2. Update the reflection example to suggest adding `#[reflect(Clone)]`
on opaque types.
3. Add `#[reflect(clone)]` attributes on all fields marked
`#[reflect(ignore)]` that are also `Clone`. This prevents the ignored
field from causing the cloning operation to fail.
   
Note that some of the types that contain these fields are also `Clone`,
and thus can be marked `#[reflect(Clone)]`, which makes the field-level
`#[reflect(clone)]` attribute redundant. However, I think it's safer to
keep the attribute in case the `Clone` impl/derive is ever removed. I'm
open to removing them, though, if people disagree.
4. Finally, I added `#[reflect(Clone)]` on all types that are also
`Clone`. While not strictly necessary, it enables us to reduce the
generated output since we can just call `Clone::clone` directly instead
of calling `PartialReflect::reflect_clone` on each variant/field. It
also means we benefit from any optimizations or customizations made in
the `Clone` impl, including directly dereferencing `Copy` values and
incrementing reference counts.

Along with that change, I also added any missing registrations that
could be applied to each type, such as `Default`, `PartialEq`, and
`Hash` (the attribute combinations are sketched below). There were
hundreds of these to edit, though, so it's possible I missed quite a
few.
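
A minimal sketch of how these attribute combinations fit together,
assuming the `bevy_reflect` derive attributes described above (the type
names here are illustrative, not from the PR):

```rust
use bevy_reflect::Reflect;

// Commit 1: an opaque type that is `Clone` must register it, or
// reflection-based cloning fails for any type containing it at any depth.
#[derive(Reflect, Clone)]
#[reflect(opaque)]
#[reflect(Clone)]
struct OpaqueId(u64);

// Commit 3: the field-level `#[reflect(clone)]` lets the ignored field fall
// back to `Clone::clone` instead of failing the whole cloning operation.
// Commit 4: the container-level `#[reflect(Clone)]` lets reflection call
// `Clone::clone` directly instead of cloning field by field.
#[derive(Reflect, Clone)]
#[reflect(Clone)]
struct Tagged {
    id: OpaqueId,
    #[reflect(ignore, clone)]
    scratch: u64,
}
```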

That last commit is **_massive_**. There were nearly 700 types to
update, so it's recommended to review the first three commits before
moving on to the last one.

Additionally, I can break the last commit off into its own PR or into
smaller PRs, but I figured this would be the easiest way of doing it
(and in a timely manner since I unfortunately don't have as much time as
I used to for code contributions).

## Testing

You can test locally with a `cargo check`:

```
cargo check --workspace --all-features
```
2025-03-17 18:32:35 +00:00
Patrick Walton
4880a231de
Implement occlusion culling for directional light shadow maps. (#17951)
Two-phase occlusion culling can be helpful for shadow maps just as it
can for a prepass, in order to reduce vertex and alpha mask fragment
shading overhead. This patch implements occlusion culling for shadow
maps from directional lights, when the `OcclusionCulling` component is
present on the entities containing the lights. Shadow maps from point
lights are deferred to a follow-up patch. Much of this patch involves
expanding the hierarchical Z-buffer to cover shadow maps in addition to
standard view depth buffers.

The `scene_viewer` example has been updated to add `OcclusionCulling` to
the directional light that it creates.
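
As a rough sketch of the opt-in (assuming the experimental import path
of this era):

```rust
use bevy::prelude::*;
use bevy::render::experimental::occlusion_culling::OcclusionCulling;

fn setup(mut commands: Commands) {
    // Attaching `OcclusionCulling` to the light entity opts its shadow map
    // into two-phase occlusion culling.
    commands.spawn((
        DirectionalLight {
            shadows_enabled: true,
            ..default()
        },
        OcclusionCulling,
    ));
}
```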

This improved the performance of the rend3 sci-fi test scene when
enabling shadows.
2025-02-21 05:56:15 +00:00
Patrick Walton
0517b9621b
Fix motion vector computation after #17688. (#17717)
PR #17688 broke motion vector computation, and therefore motion blur,
because it enabled retention of `MeshInputUniform`s, and
`MeshInputUniform`s contain the indices of the previous frame's
transform and the previous frame's skinned mesh joint matrices. On frame
N, if a `MeshInputUniform` is retained on GPU from the previous frame,
the `previous_input_index` and `previous_skin_index` would refer to the
indices for frame N - 2, not the indices for frame N - 1.

This patch fixes these problems in two different ways, one for
transforms and one for skins:

1. To fix transforms, this patch supplies the *frame index* to the
shader as part of the view uniforms, and specifies which frame index
each mesh's previous transform refers to. So, in the situation described
above, the frame index would be N, the previous frame index would be N -
1, and the `previous_input_frame_number` would be N - 2. The shader can
now detect this situation and infer that the mesh has been retained, and
can therefore conclude that the mesh's transform hasn't changed (see
the sketch below).

2. To fix skins, this patch replaces the explicit `previous_skin_index`
with an invariant that the index of the joints for the current frame and
the index of the joints for the previous frame are the same. This means
that the `MeshInputUniform` never has to be updated even if the skin is
animated. The downside is that we have to copy joint matrices from the
previous frame's buffer to the current frame's buffer in
`extract_skins`.

The rationale behind (2) is that we currently have no mechanism to
detect when joints that affect a skin have been updated, short of
comparing all the transforms and setting a flag for
`extract_meshes_for_gpu_building` to consume, which would regress
performance as we want `extract_skins` and
`extract_meshes_for_gpu_building` to be able to run in parallel.
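
A sketch of the frame-index check from (1), written here as Rust
pseudocode even though the real test lives in the mesh shader; the
function and its signature are illustrative:

```rust
use glam::Mat4;

// If the `MeshInputUniform` was written last frame, its previous transform
// is valid; otherwise it was retained on GPU and the transform is unchanged.
fn effective_previous_transform<'a>(
    frame_number: u32,
    previous_input_frame_number: u32,
    previous: &'a Mat4,
    current: &'a Mat4,
) -> &'a Mat4 {
    if previous_input_frame_number == frame_number.wrapping_sub(1) {
        previous
    } else {
        current
    }
}
```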

To test this change, use `cargo run --example motion_blur`.
2025-02-18 09:34:19 +00:00
Patrick Walton
8976a45199
Retain skins from frame to frame. (#17818)
Currently, Bevy rebuilds the buffer containing all the transforms for
joints every frame, during the extraction phase. This is inefficient in
cases in which many skins are present in the scene and their joints
don't move, such as the Caldera test scene.

To address this problem, this commit switches skin extraction to use a
set of retained GPU buffers with allocations managed by the offset
allocator. I use fine-grained change detection in order to determine
which skins need updating. Note that the granularity is at the level of
an entire skin, not individual joints; change detection at the
individual-joint level would perform poorly in the common case in which
an entire skin is animated at once. Also, this patch yields additional
performance from the fact that changing joint transforms no longer
requires the skinned mesh to be re-extracted.

Note that this optimization can be a double-edged sword. In
`many_foxes`, fine-grained change detection regressed the performance of
`extract_skins` by 3.4x. This is because every joint is updated every
frame in that example, so change detection is pointless and is pure
overhead. Because the `many_foxes` workload is actually representative
of animated scenes, this patch includes a heuristic that disables
fine-grained change detection if the number of transformed entities in
the frame exceeds a certain fraction of the total number of joints.
Currently, this threshold is set to 25%. Note that this is a crude
heuristic, because it doesn't distinguish between the number of
transformed *joints* and the number of transformed *entities*; however,
it should be good enough to yield the optimum code path most of the
time.
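
A sketch of that heuristic (the constant and function names are
illustrative, not Bevy's actual internals):

```rust
/// Fraction of the total number of joints above which fine-grained change
/// detection is disabled; this patch sets it to 25%.
const CHANGE_DETECTION_THRESHOLD: f64 = 0.25;

fn use_fine_grained_change_detection(
    transformed_entity_count: usize,
    total_joint_count: usize,
) -> bool {
    // When most joints move every frame (e.g. `many_foxes`), change
    // detection is pure overhead, so re-upload everything instead.
    (transformed_entity_count as f64)
        <= (total_joint_count as f64) * CHANGE_DETECTION_THRESHOLD
}
```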

Finally, this patch fixes a bug whereby skinned meshes were incorrectly
retained if the buffer offsets of their joints changed from frame to
frame. To fix this without
retaining skins, we would have to re-extract every skinned mesh every
frame. Doing this was a significant regression on Caldera. With this PR,
by contrast, mesh joints stay at the same buffer offset, so we don't
have to update the `MeshInputUniform` containing the buffer offset every
frame. This also makes PR #17717 easier to implement, because that PR
uses the buffer offset from the previous frame, and the logic for
calculating that is simplified if the previous frame's buffer offset is
guaranteed to be identical to that of the current frame.

On Caldera, this patch reduces the time spent in `extract_skins` from
1.79 ms to near zero. On `many_foxes`, this patch regresses the
performance of `extract_skins` by approximately 10%-25%, depending on
the number of foxes. This has only a small impact on frame rate.
2025-02-18 00:56:04 +00:00
Patrick Walton
8f36106f9e
Split out the IndirectParametersMetadata into CPU-populated and GPU-populated buffers. (#17863)
The GPU can fill out many of the fields in `IndirectParametersMetadata`
using information it already has:

* `early_instance_count` and `late_instance_count` are always
initialized to zero.

* `mesh_index` is already present in the work item buffer as the
`input_index` of the first work item in each batch.

This patch moves these fields to a separate buffer, the *GPU indirect
parameters metadata* buffer. That way, it avoids having to write them on
CPU during `batch_and_prepare_binned_render_phase`. This effectively
reduces the number of bits that function must write per mesh from
160 to 64 (in addition to the 64 bits per mesh *instance*).
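
A sketch of the resulting split; the GPU-side field names follow the
text above, while the CPU-side fields and exact layouts are assumptions:

```rust
// CPU side: still written by `batch_and_prepare_binned_render_phase`,
// now only 64 bits per mesh.
#[repr(C)]
struct IndirectParametersCpuMetadata {
    base_output_index: u32,
    batch_set_index: u32,
}

// GPU side: filled in by the preprocessing shader from data it already has.
#[repr(C)]
struct IndirectParametersGpuMetadata {
    mesh_index: u32,           // the first work item's `input_index`
    early_instance_count: u32, // always initialized to zero
    late_instance_count: u32,  // always initialized to zero
}
```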

Additionally, this PR refactors `UntypedPhaseIndirectParametersBuffers`
to add another layer, `MeshClassIndirectParametersBuffers`, which allows
abstracting over the buffers corresponding to indexed and non-indexed
meshes. This patch doesn't make much use of this abstraction, but
forthcoming patches will, and it's overall a cleaner approach.

This didn't seem to have much of an effect by itself on
`batch_and_prepare_binned_render_phase` time, but subsequent PRs
dependent on this PR yield roughly a 2× speedup.
2025-02-18 00:53:44 +00:00
charlotte
a861452d68
Add user supplied mesh tag (#17648)
# Objective

Because of mesh preprocessing, users cannot rely on
`@builtin(instance_index)` in order to reference external data, as the
instance index is not stable, either from frame to frame or relative to
the total spawn order of mesh instances.

## Solution

Add a user supplied mesh index that can be used for referencing external
data when drawing instanced meshes.

Closes #13373
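
A sketch of the intended usage, assuming the component is named
`MeshTag` and is importable from `bevy::render::mesh`:

```rust
use bevy::prelude::*;
use bevy::render::mesh::MeshTag;

fn spawn_tagged(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    // The tag is stable per entity, unlike `@builtin(instance_index)`, so a
    // shader can use it to index external data.
    commands.spawn((
        Mesh3d(meshes.add(Cuboid::default())),
        MeshMaterial3d(materials.add(Color::WHITE)),
        MeshTag(42),
    ));
}
```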

## Testing

Benchmarked `many_cubes` showing no difference in total frame time.

## Showcase



https://github.com/user-attachments/assets/80620147-aafc-4d9d-a8ee-e2149f7c8f3b

---------

Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
2025-02-10 22:38:13 +00:00
JMS55
669d139c13
Upgrade to wgpu v24 (#17542)
Didn't remove `WgpuWrapper`; not sure whether it's still needed.

## Testing

- Tested these changes via the example runner.
- Parts that need more testing: Web (possibly the portable atomics
path) and DXC.

## Migration Guide
- Bevy has upgraded to [wgpu
v24](https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v2400-2025-01-15).
- When using the DirectX 12 rendering backend, the new priority system
for choosing a shader compiler is as follows (sketched after this
list):
- If the `WGPU_DX12_COMPILER` environment variable is set at runtime, it
is used.
- Else, if the new `statically-linked-dxc` feature is enabled, a custom
version of DXC is statically linked into your app at compile time.
- Else, Bevy looks in the app's working directory for `dxcompiler.dll`
and `dxil.dll` at runtime.
- Else, if those DLLs are missing, Bevy falls back to FXC (not
recommended).
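
A hypothetical sketch of that selection chain (the types and names are
illustrative, not wgpu's or Bevy's actual API):

```rust
enum Dx12Compiler {
    FromEnv(String), // `WGPU_DX12_COMPILER` override
    StaticDxc,       // `statically-linked-dxc` feature
    DynamicDxc,      // dxcompiler.dll + dxil.dll in the working directory
    Fxc,             // fallback, not recommended
}

fn choose_dx12_compiler(statically_linked_dxc: bool, dlls_present: bool) -> Dx12Compiler {
    if let Ok(value) = std::env::var("WGPU_DX12_COMPILER") {
        Dx12Compiler::FromEnv(value)
    } else if statically_linked_dxc {
        Dx12Compiler::StaticDxc
    } else if dlls_present {
        Dx12Compiler::DynamicDxc
    } else {
        Dx12Compiler::Fxc
    }
}
```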

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
2025-02-09 19:40:53 +00:00
Sludge
989f547080
Weak handle migration (#17695)
# Objective

- Make use of the new `weak_handle!` macro added in
https://github.com/bevyengine/bevy/pull/17384

## Solution

- Migrate bevy from `Handle::weak_from_u128` to the new `weak_handle!`
macro that takes a random UUID
- Deprecate `Handle::weak_from_u128`, since there are no remaining use
cases that can't also be addressed by constructing the type manually

## Testing

- `cargo run -p ci -- test`

---

## Migration Guide

Replace `Handle::weak_from_u128` with `weak_handle!` and a random UUID.
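
A before/after sketch of the migration (the handle type and UUID here
are illustrative; generate a fresh random UUID for each handle):

```rust
use bevy::asset::{weak_handle, Handle};
use bevy::render::render_resource::Shader;

// Before:
// const SHADER: Handle<Shader> =
//     Handle::weak_from_u128(0x0123_4567_89ab_cdef_0123_4567_89ab_cdef);

// After:
const SHADER: Handle<Shader> =
    weak_handle!("01234567-89ab-cdef-0123-456789abcdef");
```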
2025-02-05 22:44:20 +00:00
Patrick Walton
dda97880c4
Implement experimental GPU two-phase occlusion culling for the standard 3D mesh pipeline. (#17413)
*Occlusion culling* allows the GPU to skip the vertex and fragment
shading overhead for objects that can be quickly proved to be invisible
because they're behind other geometry. A depth prepass already
eliminates most fragment shading overhead for occluded objects, but the
vertex shading overhead, as well as the cost of testing and rejecting
fragments against the Z-buffer, is presently unavoidable for standard
meshes. We currently perform occlusion culling only for meshlets. But
other meshes, such as skinned meshes, can benefit from occlusion culling
too in order to avoid the transform and skinning overhead for unseen
meshes.

This commit adapts the same [*two-phase occlusion culling*] technique
that meshlets use to Bevy's standard 3D mesh pipeline when the new
`OcclusionCulling` component, as well as the `DepthPrepass` component,
are present on the camera (see the sketch after this list). It has
these steps:

1. *Early depth prepass*: We use the hierarchical Z-buffer from the
previous frame to cull meshes for the initial depth prepass, effectively
rendering only the meshes that were visible in the last frame.

2. *Early depth downsample*: We downsample the depth buffer to create
another hierarchical Z-buffer, this time with the current view
transform.

3. *Late depth prepass*: We use the new hierarchical Z-buffer to test
all meshes that weren't rendered in the early depth prepass. Any meshes
that pass this check are rendered.

4. *Late depth downsample*: Again, we downsample the depth buffer to
create a hierarchical Z-buffer in preparation for the early depth
prepass of the next frame. This step is done after all the rendering, in
order to account for custom phase items that might write to the depth
buffer.
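
As referenced above, a sketch of opting a camera in (assuming the
experimental import path):

```rust
use bevy::core_pipeline::prepass::DepthPrepass;
use bevy::prelude::*;
use bevy::render::experimental::occlusion_culling::OcclusionCulling;

fn setup(mut commands: Commands) {
    // Both components must be present for two-phase occlusion culling to run.
    commands.spawn((
        Camera3d::default(),
        DepthPrepass,
        OcclusionCulling,
    ));
}
```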

Note that this patch has no effect on the per-mesh CPU overhead for
occluded objects, which remains high for a GPU-driven renderer due to
the lack of `cold-specialization` and retained bins. If
`cold-specialization` and retained bins weren't on the horizon, then a
more traditional approach like potentially visible sets (PVS) or low-res
CPU rendering would probably be more efficient than the GPU-driven
approach that this patch implements for most scenes. However, at this
point the amount of effort required to implement a PVS baking tool or a
low-res CPU renderer would probably be greater than landing
`cold-specialization` and retained bins, and the GPU-driven approach is
the more modern one anyway. It does mean that the performance
improvements from occlusion culling as implemented in this patch *today*
are likely to be limited, because of the high CPU overhead for occluded
meshes.

Note also that this patch currently doesn't implement occlusion culling
for 2D objects or shadow maps. Those can be addressed in a follow-up.
Additionally, note that the techniques in this patch require compute
shaders, which excludes support for WebGL 2.

This PR is marked experimental because of known precision issues with
the downsampling approach when applied to non-power-of-two framebuffer
sizes (i.e. most of them). These precision issues can, in rare cases,
cause objects to be judged occluded that in fact are not. (I've never
seen this in practice, but I know it's possible; it tends to be likelier
to happen with small meshes.) As a follow-up to this patch, we intend to
switch to the [SPD-based hi-Z buffer shader from the Granite engine],
which doesn't suffer from these problems, at which point we should be
able to graduate this feature from experimental status. I opted not to
include that rewrite in this patch for two reasons: (1) @JMS55 is
planning on doing the rewrite to coincide with the new availability of
image atomic operations in Naga; (2) to reduce the scope of this patch.

A new example, `occlusion_culling`, has been added. It demonstrates
objects becoming quickly occluded and disoccluded by dynamic geometry
and shows the number of objects that are actually being rendered. Also,
a new `--occlusion-culling` switch has been added to `scene_viewer`, in
order to make it easy to test this patch with large scenes like Bistro.

[*two-phase occlusion culling*]:
https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501

[Aaltonen SIGGRAPH 2015]:
https://www.advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf

[Some literature]:
https://gist.github.com/reduz/c5769d0e705d8ab7ac187d63be0099b5?permalink_comment_id=5040452#gistcomment-5040452

[SPD-based hi-Z buffer shader from the Granite engine]:
https://github.com/Themaister/Granite/blob/master/assets/shaders/post/hiz.comp

## Migration guide

* When enqueuing a custom mesh pipeline, work item buffers are now
created with
`bevy::render::batching::gpu_preprocessing::get_or_create_work_item_buffer`,
not `PreprocessWorkItemBuffers::new`. See the
`specialized_mesh_pipeline` example.

## Showcase

Occlusion culling example:
![Screenshot 2025-01-15
175051](https://github.com/user-attachments/assets/1544f301-68a3-45f8-84a6-7af3ad431258)

Bistro zoomed out, before occlusion culling:
![Screenshot 2025-01-16
185425](https://github.com/user-attachments/assets/5114bbdf-5dec-4de9-b17e-7aa77e7b61ed)

Bistro zoomed out, after occlusion culling:
![Screenshot 2025-01-16
184949](https://github.com/user-attachments/assets/9dd67713-656c-4276-9768-6d261ca94300)

In this scene, occlusion culling reduces the number of meshes Bevy has
to render from 1591 to 585.
2025-01-27 05:02:46 +00:00