cf6c65522f
99 Commits

cf6c65522f | Derived Default for all public unit components. (#17139)

Derived `Default` for all public unit structs that already derive `Component`. This allows them to be used more easily as required components. To avoid clutter in tests/examples, only public components were affected, but this could easily be expanded to cover all unit components. Fixes #17052.

00722b8d0f | Make indirect drawing opt-out instead of opt-in, enabling multidraw by default. (#16757)

This patch replaces the undocumented `NoGpuCulling` component with a new component, `NoIndirectDrawing`, effectively turning indirect drawing on by default. Indirect mode is needed for the recently-landed multidraw feature (#16427). Since multidraw is such a win for performance, when that feature is supported the small performance tax that indirect mode incurs is virtually always worth paying. To ensure that custom drawing code such as that in the `custom_shader_instancing` example continues to function, this commit additionally makes GPU culling take the `NoFrustumCulling` component into account. This PR is an alternative to #16670 that doesn't break the `custom_shader_instancing` example. **PR #16755 should land first in order to avoid breaking deferred rendering, as multidraw currently breaks it**.

## Migration Guide

* Indirect drawing (GPU culling) is now enabled by default, so the `GpuCulling` component is no longer available. To disable indirect mode, which may be useful with custom render nodes, add the new `NoIndirectDrawing` component to your camera.
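
A minimal sketch of that opt-out, assuming `NoIndirectDrawing` is exported from `bevy::render::view` and a 0.15-style `Camera3d` component:

```rust
use bevy::prelude::*;
use bevy::render::view::NoIndirectDrawing;

fn spawn_camera(mut commands: Commands) {
    // Opt this camera out of indirect drawing, e.g. because a custom render
    // node issues its own direct draw calls.
    commands.spawn((Camera3d::default(), NoIndirectDrawing));
}
```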

711246aa34 | Update hashbrown to 0.15 (#15801)

Updating dependencies; adopted version of #15696. (Supersedes #15696.) hashbrown is no longer using ahash by default, meaning that we can't use the default-hasher methods with `AHasher`. So, we have to use the longer-winded versions instead. This takes the opportunity to also switch our default hasher as well, but without actually enabling the default-hasher feature for hashbrown, meaning that we'll be able to change our hasher more easily at the cost of all of these method calls being obnoxious forever. One large change from 0.15 is that `insert_unique_unchecked` is now `unsafe`, and for cases where unsafe code was denied at the crate level, I replaced it with `insert`.

## Migration Guide

`bevy_utils` has updated its version of `hashbrown` to 0.15 and now defaults to `foldhash` instead of `ahash`. This means that if you've hard-coded your hasher to `bevy_utils::AHasher` or separately used the `ahash` crate in your code, you may need to switch to `foldhash` to ensure that everything works like it does in Bevy.

701ccdec51 | add docs to clip_from_view (#16373)

more docs

c6fe275b21 | add docs to view uniform frustum field (#16369)

just some docs to save future me some clicking around

40640fdf42 | Don't reëxport bevy_image from bevy_render (#16163)

# Objective

Fixes #15940

## Solution

Remove the `pub use` and fix the compile errors. Make `bevy_image` available as `bevy::image`.

## Testing

Feature Frenzy would be good here! Maybe I'll learn how to use it if I have some time this weekend, or maybe a reviewer can use it.

## Migration Guide

Use `bevy_image` instead of `bevy_render::texture` items.

Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
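
For illustration, the migration mostly amounts to an import change; a hedged before/after sketch:

```rust
// Before: the type was re-exported through bevy_render.
// use bevy::render::texture::Image;

// After this change, import it from the new module instead.
use bevy::image::Image;

fn uses_image(_image: &Image) {}
```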

46566980a6 | Fix and improve MSAA documentation (#16196)

# Objective

#14273 changed `Msaa` to be a component rather than a resource. However, the documentation still says that it is a resource. This tripped me up during migration to 0.15 until I looked at the type definition. Additionally, the docs have some unnecessary repetition and some grammar mistakes, and they don't link to camera documentation.

## Solution

Fix up the docs!

2ec164d279 | Clear view attachments before resizing window surfaces (#15087)

# Objective

- Fixes #15077

## Solution

- Clear the `ViewTargetAttachments` resource every frame before the `create_surfaces` system instead; this was previously done after `extract_windows`.

## Testing

- Confirmed that examples no longer panic on window resizing with the DX12 backend.
- The `screenshot` example keeps working after this change.

d70595b667 | Add core and alloc over std Lints (#15281)

# Objective - Fixes #6370 - Closes #6581 ## Solution - Added the following lints to the workspace: - `std_instead_of_core` - `std_instead_of_alloc` - `alloc_instead_of_core` - Used `cargo +nightly fmt` with [item level use formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Item%5C%3A) to split all `use` statements into single items. - Used `cargo clippy --workspace --all-targets --all-features --fix --allow-dirty` to _attempt_ to resolve the new linting issues, and intervened where the lint was unable to resolve the issue automatically (usually due to needing an `extern crate alloc;` statement in a crate root). - Manually removed certain uses of `std` where negative feature gating prevented `--all-features` from finding the offending uses. - Used `cargo +nightly fmt` with [crate level use formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Crate%5C%3A) to re-merge all `use` statements matching Bevy's previous styling. - Manually fixed cases where the `fmt` tool could not re-merge `use` statements due to conditional compilation attributes. ## Testing - Ran CI locally ## Migration Guide The MSRV is now 1.81. Please update to this version or higher. ## Notes - This is a _massive_ change to try and push through, which is why I've outlined the semi-automatic steps I used to create this PR, in case this fails and someone else tries again in the future. - Making this change has no impact on user code, but does mean Bevy contributors will be warned to use `core` and `alloc` instead of `std` where possible. - This lint is a critical first step towards investigating `no_std` options for Bevy. --------- Co-authored-by: François Mockers <francois.mockers@vleue.com> |

efda7f3f9c | Simpler lint fixes: makes ci lints work but disables a lint for now (#15376)

Takes the first two commits from #15375 and adds suggestions from this comment: https://github.com/bevyengine/bevy/pull/15375#issuecomment-2366968300. See #15375 for more reasoning/motivation.

## Rebasing (rerunning)

```sh
git switch simpler-lint-fixes
git reset --hard main
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "rustfmt"
cargo clippy --workspace --all-targets --all-features --fix
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "clippy"
git cherry-pick e6c0b94f6795222310fb812fa5c4512661fc7887
```

274c97d415 | Reflect derived traits on all components and resources: bevy_render (#15226)

Addresses https://github.com/bevyengine/bevy/issues/15187 for bevy_render

f0560b8e78 | Ensure more explicit system ordering for preparing view target. (#15000)

Fixes #14993 (maybe). Adds a system ordering constraint that was missed in the refactor in #14833. The theory here is that the single-threaded executor forces a topology that causes the prepare system to run before `prepare_windows` in a way that causes issues. For whatever reason, this appears to be unlikely when multi-threading is enabled.

d9527c101c | Rewrite screenshots. (#14833)

# Objective

Rewrite screenshotting to be able to accept any `RenderTarget`. Closes #12478

## Solution

Previously, screenshotting relied on setting a variety of state on the requested window. When extracted, the window's `swap_chain_texture_view` property would be swapped out with a texture view created that frame for the screenshot pipeline to write back to the CPU. Besides being tightly coupled to windows in a way that prevented screenshotting other render targets, this approach had the drawback of relying on the implicit state of `swap_chain_texture_view` being returned from a `NormalizedRenderTarget` when view targets were prepared. Because that property is set every frame for windows, that wasn't a problem, but it poses a problem for render-target images. Namely, to do the equivalent trick, we'd have to replace the `GpuImage`'s texture view and somehow restore it later. As such, this PR creates a new `prepare_view_textures` system which runs before `prepare_view_targets`, allowing a new `prepare_screenshots` system to be sandwiched between the two and overwrite the render target's texture view if a screenshot has been requested that frame for the given target.

Additionally, screenshotting itself has been changed to use a component + observer pattern. We now spawn a `Screenshot` component into the world, whose lifetime is tracked with a series of marker components. When the screenshot is read back to the CPU, we send the image over a channel back to the main world, where an observer fires on the screenshot entity before it is despawned the next frame. This allows the user to access resources in their save callback that might be useful (e.g. uploading the screenshot over the network, etc.).

## Testing

TODO:
- [x] Web
- [ ] Manual texture view

## Showcase

Render to texture example: <img src="https://github.com/user-attachments/assets/612ac47b-8a24-4287-a745-3051837963b0" width=200/>

Web saving still works: <img src="https://github.com/user-attachments/assets/e2a15b17-1ff5-4006-ab2a-e5cc74888b9c" width=200/>

## Migration Guide

`ScreenshotManager` has been removed. To take a screenshot, spawn a `Screenshot` entity with the specified render target and provide an observer targeting the `ScreenshotCaptured` event. See the `window/screenshot` example.

Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
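
A short sketch of the new component-plus-observer flow described above, modeled on the `window/screenshot` example (names assumed from that example):

```rust
use bevy::prelude::*;
use bevy::render::view::screenshot::{save_to_disk, Screenshot};

fn take_screenshot(mut commands: Commands, input: Res<ButtonInput<KeyCode>>) {
    if input.just_pressed(KeyCode::Space) {
        // Spawn a Screenshot entity targeting the primary window and react to
        // the ScreenshotCaptured event via an observer.
        commands
            .spawn(Screenshot::primary_window())
            .observe(save_to_disk("screenshot.png"));
    }
}
```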

c1fedc2e2d | Made ViewUniform fields public (#14482)

# Objective

- Made `ViewUniform` fields public so that third parties can create this uniform. This is useful for custom pipelines that use custom views (e.g. views buffered by a particular amount).

03fd1b46ef | Move Msaa to component (#14273)

Switches `Msaa` from being a globally configured resource to a per-camera view component. Closes #7194

# Objective

Allow individual views to describe their own MSAA settings. For example, when rendering to different windows or to different parts of the same view.

## Solution

Make `Msaa` a component that is required on all camera bundles.

## Testing

Ran a variety of examples to ensure that nothing broke.

TODO:
- [ ] Make sure android still works per previous comment in `extract_windows`.

## Migration Guide

`Msaa` is no longer configured as a global resource, and should be specified on each spawned camera if a non-default setting is desired.

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
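
A hedged migration sketch for the new per-camera setting (bundle names as of this release cycle):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Msaa is now a component on the camera entity rather than a global resource;
    // override the bundle's default on a per-camera basis.
    commands
        .spawn(Camera3dBundle::default())
        .insert(Msaa::Sample8);
}
```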

856b39d821 | Apply Clippy lints regarding lazy evaluation and closures (#14015)

# Objective

- Lazily evaluate [default](https://rust-lang.github.io/rust-clippy/master/index.html#/unwrap_or_default)~~/[or](https://rust-lang.github.io/rust-clippy/master/index.html#/or_fun_call)~~ values where it makes sense
  - ~~`unwrap_or(foo())` -> `unwrap_or_else(|| foo())`~~
  - `unwrap_or(Default::default())` -> `unwrap_or_default()`
  - etc.
- Avoid creating [redundant closures](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure), even for [method calls](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure_for_method_calls)
  - `map(|something| something.into())` -> `map(Into::into)`

## Solution

- Apply Clippy lints:
  - ~~[or_fun_call](https://rust-lang.github.io/rust-clippy/master/index.html#/or_fun_call)~~
  - [unwrap_or_default](https://rust-lang.github.io/rust-clippy/master/index.html#/unwrap_or_default)
  - [redundant_closure_for_method_calls](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure_for_method_calls) ([redundant closures](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure) is already enabled)

## Testing

- Tested on Windows 11 (`stable-x86_64-pc-windows-gnu`, 1.79.0)
- Bevy compiles without errors or warnings and examples seem to work as intended
  - `cargo clippy` ✅
  - `cargo run -p ci -- compile` ✅

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
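
Illustrative (not taken from the PR) examples of the two lint categories being applied:

```rust
fn lint_examples(maybe_name: Option<String>, bytes: Vec<u8>) -> (String, Vec<u64>) {
    // unwrap_or_default: don't spell out `unwrap_or(Default::default())`.
    let name = maybe_name.unwrap_or_default();

    // redundant closure for a method call: `map(|b| b.into())` becomes `map(Into::into)`.
    let widened: Vec<u64> = bytes.into_iter().map(Into::into).collect();

    (name, widened)
}
```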

44db8b7fac | Allow phase items not associated with meshes to be binned. (#14029)

As reported in #14004, many third-party plugins, such as Hanabi, enqueue entities that don't have meshes into render phases. However, the introduction of indirect mode added a dependency on mesh-specific data, breaking this workflow. This is because GPU preprocessing requires that the render phases manage indirect draw parameters, which don't apply to objects that aren't meshes. The existing code skips over binned entities that don't have indirect draw parameters, which causes the rendering to be skipped for such objects. To support this workflow, this commit adds a new field, `non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple list of (bin key, entity) pairs. After drawing batchable and unbatchable objects, the non-mesh items are drawn one after another. Bevy itself doesn't enqueue any items into this list; it exists solely for the application and/or plugins to use. Additionally, this commit switches the asset ID in the standard bin keys to be an untyped asset ID rather than that of a mesh. This allows more flexibility, allowing bins to be keyed off any type of asset. This patch adds a new example, `custom_phase_item`, which simultaneously serves to demonstrate how to use this new feature and to act as a regression test so this doesn't break again. Fixes #14004. ## Changelog ### Added * `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins to add custom items to. |

027f8e21ec | Allow mix of hdr and non-hdr cameras to same render target (#13419)

Changes:

- Track whether an output texture has been written to yet and only clear it on the first write.
- Use `ClearColorConfig` on `CameraOutputMode` instead of a raw `LoadOp`.
- Track whether an output texture has been seen when specializing the upscaling pipeline, and use alpha blending for extra cameras rendering to that texture that do not specify an explicit blend mode.

Fixes #6754

## Testing

Tested against the provided test case in the issue.

## Changelog

- Allow cameras rendering to the same output texture with mixed HDR to work correctly.

## Migration Guide

- Change `CameraOutputMode` to use `ClearColorConfig` instead of `LoadOp`.

9b9d3d81cb | Normalise matrix naming (#13489)

# Objective - Fixes #10909 - Fixes #8492 ## Solution - Name all matrices `x_from_y`, for example `world_from_view`. ## Testing - I've tested most of the 3D examples. The `lighting` example particularly should hit a lot of the changes and appears to run fine. --- ## Changelog - Renamed matrices across the engine to follow a `y_from_x` naming, making the space conversion more obvious. ## Migration Guide - `Frustum`'s `from_view_projection`, `from_view_projection_custom_far` and `from_view_projection_no_far` were renamed to `from_clip_from_world`, `from_clip_from_world_custom_far` and `from_clip_from_world_no_far`. - `ComputedCameraValues::projection_matrix` was renamed to `clip_from_view`. - `CameraProjection::get_projection_matrix` was renamed to `get_clip_from_view` (this affects implementations on `Projection`, `PerspectiveProjection` and `OrthographicProjection`). - `ViewRangefinder3d::from_view_matrix` was renamed to `from_world_from_view`. - `PreviousViewData`'s members were renamed to `view_from_world` and `clip_from_world`. - `ExtractedView`'s `projection`, `transform` and `view_projection` were renamed to `clip_from_view`, `world_from_view` and `clip_from_world`. - `ViewUniform`'s `view_proj`, `unjittered_view_proj`, `inverse_view_proj`, `view`, `inverse_view`, `projection` and `inverse_projection` were renamed to `clip_from_world`, `unjittered_clip_from_world`, `world_from_clip`, `world_from_view`, `view_from_world`, `clip_from_view` and `view_from_clip`. - `GpuDirectionalCascade::view_projection` was renamed to `clip_from_world`. - `MeshTransforms`' `transform` and `previous_transform` were renamed to `world_from_local` and `previous_world_from_local`. - `MeshUniform`'s `transform`, `previous_transform`, `inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed to `world_from_local`, `previous_world_from_local`, `local_from_world_transpose_a` and `local_from_world_transpose_b` (the `Mesh` type in WGSL mirrors this, however `transform` and `previous_transform` were named `model` and `previous_model`). - `Mesh2dTransforms::transform` was renamed to `world_from_local`. - `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed to `world_from_local`, `local_from_world_transpose_a` and `local_from_world_transpose_b` (the `Mesh2d` type in WGSL mirrors this). - In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and `get_previous_model_matrix` were renamed to `get_world_from_local` and `get_previous_world_from_local`. - In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed to `get_world_from_local`. |
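
Not from the PR itself, but a small sketch of why the `x_from_y` convention helps: adjacent names match up visually when transforms are chained.

```rust
use bevy::math::{Mat4, Vec4};

fn project_point(
    clip_from_view: Mat4,
    view_from_world: Mat4,
    world_from_local: Mat4,
    local_pos: Vec4,
) -> Vec4 {
    // Reading right to left, the spaces line up: local -> world -> view -> clip.
    let clip_from_local = clip_from_view * view_from_world * world_from_local;
    clip_from_local * local_pos
}
```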

4c3b7679ec | #12502 Remove limit on RenderLayers. (#13317)

# Objective

Remove the limit on `RenderLayers` by using a growable mask backed by `SmallVec`. Changes adopted from @UkoeHB's initial PR here https://github.com/bevyengine/bevy/pull/12502 that contained additional changes related to propagating render layers.

## Solution

The main thing needed to unblock this is removing `RenderLayers` from our shader code. This primarily affects `DirectionalLight`. We are now computing a `skip` field on the CPU that is then used to skip the light in the shader.

## Testing

Checked a variety of examples and did a quick benchmark on `many_cubes`. There were some existing problems identified during the development of the original PR (see: https://discord.com/channels/691052431525675048/1220477928605749340/1221190112939872347). This PR shouldn't change any existing behavior besides removing the layer limit (sans the comment in migration about `all` layers no longer being possible).

## Changelog

Removed the limit on `RenderLayers` by using a growable bitset that only allocates when layers greater than 64 are used.

## Migration Guide

- `RenderLayers::all()` no longer exists. Entities expecting to be visible on all layers, e.g. lights, should compute the active layers that are in use.

Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
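
A hedged sketch of what the growable mask allows; layer indices beyond the old fixed limit are now valid and simply grow the mask:

```rust
use bevy::render::view::RenderLayers;

fn layers_demo() {
    // Combining a small layer with a large one allocates the growable bitset.
    let layers = RenderLayers::layer(1).with(128);
    assert!(layers.intersects(&RenderLayers::layer(128)));
}
```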

a22ecede49 | Only create changed buffer if it already exists (#13242)

# Objective

- `DynamicUniformBuffer` tries to create a buffer as soon as the changed flag is set to true. This doesn't work correctly when the buffer wasn't already created; it currently causes a crash because it tries to create a buffer of size 0 if the flag is set but there's no buffer yet.

## Solution

- Don't create a changed buffer until there's data that needs to be written to a buffer.

## Testing

- Run `cargo run --example scene_viewer` and see that it doesn't crash anymore.

Fixes #13235

d390420093 | Implement Auto Exposure plugin (#12792)

# Objective

- Add auto exposure/eye adaptation to the Bevy render pipeline.
- Support features that users might expect from other engines:
  - Metering masks
  - Compensation curves
  - Smooth exposure transitions

This PR is based on an implementation I already built for a personal project before https://github.com/bevyengine/bevy/pull/8809 was submitted, so I wasn't able to adopt that PR in the proper way. I've still drawn inspiration from it, so @fintelia should be credited as well.

## Solution

An auto exposure compute shader builds a 64-bin histogram of the scene's luminance, and then adjusts the exposure based on that histogram. Using a histogram allows the system to ignore outliers like shadows and specular highlights, and it allows giving more weight to certain areas based on a mask.

## Changelog

- Added: `AutoExposure` plugin that allows adjusting a camera's exposure based on its scene's luminance.

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>

31835ff76d | Implement visibility ranges, also known as hierarchical levels of detail (HLODs). (#12916)

Implement visibility ranges, also known as hierarchical levels of detail (HLODs). This commit introduces a new component, `VisibilityRange`, which allows developers to specify camera distances in which meshes are to be shown and hidden. Hiding meshes happens early in the rendering pipeline, so this feature can be used for level of detail optimization. Additionally, this feature is properly evaluated per-view, so different views can show different levels of detail. This feature differs from proper mesh LODs, which can be implemented later. Engines generally implement true mesh LODs later in the pipeline; they're typically more efficient than HLODs with GPU-driven rendering. However, mesh LODs are more limited than HLODs, because they require the lower levels of detail to be meshes with the same vertex layout and shader (and perhaps the same material) as the original mesh. Games often want to use objects other than meshes to replace distant models, such as *octahedral imposters* or *billboard imposters*. The reason why the feature is called *hierarchical level of detail* is that HLODs can replace multiple meshes with a single mesh when the camera is far away. This can be useful for reducing drawcall count. Note that `VisibilityRange` doesn't automatically propagate down to children; it must be placed on every mesh. Crossfading between different levels of detail is supported, using the standard 4x4 ordered dithering pattern from [1]. The shader code to compute the dithering patterns should be well-optimized. The dithering code is only active when visibility ranges are in use for the mesh in question, so that we don't lose early Z. Cascaded shadow maps show the HLOD level of the view they're associated with. Point light and spot light shadow maps, which have no CSMs, display all HLOD levels that are visible in any view. To support this efficiently and avoid doing visibility checks multiple times, we precalculate all visible HLOD levels for each entity with a `VisibilityRange` during the `check_visibility_range` system. A new example, `visibility_range`, has been added to the tree, as well as a new low-poly version of the flight helmet model to go with it. It demonstrates use of the visibility range feature to provide levels of detail. [1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map [^1]: Unreal doesn't have a feature that exactly corresponds to visibility ranges, but Unreal's HLOD system serves roughly the same purpose. ## Changelog ### Added * A new `VisibilityRange` component is available to conditionally enable entity visibility at camera distances, with optional crossfade support. This can be used to implement different levels of detail (LODs). ## Screenshots High-poly model:  Low-poly model up close:  Crossfading between the two:  --------- Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
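
A hedged sketch of the component as described (two `Range<f32>` margins defining a crossfade band; exact field names may differ across versions):

```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

fn add_lod_ranges(mut commands: Commands, high_poly: Entity, low_poly: Entity) {
    // Show the detailed mesh from the camera out to ~10 units,
    // crossfading it out between 8 and 10 units.
    commands.entity(high_poly).insert(VisibilityRange {
        start_margin: 0.0..0.0,
        end_margin: 8.0..10.0,
    });
    // Show the simplified mesh beyond that, crossfading it in over the same band.
    commands.entity(low_poly).insert(VisibilityRange {
        start_margin: 8.0..10.0,
        end_margin: 100.0..110.0,
    });
}
```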

961b24deaf | Implement filmic color grading. (#13121)

This commit expands Bevy's existing tonemapping feature to a complete set of filmic color grading tools, matching those of engines like Unity, Unreal, and Godot. The following features are supported: * White point adjustment. This is inspired by Unity's implementation of the feature, but simplified and optimized. *Temperature* and *tint* control the adjustments to the *x* and *y* chromaticity values of [CIE 1931]. Following Unity, the adjustments are made relative to the [D65 standard illuminant] in the [LMS color space]. * Hue rotation. This simply converts the RGB value to [HSV], alters the hue, and converts back. * Color correction. This allows the *gamma*, *gain*, and *lift* values to be adjusted according to the standard [ASC CDL combined function]. * Separate color correction for shadows, midtones, and highlights. Blender's source code was used as a reference for the implementation of this. The midtone ranges can be adjusted by the user. To avoid abrupt color changes, a small crossfade is used between the different sections of the image, again following Blender's formulas. A new example, `color_grading`, has been added, offering a GUI to change all the color grading settings. It uses the same test scene as the existing `tonemapping` example, which has been factored out into a shared glTF scene. [CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space [D65 standard illuminant]: https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D [LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space [HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV [ASC CDL combined function]: https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function ## Changelog ### Added * Many new filmic color grading options have been added to the `ColorGrading` component. ## Migration Guide * `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set separately for the `shadows`, `midtones`, and `highlights` sections. You can migrate code with the `ColorGrading::all_sections` and `ColorGrading::all_sections_mut` functions, which access and/or update all sections at once. * `ColorGrading::post_saturation` and `ColorGrading::exposure` are now fields of `ColorGrading::global`. ## Screenshots   |
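
A hedged migration sketch for the per-section fields, using the `all_sections_mut` helper mentioned in the migration guide:

```rust
use bevy::render::view::ColorGrading;

fn tweak_grading(grading: &mut ColorGrading) {
    // Exposure and post-saturation moved into the `global` section.
    grading.global.exposure = 0.5;
    grading.global.post_saturation = 1.1;

    // Gamma is now set per section (shadows/midtones/highlights); apply the
    // old single value to all of them at once.
    for section in grading.all_sections_mut() {
        section.gamma = 1.2;
    }
}
```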

16531fb3e3 | Implement GPU frustum culling. (#12889)

This commit implements opt-in GPU frustum culling, built on top of the infrastructure in https://github.com/bevyengine/bevy/pull/12773. To enable it on a camera, add the `GpuCulling` component to it. To additionally disable CPU frustum culling, add the `NoCpuCulling` component. Note that adding `GpuCulling` without `NoCpuCulling` *currently* does nothing useful. The reason why `GpuCulling` doesn't automatically imply `NoCpuCulling` is that I intend to follow this patch up with GPU two-phase occlusion culling, and CPU frustum culling plus GPU occlusion culling seems like a very commonly-desired mode. Adding the `GpuCulling` component to a view puts that view into *indirect mode*. This mode makes all drawcalls indirect, relying on the mesh preprocessing shader to allocate instances dynamically. In indirect mode, the `PreprocessWorkItem` `output_index` points not to a `MeshUniform` instance slot but instead to a set of `wgpu` `IndirectParameters`, from which it allocates an instance slot dynamically if frustum culling succeeds. Batch building has been updated to allocate and track indirect parameter slots, and the AABBs are now supplied to the GPU as `MeshCullingData`. A small amount of code relating to the frustum culling has been borrowed from meshlets and moved into `maths.wgsl`. Note that standard Bevy frustum culling uses AABBs, while meshlets use bounding spheres; this means that not as much code can be shared as one might think. This patch doesn't provide any way to perform GPU culling on shadow maps, to avoid making this patch bigger than it already is. That can be a followup. ## Changelog ### Added * Frustum culling can now optionally be done on the GPU. To enable it, add the `GpuCulling` component to a camera. * To disable CPU frustum culling, add `NoCpuCulling` to a camera. Note that `GpuCulling` doesn't automatically imply `NoCpuCulling`. |
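
A hedged sketch of opting a camera into the new mode (component names are from the changelog; the import path is an assumption):

```rust
use bevy::prelude::*;
// Assumed module path; both components live in bevy_render at this commit.
use bevy::render::view::{GpuCulling, NoCpuCulling};

fn spawn_culling_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Opt in to GPU frustum culling (puts the view into indirect mode).
        GpuCulling,
        // Optionally skip the CPU frustum-culling pass as well.
        NoCpuCulling,
    ));
}
```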

ab7cbfa8fc | Consolidate Render(Ui)Materials(2d) into RenderAssets (#12827)

# Objective

- Replace `RenderMaterials` / `RenderMaterials2d` / `RenderUiMaterials` with `RenderAssets` to enable implementing changes to one thing, `RenderAssets`, that applies to all use cases rather than duplicating changes everywhere for multiple things that should be one thing.
- Adopts #8149

## Solution

- Make `RenderAsset` generic over the destination type rather than the source type as in #8149
- Use `RenderAssets<PreparedMaterial<M>>` etc. for render materials

## Changelog

- Changed:
  - The `RenderAsset` trait is now implemented on the destination type. Its `SourceAsset` associated type refers to the type of the source asset.
  - `RenderMaterials`, `RenderMaterials2d`, and `RenderUiMaterials` have been replaced by `RenderAssets<PreparedMaterial<M>>` and similar.

## Migration Guide

- `RenderAsset` is now implemented for the destination type rather than the source asset type. The source asset type is now the `RenderAsset` trait's `SourceAsset` associated type.

01649f13e2 | Refactor App and SubApp internals for better separation (#9202)

# Objective

This is a necessary precursor to #9122 (this was split from that PR to reduce the amount of code to review all at once). Moving `!Send` resource ownership to `App` will make it unambiguously `!Send`. `SubApp` must be `Send`, so it can't wrap `App`.

## Solution

Refactor `App` and `SubApp` to not have a recursive relationship. Since `SubApp` no longer wraps `App`, once `!Send` resources are moved out of `World` and into `App`, `SubApp` will become unambiguously `Send`. There could be less code duplication between `App` and `SubApp`, but that would break `App` method chaining.

## Changelog

- `SubApp` no longer wraps `App`.
- `App` fields are no longer publicly accessible.
- `App` can no longer be converted into a `SubApp`.
- Various methods now return references to a `SubApp` instead of an `App`.

## Migration Guide

- To construct a sub-app, use `SubApp::new()`. `App` can no longer convert into `SubApp`.
- If you implemented a trait for `App`, you may want to implement it for `SubApp` as well.
- If you're accessing `app.world` directly, you now have to use `app.world()` and `app.world_mut()`.
- `App::sub_app` now returns `&SubApp`.
- `App::sub_app_mut` now returns `&mut SubApp`.
- `App::get_sub_app` now returns `Option<&SubApp>`.
- `App::get_sub_app_mut` now returns `Option<&mut SubApp>`.
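
A hedged before/after sketch of the accessor changes from the migration guide:

```rust
use bevy::app::App;
use bevy::render::RenderApp;

fn configure(app: &mut App) {
    // Before: app.world.spawn_empty();
    app.world_mut().spawn_empty();

    // Sub-apps are now returned as `&SubApp` / `&mut SubApp` rather than `App`.
    if let Some(render_app) = app.get_sub_app_mut(RenderApp) {
        render_app.world_mut().spawn_empty();
    }
}
```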

40f82b867b | Reflect default in some types on bevy_render (#12580)

# Objective

- Many types in bevy_render don't reflect `Default` even though they could.

## Solution

- Reflect it.

Co-authored-by: Pablo Reinhardt <pabloreinhardt@gmail.com>

8ec65525ab | Port bevy_core_pipeline to LinearRgba (#12116)

# Objective - We should move towards a consistent use of the new `bevy_color` crate. - As discussed in #12089, splitting this work up into small pieces makes it easier to review. ## Solution - Port all uses of `LegacyColor` in the `bevy_core_pipeline` to `LinearRgba` - `LinearRgba` is the correct type to use for internal rendering types - Added `LinearRgba::BLACK` and `WHITE` (used during migration) - Add `LinearRgba::grey` to more easily construct balanced grey colors (used during migration) - Add a conversion from `LinearRgba` to `wgpu::Color`. The converse was not done at this time, as this is typically a user error. I did not change the field type of the clear color on the cameras: as this is user-facing, this should be done in concert with the other configurable fields. ## Migration Guide `ColorAttachment` now stores a `LinearRgba` color, rather than a Bevy 0.13 `Color`. `set_blend_constant` now takes a `LinearRgba` argument, rather than a Bevy 0.13 `Color`. --------- Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com> |

dc25edd0a2 | Fix MSAA writeback when 3 or more cameras have the same target. (#11968)

# Objective

If multiple cameras render to the same target with MSAA enabled, only the first and the last camera output will appear in the final output*. This is because each camera maintains a separate flag to track the active main texture. The first camera renders to texture A and all subsequent cameras first write back from A and then render into texture B. Hence, camera 3 onwards will overwrite the work of the previous camera.

\* This would manifest slightly differently if there were other calls to post_process_write() in a more complex setup.

This is a functional regression from Bevy 0.12.

## Solution

The flag which tracks the active main texture should be shared between cameras with the same `NormalizedRenderTarget`. Add the `Arc<AtomicUsize>` to the existing per-target cache.

9505f6e6a9 | Support optional clear color in ColorAttachment. (#11884)

This represents when the user has configured `ClearColorConfig::None` in their application. If the clear color is `None`, we will always `Load` instead of attempting to clear the attachment on the first call. Fixes #11883.

dd619a1087 | New Exposure and Lighting Defaults (and calibrate examples) (#11868)

# Objective After adding configurable exposure, we set the default ev100 value to `7` (indoor). This brought us out of sync with Blender's configuration and defaults. This PR changes the default to `9.7` (bright indoor or very overcast outdoors), as I calibrated in #11577. This feels like a very reasonable default. The other changes generally center around tweaking Bevy's lighting defaults and examples to play nicely with this number, alongside a few other tweaks and improvements. Note that for artistic reasons I have reverted some examples, which changed to directional lights in #11581, back to point lights. Fixes #11577 --- ## Changelog - Changed `Exposure::ev100` from `7` to `9.7` to better match Blender - Renamed `ExposureSettings` to `Exposure` - `Camera3dBundle` now includes `Exposure` for discoverability - Bumped `FULL_DAYLIGHT ` and `DIRECT_SUNLIGHT` to represent the middle-to-top of those ranges instead of near the bottom - Added new `AMBIENT_DAYLIGHT` constant and set that as the new `DirectionalLight` default illuminance. - `PointLight` and `SpotLight` now have a default `intensity` of 1,000,000 lumens. This makes them actually useful in the context of the new "semi-outdoor" exposure and puts them in the "cinema lighting" category instead of the "common household light" category. They are also reasonably close to the Blender default. - `AmbientLight` default has been bumped from `20` to `80`. ## Migration Guide - The increased `Exposure::ev100` means that all existing 3D lighting will need to be adjusted to match (DirectionalLights, PointLights, SpotLights, EnvironmentMapLights, etc). Or alternatively, you can adjust the `Exposure::ev100` on your cameras to work nicely with your current lighting values. If you are currently relying on default intensity values, you might need to change the intensity to achieve the same effect. Note that in Bevy 0.12, point/spot lights had a different hard coded ev100 value than directional lights. In Bevy 0.13, they use the same ev100, so if you have both in your scene, the _scale_ between these light types has changed and you will likely need to adjust one or both of them. |
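
A hedged sketch of compensating via the camera rather than retuning every light:

```rust
use bevy::prelude::*;
use bevy::render::camera::Exposure;

fn spawn_camera(mut commands: Commands) {
    // Restore the old 0.13 default of EV100 = 7 for this camera instead of
    // adjusting all light intensities in the scene.
    commands
        .spawn(Camera3dBundle::default())
        .insert(Exposure { ev100: 7.0 });
}
```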

e169b2b217 | Missing registrations (#11736)

# Objective

During my exploratory work on the remote editor, I found a couple of types that were either not registered, or that were missing `ReflectDefault`.

## Solution

- Added registration and `ReflectDefault` where applicable
- (Drive by fix) Moved `Option<f32>` registration to `bevy_core` instead of `bevy_ui`, along with similar types.

## Changelog

- Fixed: Registered `FogSettings`, `FogFalloff`, `ParallaxMappingMethod`, `OpaqueRendererMethod` structs for reflection
- Fixed: Registered `ReflectDefault` trait for `ColorGrading` and `CascadeShadowConfig` structs

a796d53a05 | Meshlet prep (#11442)

# Objective

- Prep for https://github.com/bevyengine/bevy/pull/10164
- Make `deferred_lighting_pass_id` a `ColorAttachment`
- Correctly extract shadow view frusta so that the view uniforms get populated
- Make some needed things public
- Misc formatting

7125dcb268 | Customizable camera main texture usage (#11412)

# Objective

- Some users want to change the default texture usage of the main camera, but they are currently hardcoded

## Solution

- Add a component that is used to configure the main texture usage field

## Changelog

Added `CameraMainTextureUsage`
Added `CameraMainTextureUsage` to `Camera3dBundle` and `Camera2dBundle`

## Migration Guide

Add `main_texture_usages: Default::default()` to your camera bundle.

# Notes

Inspired by: #6815

fcd7c0fc3d | Exposure settings (adopted) (#11347)

Rebased and finished version of https://github.com/bevyengine/bevy/pull/8407. Huge thanks to @GitGhillie for adjusting all the examples, and the many other people who helped write this PR (@superdump, @coreh, among others) :)

Fixes https://github.com/bevyengine/bevy/issues/8369

## Changelog

- Added a `brightness` control to `Skybox`.
- Added an `intensity` control to `EnvironmentMapLight`.
- Added `ExposureSettings` and `PhysicalCameraParameters` for controlling exposure of 3D cameras.
- Removed the baked-in `DirectionalLight` exposure Bevy previously hardcoded internally.

## Migration Guide

- If using a `Skybox` or `EnvironmentMapLight`, use the new `brightness` and `intensity` controls to adjust their strength.
- All 3D scenes will now have different apparent brightnesses due to Bevy implementing proper exposure controls. You will have to adjust the intensity of your lights and/or your camera exposure via the new `ExposureSettings` component to compensate.

Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: GitGhillie <jillisnoordhoek@gmail.com>
Co-authored-by: Marco Buono <thecoreh@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>

a657478675 | resolve all internal ambiguities (#10411)

- Ignore all ambiguities that are not a problem
- Remove `.before(Assets::<Image>::track_assets),` that points into a different schedule (-> should this be caught?)
- Add some explicit orderings:
  - run `poll_receivers` and `update_accessibility_nodes` after `window_closed` in `bevy_winit::accessibility`
  - run `bevy_ui::accessibility::calc_bounds` after `CameraUpdateSystem`
  - run `bevy_text::update_text2d_layout` and `bevy_ui::text_system` after `font_atlas_set::remove_dropped_font_atlas_sets`
- Add `app.ignore_ambiguity(a, b)` function for cases where you want to ignore an ambiguity between two independent plugins `A` and `B`
- Add `IgnoreAmbiguitiesPlugin` in `DefaultPlugins` that allows cross-crate ambiguities like `bevy_animation`/`bevy_ui`
- Fixes https://github.com/bevyengine/bevy/issues/9511

## Before / After

Ambiguity graphs for the `Render` and `PostUpdate` schedules were attached to the original PR.

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: François <mockersf@gmail.com>

70b0eacc3b | Keep track of when a texture is first cleared (#10325)

# Objective - Custom render passes, or future passes in the engine (such as https://github.com/bevyengine/bevy/pull/10164) need a better way to know and indicate to the core passes whether the view color/depth/prepass attachments have been cleared or not yet this frame, to know if they should clear it themselves or load it. ## Solution - For all render targets (depth textures, shadow textures, prepass textures, main textures) use an atomic bool to track whether or not each texture has been cleared this frame. Abstracted away in the new ColorAttachment and DepthAttachment wrappers. --- ## Changelog - Changed `ViewTarget::get_color_attachment()`, removed arguments. - Changed `ViewTarget::get_unsampled_color_attachment()`, removed arguments. - Removed `Camera3d::clear_color`. - Removed `Camera2d::clear_color`. - Added `Camera::clear_color`. - Added `ExtractedCamera::clear_color`. - Added `ColorAttachment` and `DepthAttachment` wrappers. - Moved `ClearColor` and `ClearColorConfig` from `bevy::core_pipeline::clear_color` to `bevy::render::camera`. - Core render passes now track when a texture is first bound as an attachment in order to decide whether to clear or load it. ## Migration Guide - Remove arguments to `ViewTarget::get_color_attachment()` and `ViewTarget::get_unsampled_color_attachment()`. - Configure clear color on `Camera` instead of on `Camera3d` and `Camera2d`. - Moved `ClearColor` and `ClearColorConfig` from `bevy::core_pipeline::clear_color` to `bevy::render::camera`. - `ViewDepthTexture` must now be created via the `new()` method --------- Co-authored-by: vero <email@atlasdostal.com> Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
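
A hedged sketch of the clear-color move described in the migration guide:

```rust
use bevy::prelude::*;
use bevy::render::camera::ClearColorConfig;

fn spawn_camera(mut commands: Commands) {
    // Before: the setting lived on `Camera3d` / `Camera2d`.
    // After this change, it lives on `Camera` itself.
    commands.spawn(Camera3dBundle {
        camera: Camera {
            clear_color: ClearColorConfig::Custom(Color::BLACK),
            ..default()
        },
        ..default()
    });
}
```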

67d92e9b85 | light renderlayers (#10742)

# Objective add `RenderLayers` awareness to lights. lights default to `RenderLayers::layer(0)`, and must intersect the camera entity's `RenderLayers` in order to affect the camera's output. note that lights already use renderlayers to filter meshes for shadow casting. this adds filtering lights per view based on intersection of camera layers and light layers. fixes #3462 ## Solution PointLights and SpotLights are assigned to individual views in `assign_lights_to_clusters`, so we simply cull the lights which don't match the view layers in that function. DirectionalLights are global, so we - add the light layers to the `DirectionalLight` struct - add the view layers to the `ViewUniform` struct - check for intersection before processing the light in `apply_pbr_lighting` potential issue: when mesh/light layers are smaller than the view layers weird results can occur. e.g: camera = layers 1+2 light = layers 1 mesh = layers 2 the mesh does not cast shadows wrt the light as (1 & 2) == 0. the light affects the view as (1+2 & 1) != 0. the view renders the mesh as (1+2 & 2) != 0. so the mesh is rendered and lit, but does not cast a shadow. this could be fixed (so that the light would not affect the mesh in that view) by adding the light layers to the point and spot light structs, but i think the setup is pretty unusual, and space is at a premium in those structs (adding 4 bytes more would reduce the webgl point+spot light max count to 240 from 256). I think typical usage is for cameras to have a single layer, and meshes/lights to maybe have multiple layers to render to e.g. minimaps as well as primary views. if there is a good use case for the above setup and we should support it, please let me know. --- ## Migration Guide Lights no longer affect all `RenderLayers` by default, now like cameras and meshes they default to `RenderLayers::layer(0)`. To recover the previous behaviour and have all lights affect all views, add a `RenderLayers::all()` component to the light entity. |
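
A hedged sketch of the migration: give a light an explicit `RenderLayers` so it keeps affecting views outside layer 0.

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

fn spawn_light(mut commands: Commands) {
    commands.spawn((
        PointLightBundle::default(),
        // Restore the old behaviour of affecting every layer; alternatively list
        // only the layers this light should touch, e.g. RenderLayers::from_layers(&[0, 1]).
        RenderLayers::all(),
    ));
}
```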

3628e09045 | Add frustum to shader View (#10306)

# Objective

- Work towards GPU-driven culling (https://github.com/bevyengine/bevy/pull/10164)

## Solution

- Pass the view frustum to the shader view uniform

## Changelog

- View Frustums are now extracted to the render world and made available to shaders

12032cd296 | Directly copy data into uniform buffers (#9865)

# Objective This is a minimally disruptive version of #8340. I attempted to update it, but failed due to the scope of the changes added in #8204. Fixes #8307. Partially addresses #4642. As seen in https://github.com/bevyengine/bevy/issues/8284, we're actually copying data twice in Prepare stage systems. Once into a CPU-side intermediate scratch buffer, and once again into a mapped buffer. This is inefficient and effectively doubles the time spent and memory allocated to run these systems. ## Solution Skip the scratch buffer entirely and use `wgpu::Queue::write_buffer_with` to directly write data into mapped buffers. Separately, this also directly uses `wgpu::Limits::min_uniform_buffer_offset_alignment` to set up the alignment when writing to the buffers. Partially addressing the issue raised in #4642. Storage buffers and the abstractions built on top of `DynamicUniformBuffer` will need to come in followup PRs. This may not have a noticeable performance difference in this PR, as the only first-party systems affected by this are view related, and likely are not going to be particularly heavy. --- ## Changelog Added: `DynamicUniformBuffer::get_writer`. Added: `DynamicUniformBufferWriter`. |

5eb292dc10 | Bevy Asset V2 (#8624)

# Bevy Asset V2 Proposal ## Why Does Bevy Need A New Asset System? Asset pipelines are a central part of the gamedev process. Bevy's current asset system is missing a number of features that make it non-viable for many classes of gamedev. After plenty of discussions and [a long community feedback period](https://github.com/bevyengine/bevy/discussions/3972), we've identified a number missing features: * **Asset Preprocessing**: it should be possible to "preprocess" / "compile" / "crunch" assets at "development time" rather than when the game starts up. This enables offloading expensive work from deployed apps, faster asset loading, less runtime memory usage, etc. * **Per-Asset Loader Settings**: Individual assets cannot define their own loaders that override the defaults. Additionally, they cannot provide per-asset settings to their loaders. This is a huge limitation, as many asset types don't provide all information necessary for Bevy _inside_ the asset. For example, a raw PNG image says nothing about how it should be sampled (ex: linear vs nearest). * **Asset `.meta` files**: assets should have configuration files stored adjacent to the asset in question, which allows the user to configure asset-type-specific settings. These settings should be accessible during the pre-processing phase. Modifying a `.meta` file should trigger a re-processing / re-load of the asset. It should be possible to configure asset loaders from the meta file. * **Processed Asset Hot Reloading**: Changes to processed assets (or their dependencies) should result in re-processing them and re-loading the results in live Bevy Apps. * **Asset Dependency Tracking**: The current bevy_asset has no good way to wait for asset dependencies to load. It punts this as an exercise for consumers of the loader apis, which is unreasonable and error prone. There should be easy, ergonomic ways to wait for assets to load and block some logic on an asset's entire dependency tree loading. * **Runtime Asset Loading**: it should be (optionally) possible to load arbitrary assets dynamically at runtime. This necessitates being able to deploy and run the asset server alongside Bevy Apps on _all platforms_. For example, we should be able to invoke the shader compiler at runtime, stream scenes from sources like the internet, etc. To keep deployed binaries (and startup times) small, the runtime asset server configuration should be configurable with different settings compared to the "pre processor asset server". * **Multiple Backends**: It should be possible to load assets from arbitrary sources (filesystems, the internet, remote asset serves, etc). * **Asset Packing**: It should be possible to deploy assets in compressed "packs", which makes it easier and more efficient to distribute assets with Bevy Apps. * **Asset Handoff**: It should be possible to hold a "live" asset handle, which correlates to runtime data, without actually holding the asset in memory. Ex: it must be possible to hold a reference to a GPU mesh generated from a "mesh asset" without keeping the mesh data in CPU memory * **Per-Platform Processed Assets**: Different platforms and app distributions have different capabilities and requirements. Some platforms need lower asset resolutions or different asset formats to operate within the hardware constraints of the platform. It should be possible to define per-platform asset processing profiles. And it should be possible to deploy only the assets required for a given platform. 
These features have architectural implications that are significant enough to require a full rewrite. The current Bevy Asset implementation got us this far, but it can take us no farther. This PR defines a brand new asset system that implements most of these features, while laying the foundations for the remaining features to be built. ## Bevy Asset V2 Here is a quick overview of the features introduced in this PR. * **Asset Preprocessing**: Preprocess assets at development time into more efficient (and configurable) representations * **Dependency Aware**: Dependencies required to process an asset are tracked. If an asset's processed dependency changes, it will be reprocessed * **Hot Reprocessing/Reloading**: detect changes to asset source files, reprocess them if they have changed, and then hot-reload them in Bevy Apps. * **Only Process Changes**: Assets are only re-processed when their source file (or meta file) has changed. This uses hashing and timestamps to avoid processing assets that haven't changed. * **Transactional and Reliable**: Uses write-ahead logging (a technique commonly used by databases) to recover from crashes / forced-exits. Whenever possible it avoids full-reprocessing / only uncompleted transactions will be reprocessed. When the processor is running in parallel with a Bevy App, processor asset writes block Bevy App asset reads. Reading metadata + asset bytes is guaranteed to be transactional / correctly paired. * **Portable / Run anywhere / Database-free**: The processor does not rely on an in-memory database (although it uses some database techniques for reliability). This is important because pretty much all in-memory databases have unsupported platforms or build complications. * **Configure Processor Defaults Per File Type**: You can say "use this processor for all files of this type". * **Custom Processors**: The `Processor` trait is flexible and unopinionated. It can be implemented by downstream plugins. * **LoadAndSave Processors**: Most asset processing scenarios can be expressed as "run AssetLoader A, save the results using AssetSaver X, and then load the result using AssetLoader B". For example, load this png image using `PngImageLoader`, which produces an `Image` asset and then save it using `CompressedImageSaver` (which also produces an `Image` asset, but in a compressed format), which takes an `Image` asset as input. This means if you have an `AssetLoader` for an asset, you are already half way there! It also means that you can share AssetSavers across multiple loaders. Because `CompressedImageSaver` accepts Bevy's generic Image asset as input, it means you can also use it with some future `JpegImageLoader`. * **Loader and Saver Settings**: Asset Loaders and Savers can now define their own settings types, which are passed in as input when an asset is loaded / saved. Each asset can define its own settings. * **Asset `.meta` files**: configure asset loaders, their settings, enable/disable processing, and configure processor settings * **Runtime Asset Dependency Tracking** Runtime asset dependencies (ex: if an asset contains a `Handle<Image>`) are tracked by the asset server. An event is emitted when an asset and all of its dependencies have been loaded * **Unprocessed Asset Loading**: Assets do not require preprocessing. They can be loaded directly. A processed asset is just a "normal" asset with some extra metadata. Asset Loaders don't need to know or care about whether or not an asset was processed. 
* **Async Asset IO**: Asset readers/writers use async non-blocking interfaces. Note that because Rust doesn't yet support async traits, there is a bit of manual Boxing / Future boilerplate. This will hopefully be removed in the near future when Rust gets async traits. * **Pluggable Asset Readers and Writers**: Arbitrary asset source readers/writers are supported, both by the processor and the asset server. * **Better Asset Handles** * **Single Arc Tree**: Asset Handles now use a single arc tree that represents the lifetime of the asset. This makes their implementation simpler, more efficient, and allows us to cheaply attach metadata to handles. Ex: the AssetPath of a handle is now directly accessible on the handle itself! * **Const Typed Handles**: typed handles can be constructed in a const context. No more weird "const untyped converted to typed at runtime" patterns! * **Handles and Ids are Smaller / Faster To Hash / Compare**: Typed `Handle<T>` is now much smaller in memory and `AssetId<T>` is even smaller. * **Weak Handle Usage Reduction**: In general Handles are now considered to be "strong". Bevy features that previously used "weak `Handle<T>`" have been ported to `AssetId<T>`, which makes it statically clear that the features do not hold strong handles (while retaining strong type information). Currently Handle::Weak still exists, but it is very possible that we can remove that entirely. * **Efficient / Dense Asset Ids**: Assets now have efficient dense runtime asset ids, which means we can avoid expensive hash lookups. Assets are stored in Vecs instead of HashMaps. There are now typed and untyped ids, which means we no longer need to store dynamic type information in the ID for typed handles. "AssetPathId" (which was a nightmare from a performance and correctness standpoint) has been entirely removed in favor of dense ids (which are retrieved for a path on load) * **Direct Asset Loading, with Dependency Tracking**: Assets that are defined at runtime can still have their dependencies tracked by the Asset Server (ex: if you create a material at runtime, you can still wait for its textures to load). This is accomplished via the (currently optional) "asset dependency visitor" trait. This system can also be used to define a set of assets to load, then wait for those assets to load. * **Async folder loading**: Folder loading also uses this system and immediately returns a handle to the LoadedFolder asset, which means folder loading no longer blocks on directory traversals. * **Improved Loader Interface**: Loaders now have a specific "top level asset type", which makes returning the top-level asset simpler and statically typed. * **Basic Image Settings and Processing**: Image assets can now be processed into the gpu-friendly Basic Universal format. The ImageLoader now has a setting to define what format the image should be loaded as. Note that this is just a minimal MVP ... plenty of additional work to do here. To demo this, enable the `basis-universal` feature and turn on asset processing. * **Simpler Audio Play / AudioSink API**: Asset handle providers are cloneable, which means the Audio resource can mint its own handles. This means you can now do `let sink_handle = audio.play(music)` instead of `let sink_handle = audio_sinks.get_handle(audio.play(music))`. Note that this might still be replaced by https://github.com/bevyengine/bevy/pull/8424. 
**Removed Handle Casting From Engine Features**: Ex: FontAtlases no longer use casting between handle types

## Using The New Asset System

### Normal Unprocessed Asset Loading

By default the `AssetPlugin` does not use processing. It behaves pretty much the same way as the old system. If you are defining a custom asset, first derive `Asset`:

```rust
#[derive(Asset)]
struct Thing {
    value: String,
}
```

Initialize the asset:

```rust
app.init_asset::<Thing>()
```

Implement a new `AssetLoader` for it:

```rust
#[derive(Default)]
struct ThingLoader;

#[derive(Serialize, Deserialize, Default)]
pub struct ThingSettings {
    some_setting: bool,
}

impl AssetLoader for ThingLoader {
    type Asset = Thing;
    type Settings = ThingSettings;

    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        settings: &'a ThingSettings,
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Thing, anyhow::Error>> {
        Box::pin(async move {
            let mut bytes = Vec::new();
            reader.read_to_end(&mut bytes).await?;
            // convert bytes to value somehow
            Ok(Thing { value })
        })
    }

    fn extensions(&self) -> &[&str] {
        &["thing"]
    }
}
```

Note that this interface will get much cleaner once Rust gets support for async traits. `Reader` is an async futures_io::AsyncRead. You can stream bytes as they come in or read them all into a `Vec<u8>`, depending on the context. You can use `let handle = load_context.load(path)` to kick off a dependency load, retrieve a handle, and register the dependency for the asset.

Then just register the loader in your Bevy app:

```rust
app.init_asset_loader::<ThingLoader>()
```

Now just add your `Thing` asset files into the `assets` folder and load them like this:

```rust
fn system(asset_server: Res<AssetServer>) {
    let handle: Handle<Thing> = asset_server.load("cool.thing");
}
```

You can check load states directly via the asset server:

```rust
if asset_server.load_state(&handle) == LoadState::Loaded {}
```

You can also listen for events:

```rust
fn system(mut events: EventReader<AssetEvent<Thing>>, handle: Res<SomeThingHandle>) {
    for event in events.iter() {
        if event.is_loaded_with_dependencies(&handle) {}
    }
}
```

Note the new `AssetEvent::LoadedWithDependencies`, which only fires when the asset is loaded _and_ all dependencies (and their dependencies) have loaded.

Unlike the old asset system, for a given asset path all `Handle<T>` values point to the same underlying Arc. This means Handles can cheaply hold more asset information, such as the AssetPath:

```rust
// prints the AssetPath of the handle
info!("{:?}", handle.path())
```

### Processed Assets

Asset processing can be enabled via the `AssetPlugin`. When developing Bevy Apps with processed assets, do this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))
```

This runs the `AssetProcessor` in the background with hot-reloading. It reads assets from the `assets` folder, processes them, and writes them to the `.imported_assets` folder. Asset loads in the Bevy App will wait for a processed version of the asset to become available. If an asset in the `assets` folder changes, it will be reprocessed and hot-reloaded in the Bevy App.

When deploying processed Bevy apps, do this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed()))
```

This does not run the `AssetProcessor` in the background. It behaves like `AssetPlugin::unprocessed()`, but reads assets from `.imported_assets`. When the `AssetProcessor` is running, it will populate sibling `.meta` files for assets in the `assets` folder.
### Processed Assets

Asset processing can be enabled via the `AssetPlugin`. When developing Bevy Apps with processed assets, do this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))
```

This runs the `AssetProcessor` in the background with hot-reloading. It reads assets from the `assets` folder, processes them, and writes them to the `.imported_assets` folder. Asset loads in the Bevy App will wait for a processed version of the asset to become available. If an asset in the `assets` folder changes, it will be reprocessed and hot-reloaded in the Bevy App.

When deploying processed Bevy apps, do this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed()))
```

This does not run the `AssetProcessor` in the background. It behaves like `AssetPlugin::unprocessed()`, but reads assets from `.imported_assets`.

When the `AssetProcessor` is running, it will populate sibling `.meta` files for assets in the `assets` folder. Meta files for assets that do not have a processor configured look like this:

```rust
(
    meta_format_version: "1.0",
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)
```

This is metadata for an image asset. For example, if you have `assets/my_sprite.png`, this could be the metadata stored at `assets/my_sprite.png.meta`. Meta files are totally optional. If no metadata exists, the default settings will be used.

In short, this file says "load this asset with the ImageLoader and use the file extension to determine the image type". This type of meta file is supported in all AssetPlugin modes. If in `Unprocessed` mode, the asset (with the meta settings) will be loaded directly. If in `ProcessedDev` mode, the asset file will be copied directly to the `.imported_assets` folder. The meta will also be copied directly to the `.imported_assets` folder, but with one addition:

```rust
(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 12415480888597742505,
        full_hash: 14344495437905856884,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)
```

`processed_info` contains `hash` (a direct hash of the asset and meta bytes), `full_hash` (a hash of `hash` and the hashes of all `process_dependencies`), and `process_dependencies` (the `path` and `full_hash` of every process_dependency). A "process dependency" is an asset dependency that is _directly_ used when processing the asset. Images do not have process dependencies, so this is empty.

When the processor is enabled, you can use the `Process` metadata config:

```rust
(
    meta_format_version: "1.0",
    asset: Process(
        processor: "bevy_asset::processor::process::LoadAndSave<bevy_render::texture::image_loader::ImageLoader, bevy_render::texture::compressed_image_saver::CompressedImageSaver>",
        settings: (
            loader_settings: (
                format: FromExtension,
            ),
            saver_settings: (
                generate_mipmaps: true,
            ),
        ),
    ),
)
```

This configures the asset to use the `LoadAndSave` processor, which runs an AssetLoader and feeds the result into an AssetSaver (which saves the given Asset and defines a loader to load it with). (For terseness, LoadAndSave will likely get a shorter/friendlier type name when [Stable Type Paths](#7184) lands.) `LoadAndSave` is likely to be the most common processor type, but arbitrary processors are supported.

`CompressedImageSaver` saves an `Image` in the Basis Universal format and configures the ImageLoader to load it as basis universal. The `AssetProcessor` will read this meta, run it through the LoadAndSave processor, and write the basis-universal version of the image to `.imported_assets`. The final metadata will look like this:

```rust
(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 905599590923828066,
        full_hash: 9948823010183819117,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: Format(Basis),
        ),
    ),
)
```

To try basis-universal processing out in Bevy examples (for example `sprite.rs`), change `add_plugins(DefaultPlugins)` to `add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))` and run with the `basis-universal` feature enabled: `cargo run --features=basis-universal --example sprite`.

To create a custom processor, there are two main paths:

1. Use the `LoadAndSave` processor with an existing `AssetLoader`.
   Implement the `AssetSaver` trait and register the processor using `asset_processor.register_processor::<LoadAndSave<ImageLoader, CompressedImageSaver>>(image_saver.into())`.
2. Implement the `Process` trait directly and register it using `asset_processor.register_processor(thing_processor)`.

You can configure default processors for file extensions like this:

```rust
asset_processor.set_default_processor::<ThingProcessor>("thing")
```

There is one more metadata type to be aware of:

```rust
(
    meta_format_version: "1.0",
    asset: Ignore,
)
```

This will ignore the asset during processing / prevent it from being written to `.imported_assets`.

The AssetProcessor stores a transaction log at `.imported_assets/log` and uses it to gracefully recover from unexpected stops. This means you can force-quit the processor (and Bevy Apps running the processor in parallel) at arbitrary times!

`.imported_assets` is "local state". It should _not_ be checked into source control. It should also be considered "read only". In practice, you _can_ modify processed assets and processed metadata if you really need to test something. But those modifications will not be represented in the hashes of the assets, so the processed state will be "out of sync" with the source assets. The processor _will not_ fix this for you. Either revert the change after you have tested it, or delete the processed files so they can be re-populated.

## Open Questions

There are a number of open questions to be discussed. We should decide if they need to be addressed in this PR and if so, how we will address them:

### Implied Dependencies vs Dependency Enumeration

There are currently two ways to populate asset dependencies:

* **Implied via AssetLoaders**: if an AssetLoader loads an asset (and retrieves a handle), a dependency is added to the list.
* **Explicit via the optional `Asset::visit_dependencies`**: if `server.load_asset(my_asset)` is called, it will call `my_asset.visit_dependencies`, which will grab dependencies that have been manually defined for the asset via the Asset trait impl (which can be derived).

This means that defining explicit dependencies is optional for "loaded assets". And the list of dependencies is always accurate because loaders can only produce Handles if they register dependencies. If an asset was loaded with an AssetLoader, it only uses the implied dependencies. If an asset was created at runtime and added with `asset_server.load_asset(MyAsset)`, it will use `Asset::visit_dependencies`.

However this can create a behavior mismatch between loaded assets and equivalent "created at runtime" assets if `Asset::visit_dependencies` doesn't exactly match the dependencies produced by the AssetLoader. This behavior mismatch can be resolved by completely removing "implied loader dependencies" and requiring `Asset::visit_dependencies` to supply dependency data. But this creates two problems:

* It makes defining loaded assets harder and more error prone: Devs must remember to manually annotate asset dependencies with `#[dependency]` when deriving `Asset`. For more complicated assets (such as scenes), the derive likely wouldn't be sufficient and a manual `visit_dependencies` impl would be required.
* Removes the ability to immediately kick off dependency loads: When AssetLoaders retrieve a Handle, they also immediately kick off an asset load for the handle, which means it can start loading in parallel _before_ the asset finishes loading. For large assets, this could be significant (although this could be mitigated for processed assets if we store dependencies in the processed meta file and load them ahead of time).
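For reference, a runtime-created asset that opts into explicit dependency enumeration might look roughly like the sketch below. The `Overlay` type, its field, and the `icon.png` path are purely illustrative; it leans on the `#[dependency]` attribute and `load_asset` call described above:

```rust
#[derive(Asset)]
struct Overlay {
    // Marking the field lets the derived `visit_dependencies` report this handle,
    // so dependency tracking (and `LoadedWithDependencies`) knows about it.
    #[dependency]
    icon: Handle<Image>,
}

fn setup(asset_server: Res<AssetServer>) {
    // Created at runtime, so its dependencies come from `Asset::visit_dependencies`
    // rather than from an `AssetLoader`.
    let _overlay: Handle<Overlay> = asset_server.load_asset(Overlay {
        icon: asset_server.load("icon.png"),
    });
}
```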
### Eager ProcessorDev Asset Loading

I made a controversial call in the interest of fast startup times ("time to first pixel") for the "processor dev mode" configuration. When initializing the AssetProcessor, current processed versions of unchanged assets are yielded immediately, even if their dependencies haven't been checked yet for reprocessing. This means that non-current-state-of-filesystem-but-previously-valid assets might be returned to the App first, then hot-reloaded if/when their dependencies change and the asset is reprocessed.

Is this behavior desirable? There is largely one alternative: do not yield an asset from the processor to the app until all of its dependencies have been checked for changes. In some common cases (load dependency has not changed since last run) this will increase startup time. The main question is "by how much", and is that slower startup time worth it in the interest of only yielding assets that are true to the current state of the filesystem. Should this be configurable? I'm starting to think we should only yield an asset after its (historical) dependencies have been checked for changes + processed as necessary, but I'm curious what you all think.

### Paths Are Currently The Only Canonical ID / Do We Want Asset UUIDs?

In this implementation AssetPaths are the only canonical asset identifier (just like the previous Bevy Asset system and Godot). Moving assets will result in re-scans (and currently reprocessing, although reprocessing can easily be avoided with some changes). Asset renames/moves will break code and assets that rely on specific paths, unless those paths are fixed up.

Do we want / need "stable asset uuids"? Introducing them is very possible:

1. Generate a UUID and include it in .meta files.
2. Support UUID in AssetPath.
3. Generate "asset indices" which are loaded on startup and map UUIDs to paths.
4. (maybe) Consider only supporting UUIDs for processed assets so we can generate quick-to-load indices instead of scanning meta files.

The main "pro" is that assets referencing UUIDs don't need to be migrated when a path changes. The main "con" is that UUIDs cannot be "lazily resolved" like paths. They need a full view of all assets to answer the question "does this UUID exist". Which means UUIDs require the AssetProcessor to fully finish startup scans before saying an asset doesn't exist. And they essentially require asset pre-processing to use in apps, because scanning all asset metadata files at runtime to resolve a UUID is not viable for medium-to-large apps. It really requires a pre-generated UUID index, which must be loaded before querying for assets.

I personally think this should be investigated in a separate PR. Paths aren't going anywhere ... _everyone_ uses filesystems (and filesystem-like apis) to manage their asset source files. I consider them permanent canonical asset information. Additionally, they behave well for both processed and unprocessed asset modes. Given that Bevy is supporting both, this feels like the right canonical ID to start with. UUIDs (and maybe even other indexed-identifier types) can be added later as necessary.

### Folder / File Naming Conventions

All asset processing config currently lives in the `.imported_assets` folder. The processor transaction log is in `.imported_assets/log`.
Processed assets are added to `.imported_assets/Default`, which will make migrating to processed asset profiles (ex: a `.imported_assets/Mobile` profile) a non-breaking change. It also allows us to create top-level files like `.imported_assets/log` without it being interpreted as an asset. Meta files currently have a `.meta` suffix. Do we like these names and conventions?

### Should the `AssetPlugin::processed_dev` configuration enable `watch_for_changes` automatically?

Currently it does (which I think makes sense), but it does make it the only configuration that enables `watch_for_changes` by default.

### Discuss on_loaded High Level Interface

This PR includes a very rough "proof of concept" `on_loaded` system adapter that uses the `LoadedWithDependencies` event in combination with `asset_server.load_asset` dependency tracking to support this pattern:

```rust
fn main() {
    App::new()
        .init_asset::<MyAssets>()
        .add_systems(Update, on_loaded(spawn_cat))
        .run();
}

#[derive(Asset, Clone)]
struct MyAssets {
    #[dependency]
    picture_of_my_cat: Handle<Image>,
    #[dependency]
    picture_of_my_other_cat: Handle<Image>,
}

impl FromWorld for MyAssets {
    fn from_world(world: &mut World) -> Self {
        // Grab the asset server so the handles can be requested up front.
        let server = world.resource::<AssetServer>();
        Self {
            picture_of_my_cat: server.load("meow.png"),
            picture_of_my_other_cat: server.load("meeeeeeeow.png"),
        }
    }
}

fn spawn_cat(In(my_assets): In<MyAssets>, mut commands: Commands) {
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_cat.clone(),
        ..default()
    });
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_other_cat.clone(),
        ..default()
    });
}
```

The implementation is _very_ rough. And it is currently unsafe because `bevy_ecs` doesn't expose some internals to do this safely from inside `bevy_asset`. There are plenty of unanswered questions like:

* Do we add a `Loadable` derive? (effectively automate the FromWorld implementation above)
* Should `MyAssets` even be an Asset? (largely implemented this way because it elegantly builds on `server.load_asset(MyAsset { .. })` dependency tracking)

We should think hard about what our ideal API looks like (and if this is a pattern we want to support). Not necessarily something we need to solve in this PR. The current `on_loaded` impl should probably be removed from this PR before merging.

## Clarifying Questions

### What about Assets as Entities?

This Bevy Asset V2 proposal implementation initially stored Assets as ECS Entities. Instead of `AssetId<T>` + the `Assets<T>` resource it used `Entity` as the asset id and Asset values were just ECS components. There are plenty of compelling reasons to do this:

1. Easier to inline assets in Bevy Scenes (as they are "just" normal entities + components).
2. More flexible queries: use the power of the ECS to filter assets (ex: `Query<Mesh, With<Tree>>`).
3. Extensible. Users can add arbitrary component data to assets.
4. Things like "component visualization tools" work out of the box to visualize asset data.

However Assets as Entities has a ton of caveats right now:

* We need to be able to allocate entity ids without a direct World reference (aka rework the id allocator in Entities ... I worked around this in my prototypes by just pre-allocating big chunks of entities).
* We want asset change events in addition to ECS change tracking ... how do we populate them when mutations can come from anywhere? Do we use Changed queries? This would require iterating over the change data for all assets every frame. Is this acceptable or should we implement a new "event based" component change detection option?
* Reconciling manually created assets with asset-system managed assets has some nuance (ex: are they "loaded" / do they also have that component metadata?).
* How do we handle "static" / default entity handles? (ties in to the Entity Indices discussion: https://github.com/bevyengine/bevy/discussions/8319). This is necessary for things like "built in" assets and default handles in things like SpriteBundle.
* Storing asset information as a component makes it easy to "invalidate" asset state by removing the component (or forcing modifications). Ideally we have ways to lock this down (some combination of Rust type privacy and ECS validation).

In practice, how we store and identify assets is a reasonably superficial change (porting off of Assets as Entities and implementing dedicated storage + ids took less than a day). So once we sort out the remaining challenges the flip should be straightforward. Additionally, I do still have "Assets as Entities" in my commit history, so we can reuse that work. I personally think "assets as entities" is a good endgame, but it also doesn't provide _significant_ value at the moment and it certainly isn't ready yet with the current state of things.

### Why not Distill?

[Distill](https://github.com/amethyst/distill) is a high quality fully featured asset system built in Rust. It is very natural to ask "why not just use Distill?". It is also worth calling out that for awhile, [we planned on adopting Distill / I signed off on it](https://github.com/bevyengine/bevy/issues/708).

However I think Bevy has a number of constraints that make Distill adoption suboptimal:

* **Architectural Simplicity:**
  * Distill's processor requires an in-memory database (lmdb) and RPC networked API (using Cap'n Proto). Each of these introduces API complexity that increases maintenance burden and "code grokability". Ignoring tests, documentation, and examples, Distill has 24,237 lines of Rust code (including generated code for RPC + database interactions). If you ignore generated code, it has 11,499 lines.
  * Bevy builds the AssetProcessor and AssetServer using pluggable AssetReader/AssetWriter Rust traits with simple io interfaces. They do not necessitate databases or RPC interfaces (although Readers/Writers could use them if that is desired). Bevy Asset V2 (at the time of writing this PR) is 5,384 lines of Rust code (ignoring tests, documentation, and examples). Grain of salt: Distill does have more features currently (ex: Asset Packing, GUIDS, remote-out-of-process asset processor). I do plan to implement these features in Bevy Asset V2 and I personally highly doubt they will meaningfully close the 6,115 lines-of-code gap.
  * This complexity gap (which while illustrated by lines of code, is much bigger than just that) is noteworthy to me. Bevy should be hackable and there are pillars of Distill that are very hard to understand and extend. This is a matter of opinion (and Bevy Asset V2 also has complicated areas), but I think Bevy Asset V2 is much more approachable for the average developer.
  * Necessary disclaimer: counting lines of code is an extremely rough complexity metric. Read the code and form your own opinions.
* **Optional Asset Processing:** Not all Bevy Apps (or Bevy App developers) need / want asset preprocessing. Processing increases the complexity of the development environment by introducing things like meta files, imported asset storage, running processors in the background, waiting for processing to finish, etc. Distill _requires_ preprocessing to work.
  With Bevy Asset V2 processing is fully opt-in. The AssetServer isn't directly aware of asset processors at all. AssetLoaders only care about converting bytes to runtime Assets ... they don't know or care if the bytes were pre-processed or not. Processing is "elegantly" (forgive my self-congratulatory phrasing) layered on top and builds on the existing Asset system primitives.
* **Direct Filesystem Access to Processed Asset State:** Distill stores processed assets in a database. This makes debugging / inspecting the processed outputs harder (either requires special tooling to query the database or they need to be "deployed" to be inspected). Bevy Asset V2, on the other hand, stores processed assets in the filesystem (by default ... this is configurable). This makes interacting with the processed state more natural. Note that both Godot and Unity's new asset system store processed assets in the filesystem.
* **Portability**: Because Distill's processor uses lmdb and RPC networking, it cannot be run on certain platforms (ex: lmdb is a non-rust dependency that cannot run on the web, some platforms don't support running network servers). Bevy should be able to process assets everywhere (ex: run the Bevy Editor on the web, compile + process shaders on mobile, etc). Distill does partially mitigate this problem by supporting "streaming" assets via the RPC protocol, but this is not a full solve from my perspective. And Bevy Asset V2 can (in theory) also stream assets (without requiring RPC, although this isn't implemented yet).

Note that I _do_ still think Distill would be a solid asset system for Bevy. But I think the approach in this PR is a better solve for Bevy's specific "asset system requirements".

### Doesn't async-fs just shim requests to "sync" `std::fs`? What is the point?

"True async file io" has limited / spotty platform support. async-fs (and the rust async ecosystem generally ... ex Tokio) currently use async wrappers over std::fs that offload blocking requests to separate threads. This may feel unsatisfying, but it _does_ still provide value because it prevents our task pools from blocking on file system operations (which would prevent progress when there are many tasks to do, but all threads in a pool are currently blocking on file system ops).

Additionally, using async APIs for our AssetReaders and AssetWriters also provides value because we can later add support for "true async file io" for platforms that support it. _And_ we can implement other "true async io" asset backends (such as networked asset io).

## Draft TODO

- [x] Fill in missing filesystem event APIs: file removed event (which is expressed as dangling RenameFrom events in some cases), file/folder renamed event
- [x] Assets without loaders are not moved to the processed folder. This breaks things like referenced `.bin` files for GLTFs. This should be configurable per-non-asset-type.
- [x] Initial implementation of Reflect and FromReflect for Handle. The "deserialization" parity bar is low here as this only worked with static UUIDs in the old impl ... this is a non-trivial problem. Either we add a Handle::AssetPath variant that gets "upgraded" to a strong handle on scene load or we use a separate AssetRef type for Bevy scenes (which is converted to a runtime Handle on load). This deserves its own discussion in a different pr.
- [x] Populate read_asset_bytes hash when run by the processor (a bit of a special case:
  when run by the processor the processed meta will contain the hash so we don't need to compute it on the spot, but we don't want/need to read the meta when run by the main AssetServer)
- [x] Delay hot reloading: currently filesystem events are handled immediately, which creates timing issues in some cases. For example hot reloading images can sometimes break because the image isn't finished writing. We should add a delay, likely similar to the [implementation in this PR](https://github.com/bevyengine/bevy/pull/8503).
- [x] Port old platform-specific AssetIo implementations to the new AssetReader interface (currently missing Android and web)
- [x] Resolve on_loaded unsafety (either by removing the API entirely or removing the unsafe)
- [x] Runtime loader setting overrides
- [x] Remove remaining unwraps that should be error-handled. There are a number of TODOs here
- [x] Pretty AssetPath Display impl
- [x] Document more APIs
- [x] Resolve spurious "reloading because it has changed" events (to repro run load_gltf with `processed_dev()`)
- [x] load_dependency hot reloading currently only works for processed assets. If processing is disabled, load_dependency changes are not hot reloaded.
- [x] Replace AssetInfo dependency load/fail counters with `loading_dependencies: HashSet<UntypedAssetId>` to prevent reloads from (potentially) breaking counters. Storing this will also enable "dependency reloaded" events (see [Next Steps](#next-steps))
- [x] Re-add filesystem watcher cargo feature gate (currently it is not optional)
- [ ] Migration Guide
- [ ] Changelog

## Followup TODO

- [ ] Replace "eager unchanged processed asset loading" behavior with "don't return unchanged processed assets until dependencies have been checked".
- [ ] Add true `Ignore` AssetAction that does not copy the asset to the imported_assets folder.
- [ ] Finish "live asset unloading" (ex: free up CPU asset memory after uploading an image to the GPU), rethink RenderAssets, and port renderer features. The `Assets` collection uses `Option<T>` for asset storage to support its removal. (1) the Option might not actually be necessary ... might be able to just remove from the collection entirely (2) need to finalize removal apis
- [ ] Try replacing the "channel based" asset id recycling with something a bit more efficient (ex: we might be able to use raw atomic ints with some cleverness)
- [ ] Consider adding UUIDs to processed assets (scoped just to helping identify moved assets ... not exposed to load queries ... see [Next Steps](#next-steps))
- [ ] Store "last modified" source asset and meta timestamps in processed meta files to enable skipping expensive hashing when the file wasn't changed
- [ ] Fix "slow loop" handle drop fix
- [ ] Migrate to TypeName
- [x] Handle "loader preregistration". See #9429

## Next Steps

* **Configurable per-type defaults for AssetMeta**: It should be possible to add configuration like "all png image meta should default to using nearest sampling" (currently this is hard-coded in per-loader/processor `Settings::default()` impls). Also see the "Folder Meta" bullet point.
* **Avoid Reprocessing on Asset Renames / Moves**: See the "canonical asset ids" discussion in [Open Questions](#open-questions) and the relevant bullet point in [Draft TODO](#draft-todo). Even without canonical ids, folder renames could avoid reprocessing in some cases.
* **Multiple Asset Sources**: Expand AssetPath to support "asset source names" and support multiple AssetReaders in the asset server (ex: `webserver://some_path/image.png` backed by an Http webserver AssetReader). The "default" asset reader would use normal `some_path/image.png` paths. Ideally this works in combination with multiple AssetWatchers for hot-reloading.
* **Stable Type Names**: This PR removes the TypeUuid requirement from assets in favor of `std::any::type_name`. This makes defining assets easier (no need to generate a new uuid / use weird proc macro syntax). It also makes reading meta files easier (because things have "friendly names"). We also use type names for components in scene files. If they are good enough for components, they are good enough for assets. And consistency across Bevy pillars is desirable. However, `std::any::type_name` is not guaranteed to be stable (although in practice it is). We've developed a [stable type path](https://github.com/bevyengine/bevy/pull/7184) to resolve this, which should be adopted when it is ready.
* **Command Line Interface**: It should be possible to run the asset processor in a separate process from the command line. This will also require building a network-server-backed AssetReader to communicate between the app and the processor. We've been planning to build a "bevy cli" for awhile. This seems like a good excuse to build it.
* **Asset Packing**: This is largely an additive feature, so it made sense to me to punt this until we've laid the foundations in this PR.
* **Per-Platform Processed Assets**: It should be possible to generate assets for multiple platforms by supporting multiple "processor profiles" per asset (ex: compress with format X on PC and Y on iOS). I think there should probably be arbitrary "profiles" (which can be separate from actual platforms), which are then assigned to a given platform when generating the final asset distribution for that platform. Ex: maybe devs want a "Mobile" profile that is shared between iOS and Android. Or a "LowEnd" profile shared between web and mobile.
* **Versioning and Migrations**: Assets, Loaders, Savers, and Processors need to have versions to determine if their schema is valid. If an asset / loader version is incompatible with the current version expected at runtime, the processor should be able to migrate them. I think we should try using Bevy Reflect for this, as it would allow us to load the old version as a dynamic Reflect type without actually having the old Rust type. It would also allow us to define "patches" to migrate between versions (Bevy Reflect devs are currently working on patching). The `.meta` file already has its own format version. Migrating that to new versions should also be possible.
* **Real Copy-on-write AssetPaths**: Rust's actual Cow (clone-on-write type) currently used by AssetPath can still result in String clones that aren't actually necessary (cloning an Owned Cow clones the contents). Bevy's asset system requires cloning AssetPaths in a number of places, which result in actual clones of the internal Strings. This is not efficient. AssetPath internals should be reworked to exhibit truer cow-like-behavior that reduces String clones to the absolute minimum.
* **Consider processor-less processing**: In theory the AssetServer could run processors "inline" even if the background AssetProcessor is disabled. If we decide this is actually desirable, we could add this. But I don't think it's a priority in the short or medium term.
* **Pre-emptive dependency loading**: We could encode dependencies in processed meta files, which could then be used by the Asset Server to kick off dependency loads as early as possible (prior to starting the actual asset load). Is this desirable? How much time would this save in practice?
* **Optimize Processor With UntypedAssetIds**: The processor exclusively uses AssetPath to identify assets currently. It might be possible to swap these out for UntypedAssetIds in some places, which are smaller / cheaper to hash and compare.
* **One to Many Asset Processing**: An asset source file that produces many assets currently must be processed into a single "processed" asset source. If labeled assets can be written separately they can each have their own configured savers _and_ they could be loaded more granularly. Definitely worth exploring!
* **Automatically Track "Runtime-only" Asset Dependencies**: Right now, tracking "created at runtime" asset dependencies requires adding them via `asset_server.load_asset(StandardMaterial::default())`. I think with some cleverness we could also do this for `materials.add(StandardMaterial::default())`, making tracking work "everywhere". There are challenges here relating to change detection / ensuring the server is made aware of dependency changes. This could be expensive in some cases.
* **"Dependency Changed" events**: Some assets have runtime artifacts that need to be re-generated when one of their dependencies change (ex: regenerate a material's bind group when a Texture needs to change). We are generating the dependency graph so we can definitely produce these events. Buuuuut generating these events will have a cost / they could be high frequency for some assets, so we might want this to be opt-in for specific cases.
* **Investigate Storing More Information In Handles**: Handles can now store arbitrary information, which makes it cheaper and easier to access. How much should we move into them? Canonical asset load states (via atomics)? (`handle.is_loaded()` would be very cool). Should we store the entire asset and remove the `Assets<T>` collection? (`Arc<RwLock<Option<Image>>>`?)
* **Support processing and loading files without extensions**: This is a pretty arbitrary restriction and could be supported with very minimal changes.
* **Folder Meta**: It would be nice if we could define per-folder processor configuration defaults (likely in a `.meta` or `.folder_meta` file). Things like "default to linear filtering for all Images in this folder".
* **Replace async_broadcast with event-listener?**: This might be approximately drop-in for some uses and it feels more lightweight.
* **Support Running the AssetProcessor on the Web**: Most of the hard work is done here, but there are some easy straggling TODOs (make the transaction log an interface instead of a direct file writer so we can write a web storage backend, implement an AssetReader/AssetWriter that reads/writes to something like LocalStorage).
* **Consider identifying and preventing circular dependencies**: This is especially important for "processor dependencies", as processing will silently never finish in these cases.
* **Built-in/Inlined Asset Hot Reloading**: This PR regresses "built-in/inlined" asset hot reloading (previously provided by the DebugAssetServer).
  I'm intentionally punting this because I think it can be cleanly implemented with "multiple asset sources" by registering a "debug asset source" (ex: `debug://bevy_pbr/src/render/pbr.wgsl` asset paths) in combination with an AssetWatcher for that asset source and support for "manually loading paths with asset bytes instead of AssetReaders". The old DebugAssetServer was quite nasty and I'd love to avoid that hackery going forward.
* **Investigate ways to remove double-parsing meta files**: Parsing meta files currently involves parsing once with "minimal" versions of the meta file to extract the type name of the loader/processor config, then parsing again to parse the "full" meta. This is suboptimal. We should be able to define custom deserializers that (1) assume the loader/processor type name comes first (2) dynamically look up the loader/processor registrations to deserialize settings in-line (similar to components in the bevy scene format). Another alternative: deserialize as dynamic Reflect objects and then convert.
* **More runtime loading configuration**: Support using the Handle type as a hint to select an asset loader (instead of relying on AssetPath extensions).
* **More high level Processor trait implementations**: For example, it might be worth adding support for arbitrary chains of "asset transforms" that modify an in-memory asset representation between loading and saving. (ex: load a Mesh, run a `subdivide_mesh` transform, followed by a `flip_normals` transform, then save the mesh to an efficient compressed format).
* **Bevy Scene Handle Deserialization**: (see the relevant [Draft TODO item](#draft-todo) for context)
* **Explore High Level Load Interfaces**: See [this discussion](#discuss-on_loaded-high-level-interface) for one prototype.
* **Asset Streaming**: It would be great if we could stream Assets (ex: stream a long video file piece by piece).
* **ID Exchanging**: In this PR Asset Handles/AssetIds are bigger than they need to be because they have a Uuid enum variant. If we implement an "id exchanging" system that trades Uuids for "efficient runtime ids", we can cut down on the size of AssetIds, making them more efficient. This has some open design questions, such as how to spawn entities with "default" handle values (as these wouldn't have access to the exchange api in the current system).
* **Asset Path Fixup Tooling**: Assets that inline asset paths inside them will break when an asset moves. The asset system provides the functionality to detect when paths break. We should build a framework that enables formats to define "path migrations". This is especially important for scene files. For editor-generated files, we should also consider using UUIDs (see other bullet point) to avoid the need to migrate in these cases.

---------

Co-authored-by: BeastLe9enD <beastle9end@outlook.de>
Co-authored-by: Mike <mike.hsu@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com> |
||
![]() |
02b520b4e8
|
Split ComputedVisibility into two components to allow for accurate change detection and speed up visibility propagation (#9497)
# Objective

Fix #8267. Fixes half of #7840.

The `ComputedVisibility` component contains two flags: hierarchy visibility, and view visibility (whether it's visible to any cameras). Due to the modular and open-ended way that view visibility is computed, it triggers change detection every single frame, even when the value does not change. Since hierarchy visibility is stored in the same component as view visibility, this means that change detection for inherited visibility is completely broken.

At the company I work for, this has become a real issue. We are using change detection to only re-render scenes when necessary. The broken state of change detection for computed visibility means that we have to rely on the non-inherited `Visibility` component for now. This is workable in the early stages of our project, but since we will inevitably want to use the hierarchy, we will have to either:

1. Roll our own solution for computed visibility.
2. Fix the issue for everyone.

## Solution

Split the `ComputedVisibility` component into two: `InheritedVisibility` and `ViewVisibility`. This allows change detection to behave properly for `InheritedVisibility`. View visibility is still erratic, although it is less useful to be able to detect changes for this flavor of visibility.

Overall, this actually simplifies the API. Since the visibility system consists of self-explaining components, it is much easier to document the behavior and usage. This approach is more modular and "ECS-like" -- one could strip out the `ViewVisibility` component entirely if it's not needed, and rely only on inherited visibility.

---

## Changelog

- `ComputedVisibility` has been removed in favor of: `InheritedVisibility` and `ViewVisibility`.

## Migration Guide

The `ComputedVisibility` component has been split into `InheritedVisibility` and `ViewVisibility`. Replace any usages of `ComputedVisibility::is_visible_in_hierarchy` with `InheritedVisibility::get`, and replace `ComputedVisibility::is_visible_in_view` with `ViewVisibility::get`.

```rust
// Before:
commands.spawn(VisibilityBundle {
    visibility: Visibility::Inherited,
    computed_visibility: ComputedVisibility::default(),
});

// After:
commands.spawn(VisibilityBundle {
    visibility: Visibility::Inherited,
    inherited_visibility: InheritedVisibility::default(),
    view_visibility: ViewVisibility::default(),
});
```

```rust
// Before:
fn my_system(q: Query<&ComputedVisibility>) {
    for vis in &q {
        if vis.is_visible_in_hierarchy() {

// After:
fn my_system(q: Query<&InheritedVisibility>) {
    for inherited_visibility in &q {
        if inherited_visibility.get() {
```

```rust
// Before:
fn my_system(q: Query<&ComputedVisibility>) {
    for vis in &q {
        if vis.is_visible_in_view() {

// After:
fn my_system(q: Query<&ViewVisibility>) {
    for view_visibility in &q {
        if view_visibility.get() {
```

```rust
// Before:
fn my_system(mut q: Query<&mut ComputedVisibility>) {
    for vis in &mut q {
        vis.set_visible_in_view();

// After:
fn my_system(mut q: Query<&mut ViewVisibility>) {
    for view_visibility in &mut q {
        view_visibility.set();
```

---------

Co-authored-by: Robert Swain <robert.swain@gmail.com> |
||
![]() |
4f1d9a6315
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
This is a continuation of this PR: #8062

# Objective

- Reorder render schedule sets to allow data preparation when phase item order is known to support improved batching
- Part of the batching/instancing etc plan from here: https://github.com/bevyengine/bevy/issues/89#issuecomment-1379249074
- The original idea came from @inodentry and proved to be a good one. Thanks!
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the new ordering

## Solution

- Move `Prepare` and `PrepareFlush` after `PhaseSortFlush`
- Add a `PrepareAssets` set that runs in parallel with other systems and sets in the render schedule.
  - Put prepare_assets systems in the `PrepareAssets` set
  - If explicit dependencies are needed on Mesh or Material RenderAssets then depend on the appropriate system.
- Add `ManageViews` and `ManageViewsFlush` sets between `ExtractCommands` and `Queue`
- Move `queue_mesh*_bind_group` to the `Prepare` stage
  - Rename them to `prepare_`
- Put systems that prepare resources (buffers, textures, etc.) into a `PrepareResources` set inside `Prepare`
- Put the `prepare_..._bind_group` systems into a `PrepareBindGroup` set after `PrepareResources`
- Move `prepare_lights` to the `ManageViews` set
  - `prepare_lights` creates views and this must happen before `Queue`
  - This system needs refactoring to stop handling all responsibilities: gather lights, sort, and create shadow map views. Store sorted light entities in a resource.
- Remove `BatchedPhaseItem`
  - Replace `batch_range` with `batch_size` representing how many items to skip after rendering the item, or to skip the item entirely if `batch_size` is 0.
- `queue_sprites` has been split into `queue_sprites` for queueing phase items and `prepare_sprites` for batching after the `PhaseSort`
  - `PhaseItem`s are still inserted in `queue_sprites`
  - After sorting, adjacent compatible sprite phase items are accumulated into `SpriteBatch` components on the first entity of each batch, containing a range of vertex indices. The associated `PhaseItem`'s `batch_size` is updated appropriately.
  - `SpriteBatch` items are then drawn, skipping over the other items in the batch based on the value in `batch_size`
- A very similar refactor was performed on `bevy_ui`

---

## Changelog

Changed:
- Reordered and reworked render app schedule sets. The main change is that data is extracted, queued, sorted, and then prepared when the order of data is known.
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the reordering.

## Migration Guide

- Assets such as materials and meshes should now be created in `PrepareAssets`, e.g. `prepare_assets<Mesh>`
- Queueing entities to `RenderPhase`s continues to be done in `Queue`, e.g. `queue_sprites`
- Preparing resources (textures, buffers, etc.) should now be done in `PrepareResources`, e.g. `prepare_prepass_textures`, `prepare_mesh_uniforms`
- Prepare bind groups should now be done in `PrepareBindGroups`, e.g. `prepare_mesh_bind_group`
- Any batching or instancing can now be done in `Prepare` where the order of the phase items is known, e.g. `prepare_sprites`

## Next Steps

- Introduce some generic mechanism to ensure items that can be batched are grouped in the phase item order; currently you could easily have `[sprite at z 0, mesh at z 0, sprite at z 0]` preventing batching.
- Investigate improved orderings for building the MeshUniform buffer
- Implementing batching across the rest of bevy

---------

Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com> |
||
![]() |
5fac1fe0a9
|
Fix temporal jitter bug (#9462)
* Fixed jitter being applied in the wrong coordinate space, leading to aliasing.
* Fixed incorrectly using the cached view_proj instead of accounting for temporal jitter.
* Added a diagram to ensure the coordinate space is clear.

Before:

After:
 |
||
![]() |
724e69bff4
|
Bias texture mipmaps (#7614)
# Objective

- Closes #7323
- Reduce texture blurriness for TAA

## Solution

- Add a `MipBias` component and view uniform.
- Switch material `textureSample()` calls to `textureSampleBias()`.
- Add a `-1.0` bias to TAA.

---

## Changelog

- Added `MipBias` camera component, mostly for internal use.

---------

Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
||
![]() |
f18f28874a
|
Allow tuples and single plugins in add_plugins , deprecate add_plugin (#8097)
# Objective

- Better consistency with `add_systems`.
- Deprecating `add_plugin` in favor of a more powerful `add_plugins`.
- Allow passing `Plugin` to `add_plugins`.
- Allow passing tuples to `add_plugins`.

## Solution

- `App::add_plugins` now takes an `impl Plugins` parameter.
- `App::add_plugin` is deprecated.
- `Plugins` is a new sealed trait that is only implemented for `Plugin`, `PluginGroup` and tuples over `Plugins`.
- All examples, benchmarks and tests are changed to use `add_plugins`, using tuples where appropriate.

---

## Changelog

### Changed

- `App::add_plugins` now accepts all types that implement `Plugins`, which is implemented for:
  - Types that implement `Plugin`.
  - Types that implement `PluginGroup`.
  - Tuples (up to 16 elements) over types that implement `Plugins`.
- Deprecated `App::add_plugin` in favor of `App::add_plugins`.

## Migration Guide

- Replace `app.add_plugin(plugin)` calls with `app.add_plugins(plugin)`.

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
||
![]() |
6ce4bf5181
|
Add RenderTarget::TextureView (#8042)
# Objective

We can currently set `camera.target` to either an `Image` or `Window`. For OpenXR & WebXR we need to be able to render to a `TextureView`. This partially addresses #115 as with the addition we can create internal and external xr crates.

## Solution

A `TextureView` item is added to the `RenderTarget` enum. It holds an id which is looked up by a `ManualTextureViews` resource, much like how `Assets<Image>` works. I believe this approach was first used by @kcking in their [xr fork]( |
||
![]() |
af9c945f40
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)

# Objective
- Add Screen space ambient occlusion (SSAO). SSAO approximates
small-scale, local occlusion of _indirect_ diffuse light between
objects. SSAO does not apply to direct lighting, such as point or
directional lights.
- This darkens creases, e.g. on staircases, and gives nice contact
shadows where objects meet, giving entities a more "grounded" feel.
- Closes https://github.com/bevyengine/bevy/issues/3632.
## Solution
- Implement the GTAO algorithm.
-
https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
-
https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
- Source code heavily based on [Intel's
XeGTAO](
|
||
![]() |
c1fd505f9c
|
Implement Reflect on NoFrustumCulling (#8801)
# Objective `NoFrustumCulling` doesn't implement `Reflect`, while nothing prevents it from implementing it. ## Solution Implement `Reflect` for it. --- ## Changelog - Add `Reflect` derive to `NoFrustrumCulling`. - Add `FromReflect` derive to `Visibility`. |