release-0.15.1
192 Commits

5a37e8accc Cosmetic tweaks to query_gltf_primitives (#16102)
# Objective
This example is really confusing to look at and tell at a glance whether it's broken or not. It displays a strange shape -- a cube with two vertices stretched in a couple of dimensions at an odd angle -- and does its vertex position modification in a way where the intent isn't obvious.

## Solution
- Change the gltf geometry so that the object is a recognizable regular shape
- Change the vertex modification so that the entire cube top is being "lifted" off the cube
- Adjust colors, lighting, and camera location so we can see what's going on
- Also remove some irrelevant shadow and environment map setup

## Before

## After
<img width="1280" alt="image" src="https://github.com/user-attachments/assets/59cab60d-efbc-47c3-8688-e4544b462421">

16b39c2b36 examples(shaders/glsl): Update GLSL Shader Example Camera View uniform (#15865)
# Objective
The Custom Material GLSL shader example has an old version of the camera view uniform structure. This PR updates the example GLSL custom material shader to have the latest structure.

## Solution
I was running into issues using the camera world position (it wasn't changing) and someone in Discord pointed me to the source of truth: `crates/bevy_render/src/view/view.wgsl`. After using this latest uniform structure in my project, I'm now able to work with the camera position in my shader.

## Testing
I tested this change by running the example with:

```bash
cargo run --features shader_format_glsl --example shader_material_glsl
```

<img width="1392" alt="image" src="https://github.com/user-attachments/assets/39fc82ec-ff3b-4864-ad73-05f3a25db483">

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>

754194fd24 Fix invalid scene file for scene example (#15829)
# Objective
Fixes #15828

## Solution
Ran the example to generate `load_scene_example-new.scn.ron` and replaced `load_scene_example.scn.ron` with the contents.

## Testing
`cargo run --example scene`

320d53c1d2 Vary transforms for custom_skinned_mesh example (#15710)
# Objective
Enhance the [custom skinned mesh example](https://bevyengine.org/examples/animation/custom-skinned-mesh/) to show some variety and clarify what the transform does to the mesh.

## Solution
https://github.com/user-attachments/assets/c919db74-6e77-4f33-ba43-0f40a88042b3

Add variety and clarity with the following changes:
- vary transform changes,
- use a UV texture,
- and show transform changes via gizmos.

(Maybe it'd be worth turning on wireframe rendering to show what happens to the mesh. I think it'd be nice visually but might make the code a little noisy.)

## Testing
I exercised it on my x86 macOS computer. It'd be good to have it validated on Windows, Linux, and WASM.

---

## Showcase
- Custom skinned mesh example varies the transform changes and uses a UV test texture.

0094bcbc07 Implement additive blending for animation graphs. (#15631)
*Additive blending* is a ubiquitous feature in game engines that allows animations to be concatenated instead of blended. The canonical use case is to allow a character to hold a weapon while performing arbitrary poses. For example, if you had a character that needed to be able to walk or run while attacking with a weapon, the typical workflow is to have an additive blend node that combines walking and running animation clips with an animation clip of one of the limbs performing a weapon attack animation.

This commit adds support for additive blending to Bevy. It builds on top of the flexible infrastructure in #15589 and introduces a new type of node, the *add node*. Like blend nodes, add nodes combine the animations of their children according to their weights. Unlike blend nodes, however, add nodes don't normalize the weights to 1.0.

The `animation_masks` example has been overhauled to demonstrate the use of additive blending in combination with masks. There are now controls to choose an animation clip for every limb of the fox individually.

This patch also fixes a bug whereby masks were incorrectly accumulated with `insert()` during the graph threading phase, which could cause corruption of computed masks in some cases.

Note that the `clip` field has been replaced with an `AnimationNodeType` enum, which breaks `animgraph.ron` files. The `Fox.animgraph.ron` asset has been updated to the new format.

Closes #14395.

## Showcase
https://github.com/user-attachments/assets/52dfe05f-fdb3-477a-9462-ec150f93df33

## Migration Guide
* The `animgraph.ron` format has changed to accommodate the new *additive blending* feature. You'll need to change `clip` fields to instances of the new `AnimationNodeType` enum.
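As a rough illustration, wiring an add node into a graph might look like the sketch below. The `add_additive_blend` builder name is an assumption drawn from this PR's description, not a confirmed API; verify against your Bevy version.

```rust
use bevy::animation::graph::AnimationGraph;
use bevy::prelude::*;

// Sketch: layer an attack clip additively on top of a base locomotion clip.
fn build_graph(
    locomotion: Handle<AnimationClip>,
    attack: Handle<AnimationClip>,
    graphs: &mut Assets<AnimationGraph>,
) -> Handle<AnimationGraph> {
    let mut graph = AnimationGraph::new();
    // Children of an add node are combined without normalizing weights to 1.0.
    let add_node = graph.add_additive_blend(1.0, graph.root);
    graph.add_clip(locomotion, 1.0, add_node); // base pose
    graph.add_clip(attack, 0.5, add_node); // layered on top at half strength
    graphs.add(graph)
}
```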

40c26f80aa Gpu readback (#15419)
# Objective
Adds a new `Readback` component to request readback of a `Handle<Image>` or `Handle<ShaderStorageBuffer>` to the CPU in a future frame.

## Solution
We track the `Readback` component and allocate a target buffer to write the gpu resource into and map it back asynchronously, which then fires a trigger on the entity in the main world. This process is asynchronous and generally takes a few frames.

## Showcase

```rust
let mut buffer = ShaderStorageBuffer::from(vec![0u32; 16]);
buffer.buffer_description.usage |= BufferUsages::COPY_SRC;
let buffer = buffers.add(buffer);

commands
    .spawn(Readback::buffer(buffer.clone()))
    .observe(|trigger: Trigger<ReadbackComplete>| {
        info!("Buffer data from previous frame {:?}", trigger.event());
    });
```

---------

Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>

78a3aae81b feat(gltf): add name component to gltf mesh primitive (#13912)
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/13473

## Solution
- When a single mesh is assigned multiple materials, it is divided into several primitive nodes, with each primitive assigned a unique material. Previously, these primitives were named using the format `Mesh.index`, which complicates querying. To improve this, we can assign a specific name to each primitive based on the material's name, since each primitive corresponds to exactly one material.

## Testing
- I have included a simple example which shows how to query a material and mesh part based on the new name component.

## Changelog
- Adds a `GltfMaterialName` component to the mesh entity of the gltf primitive node.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
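A minimal sketch of such a query, assuming `GltfMaterialName` is a `String` newtype component as the changelog suggests (the material name "Gold" is hypothetical):

```rust
use bevy::gltf::GltfMaterialName;
use bevy::prelude::*;

// Find the primitive entities whose glTF material was named "Gold".
fn find_gold_primitives(query: Query<(Entity, &GltfMaterialName)>) {
    for (entity, material_name) in &query {
        if material_name.0 == "Gold" {
            info!("found gold primitive: {entity:?}");
        }
    }
}
```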

2ae5a21009 Implement percentage-closer soft shadows (PCSS). (#13497)
[*Percentage-closer soft shadows*] are a technique from 2004 that allows shadows to become blurrier farther from the objects that cast them. It works by introducing a *blocker search* step that runs before the normal shadow map sampling. The blocker search step detects the difference between the depth of the fragment being rasterized and the depth of the nearby samples in the depth buffer. Larger depth differences result in a larger penumbra and therefore a blurrier shadow.

To enable PCSS, fill in the `soft_shadow_size` value in `DirectionalLight`, `PointLight`, or `SpotLight`, as appropriate. This shadow size value represents the size of the light and should be tuned as appropriate for your scene. Higher values result in a wider penumbra (i.e. blurrier shadows).

When using PCSS, temporal shadow maps (`ShadowFilteringMethod::Temporal`) are recommended. If you don't use `ShadowFilteringMethod::Temporal` and instead use `ShadowFilteringMethod::Gaussian`, Bevy will use the same technique as `Temporal`, but the result won't vary over time. This produces a rather noisy result. Doing better would likely require downsampling the shadow map, which would be complex and slower (and would require PR #13003 to land first).

In addition to PCSS, this commit makes the near Z plane for the shadow map configurable on a per-light basis. Previously, it had been hardcoded to 0.1 meters. This change was necessary to make the point light shadow map in the example look reasonable, as otherwise the shadows appeared far too aliased.

A new example, `pcss`, has been added. It demonstrates the percentage-closer soft shadow technique with directional lights, point lights, spot lights, non-temporal operation, and temporal operation. The assets are my original work. Both temporal and non-temporal shadows are rather noisy in the example, and, as mentioned before, this is unavoidable without downsampling the depth buffer, which we can't do yet.

Note also that the shadows don't look particularly great for point lights; the example simply isn't an ideal scene for them. Nevertheless, I felt that the benefits of the ability to do a side-by-side comparison of directional and point lights outweighed the unsightliness of the point light shadows in that example, so I kept the point light feature in.

Fixes #3631.

[*Percentage-closer soft shadows*]: https://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf

## Changelog

### Added
* Percentage-closer soft shadows (PCSS) are now supported, allowing shadows to become blurrier as they stretch away from objects. To use them, set the `soft_shadow_size` field in `DirectionalLight`, `PointLight`, or `SpotLight`, as applicable.
* The near Z value for shadow maps is now customizable via the `shadow_map_near_z` field in `DirectionalLight`, `PointLight`, and `SpotLight`.

## Screenshots
PCSS off:

PCSS on:

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
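A minimal sketch of enabling PCSS on a directional light, using the field names from this changelog. Treat it as an assumption-laden sketch: in released Bevy the `soft_shadow_size` field may sit behind an experimental cargo feature.

```rust
use bevy::prelude::*;

// Sketch: a shadow-casting sun with soft (PCSS) shadows.
fn spawn_sun(mut commands: Commands) {
    commands.spawn((
        DirectionalLight {
            shadows_enabled: true,
            // Size of the light: larger values widen the penumbra.
            soft_shadow_size: Some(2.0),
            // The now-configurable near plane for the shadow map.
            shadow_map_near_z: 0.3,
            ..default()
        },
        Transform::from_xyz(4.0, 8.0, 4.0).looking_at(Vec3::ZERO, Vec3::Y),
    ));
}
```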

09d2292016 Add a border to the UI material example (#15120)
# Objective
There aren't any examples of how to draw a ui material with borders.

## Solution
Add border rendering to the `ui_material` example's shader.

## Showcase
<img width="395" alt="bordermat" src="https://github.com/user-attachments/assets/109c59c1-f54b-4542-96f7-acff63f5057f">

---------

Co-authored-by: charlotte <charlotte.c.mcelwain@gmail.com>

8ac745ab10 UI texture slice texture flipping reimplementation (#15034)
# Objective
Fixes #15032

## Solution
Reimplement support for the `flip_x` and `flip_y` fields. This doesn't flip the border geometry; I'm not really sure whether that is desirable or not. Also fixes a bug that was causing the side and center slices to tile incorrectly.

### Testing

```
cargo run --example ui_texture_slice_flip_and_tile
```

## Showcase
<img width="787" alt="nearest" src="https://github.com/user-attachments/assets/bc044bae-1748-42ba-92b5-0500c87264f6">

With tiling, you need to use nearest filtering to avoid bleeding between the slices.

---------

Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>

d2624765d0 Implement animation masks, allowing fine control of the targets that animations affect. (#15013)
This commit adds support for *masks* to the animation graph. A mask is a set of animation targets (bones) that neither a node nor its descendants are allowed to animate. Animation targets can be assigned one or more *mask groups*, which are specific to a single graph. If a node masks out any mask group that an animation target belongs to, animation curves for that target will be ignored during evaluation.

The canonical use case for masks is to support characters holding objects. Typically, character animations will contain hand animations in the case that the character's hand is empty. (For example, running animations may close a character's fingers into a fist.) However, when the character is holding an object, the animation must be altered so that the hand grips the object.

Bevy currently has no convenient way to handle this. The only workaround that I can see is to have entirely separate animation clips for characters' hands and bodies and keep them in sync, which is burdensome and doesn't match artists' expectations from other engines, which all effectively have support for masks. However, with mask group support, this task is simple. We assign each hand to a mask group and parent all character animations to a node. When a character grasps an object in hand, we position the fingers as appropriate and then enable the mask group for that hand in that node. This allows the character's animations to run normally, while the object remains correctly attached to the hand. (A sketch of this appears after the migration guide below.)

Note that even with this PR, we won't have support for running separate animations for a character's hand and the rest of the character. This is because we're missing additive blending: there's no way to combine the two masked animations together properly. I intend that to be a follow-up PR.

The major engines all have support for masks, though the workflow varies from engine to engine:

* Unity has support for masks [essentially as implemented here], though with layers instead of a tree. However, when using the Mecanim ("Humanoid") feature, precise control over bones is lost in favor of predefined muscle groups.
* Unreal has a feature named [*layered blend per bone*]. This allows for separate blend weights for different bones, effectively achieving masks. I believe that the combination of blend nodes and masks makes Bevy's animation graph as expressive as that of Unreal, once we have support for additive blending, though you may have to use more nodes than you would in Unreal. Moreover, separating out the concepts of "blend weight" and "which bones this node applies to" seems like a cleaner design than what Unreal has.
* Godot's `AnimationTree` has the notion of [*blend filters*], which are essentially the same as masks as implemented in this PR.

Additionally, this patch fixes a bug with weight evaluation whereby weights weren't properly propagated down to grandchildren, because the weight evaluation for a node only checked its parent's weight, not its evaluated weight. I considered submitting this as a separate PR, but given that this PR refactors that code entirely to support masks and weights under a unified "evaluated node" concept, I simply included the fix here.

A new example, `animation_masks`, has been added. It demonstrates how to toggle masks on and off for specific portions of a skin.

This is part of #14395, but I'm going to defer closing that issue until we have additive blending.

[essentially as implemented here]: https://docs.unity3d.com/560/Documentation/Manual/class-AvatarMask.html
[*layered blend per bone*]: https://dev.epicgames.com/documentation/en-us/unreal-engine/using-layered-animations-in-unreal-engine
[*blend filters*]: https://docs.godotengine.org/en/stable/tutorials/animation/animation_tree.html

## Migration Guide
* The serialized format of animation graphs has changed with the addition of animation masks. To upgrade animation graph RON files, add `mask` and `mask_groups` fields as appropriate. (They can be safely set to zero.)
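The sketch promised above. `add_target_to_mask_group` and the per-node `mask` bitfield follow this PR's description; verify both names against your Bevy version before relying on them.

```rust
use bevy::animation::{graph::AnimationGraph, AnimationTargetId};

// Sketch: assign a hand's animation target to mask group 0, then mask that
// group out on the graph's root node so the graph stops animating the hand.
fn mask_out_left_hand(graph: &mut AnimationGraph, left_hand: AnimationTargetId) {
    const LEFT_HAND_GROUP: u32 = 0;
    graph.add_target_to_mask_group(left_hand, LEFT_HAND_GROUP);
    let root = graph.root;
    if let Some(node) = graph.get_mut(root) {
        node.mask |= 1 << LEFT_HAND_GROUP; // bit N masks out mask group N
    }
}
```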

a4640046fc Adds ShaderStorageBuffer asset (#14663)
Adds a new `Handle<Storage>` asset type that can be used as a render asset, particularly for use with `AsBindGroup`.

Closes: #13658

# Objective
Allow users to create storage buffers in the main world without having to access the `RenderDevice`. While this resource is technically available, it's bad form to use in the main world and requires mixing rendering details with main world code. Additionally, this makes storage buffers easier to use with `AsBindGroup`, particularly in the following scenarios:
- Sharing the same buffers between a compute stage and material shader. We already have examples of this for storage textures (see the game of life example), and these changes allow a similar pattern to be used with storage buffers.
- Preventing repeated gpu upload (see the previous, easier-to-use `Vec` `AsBindGroup` option).
- Allowing custom materials to be initialized with `Default`. Previously, the lack of a `Default` implementation for the raw `wgpu::Buffer` type made implementing an `AsBindGroup + Default` bound difficult in the presence of buffers.

## Solution
Adds a new `Handle<Storage>` asset type that is prepared into a `GpuStorageBuffer` render asset. This asset can either be initialized with a `Vec<u8>` of properly aligned data or with a size hint. Users can modify the underlying `wgpu::BufferDescriptor` to provide additional usage flags.

## Migration Guide
The `AsBindGroup` `storage` attribute has been modified to reference the new `Handle<Storage>` asset instead. Usages of `Vec` should be converted into assets instead.

---------

Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
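A minimal sketch of a material binding such a buffer asset. The `#[storage(0, read_only)]` attribute form and the `bevy::render::storage` path are assumptions based on this PR's description, and the shader path is a placeholder.

```rust
use bevy::prelude::*;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};
use bevy::render::storage::ShaderStorageBuffer;

// A material whose storage binding is a main-world buffer asset. Deriving
// `Default` now works because `Handle` is `Default`, per this PR's objective.
#[derive(Asset, TypePath, AsBindGroup, Clone, Default)]
struct ColorsMaterial {
    #[storage(0, read_only)]
    colors: Handle<ShaderStorageBuffer>,
}

impl Material for ColorsMaterial {
    fn fragment_shader() -> ShaderRef {
        "shaders/colors_material.wgsl".into() // placeholder asset path
    }
}
```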

510fce9af3 Allow fog density texture to be scrolled over time with an offset (#14868)
# Objective
- The goal of this PR is to make it possible to move the density texture of a `FogVolume` over time in order to create dynamic effects like fog moving in the wind.
- You could theoretically move the `FogVolume` itself, but this is not ideal, because the `FogVolume` AABB would eventually leave the area. If you want an area to remain foggy while also creating the impression that the fog is moving in the wind, a scrolling density texture is a better solution.

## Solution
- The PR adds a `density_texture_offset` field to the `FogVolume` component. This offset is in the UVW coordinates of the density texture, meaning that a value of `(0.5, 0.0, 0.0)` moves the 3d texture by half along the x-axis.
- Values above 1.0 are wrapped; a 1.5 offset is the same as a 0.5 offset. This makes the density texture wrap around on the other side, meaning that a repeating 3d noise texture can seamlessly scroll forever. It also makes it easy to move the density texture over time by simply increasing the offset every frame, as sketched below.

## Testing
- A `scrolling_fog` example has been added to demonstrate the feature. It uses the offset to scroll a repeating 3d noise density texture to create the impression of fog moving in the wind.
- The camera is looking at a pillar with the sun peeking behind it. This highlights the effect the changing density has on the volumetric lighting interactions.
- Temporal anti-aliasing combined with the `jitter` option of `VolumetricFogSettings` is used to improve the quality of the effect.

---

## Showcase
https://github.com/user-attachments/assets/3aa50ebd-771c-4c99-ab5d-255c0c3be1a8
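The per-frame scroll mentioned above might look like this minimal sketch, assuming the `bevy::pbr::FogVolume` path and Bevy 0.15's `Time::delta_secs`:

```rust
use bevy::pbr::FogVolume;
use bevy::prelude::*;

// Scroll the density texture a little each frame to fake wind. Offsets are in
// UVW space and wrap past 1.0, so a repeating noise texture scrolls forever.
fn scroll_fog(time: Res<Time>, mut volumes: Query<&mut FogVolume>) {
    for mut volume in &mut volumes {
        volume.density_texture_offset += Vec3::new(0.0, 0.0, 0.04) * time.delta_secs();
    }
}
```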

bfcb19a871 Add example showing how to use SpecializedMeshPipeline (#14370)
# Objective
- A lot of mid-level rendering apis are hard to figure out because they don't have any examples.
- `SpecializedMeshPipeline` can be really useful in some cases when you want more flexibility than a `Material` without having to go to low level apis.

## Solution
- Add an example showing how to make a custom `SpecializedMeshPipeline`.

---

## Showcase
The example just spawns 3 triangles in a triangle pattern.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>

20c6bcdba4 Allow volumetric fog to be localized to specific, optionally voxelized, regions. (#14099)
Currently, volumetric fog is global and affects the entire scene uniformly. This is inadequate for many use cases, such as local smoke effects. To address this problem, this commit introduces *fog volumes*, which are axis-aligned bounding boxes (AABBs) that specify fog parameters inside their boundaries. Such volumes can also specify a *density texture*, a 3D texture of voxels that specifies the density of the fog at each point.

To create a fog volume, add a `FogVolume` component to an entity (which is included in the new `FogVolumeBundle` convenience bundle). Like light probes, a fog volume is conceptually a 1×1×1 cube centered on the origin; a transform can be used to position and resize this region. Many of the fields on the existing `VolumetricFogSettings` have migrated to the new `FogVolume` component. `VolumetricFogSettings` on a camera is still needed to enable volumetric fog, but by itself it is no longer sufficient; a `FogVolume` must be present. Applications that wish to retain the old global fog behavior can simply surround the scene with a large fog volume.

By way of implementation, this commit converts the volumetric fog shader from a full-screen shader to one applied to a mesh. The strategy differs depending on whether the camera is inside or outside the fog volume. If the camera is inside the fog volume, the mesh is simply a plane scaled to the viewport, effectively falling back to a full-screen pass. If the camera is outside the fog volume, the mesh is a cube transformed to coincide with the boundaries of the fog volume's AABB. Importantly, in the latter case, only the front faces of the cuboid are rendered. Instead of treating the boundaries of the fog as a sphere centered on the camera position, as we did prior to this patch, we raytrace the far planes of the AABB to determine the portion of each ray contained within the fog volume. We then raymarch in shadow map space as usual. If a density texture is present, we modulate the fixed density value with the trilinearly-interpolated value from that texture.

Furthermore, this patch introduces optional jitter to fog volumes, intended for use with TAA. This modifies the position of the ray from frame to frame using interleaved gradient noise, in order to reduce aliasing artifacts. Many implementations of volumetric fog in games use this technique. Note that this patch makes no attempt to write a motion vector; this is because when a view ray intersects multiple voxels there's no single direction of motion. Consequently, fog volumes can have ghosting artifacts, but because fog is "ghostly" by its nature, these artifacts are less objectionable than they would be for opaque objects.

A new example, `fog_volumes`, has been added. It demonstrates a single fog volume containing a voxelized representation of the Stanford bunny. The existing `volumetric_fog` example has been updated to use the new local volumetrics API.

## Changelog

### Added
* Local `FogVolume`s are now supported, to localize fog to specific regions. They can optionally have 3D density voxel textures for precise control over the distribution of the fog.

### Changed
* `VolumetricFogSettings` on a camera no longer enables volumetric fog; instead, it simply enables the processing of `FogVolume`s within the scene.

## Migration Guide
* A `FogVolume` is now necessary in order to enable volumetric fog, in addition to `VolumetricFogSettings` on the camera. Existing uses of volumetric fog can be migrated by placing a large `FogVolume` surrounding the scene.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
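A minimal setup sketch, using this PR's component names (Bevy 0.15 later renamed `VolumetricFogSettings` to `VolumetricFog`):

```rust
use bevy::pbr::{FogVolume, VolumetricFogSettings};
use bevy::prelude::*;

// A camera that processes fog volumes, plus one volume scaled to cover an
// 8x8x8 meter region of the scene.
fn setup(mut commands: Commands) {
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));
    commands.spawn((
        FogVolume::default(),
        SpatialBundle::from_transform(Transform::from_scale(Vec3::splat(8.0))),
    ));
}
```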

011f71a245 Update ui_material example to be a slider instead (#14031)
# Objective
- Some people have asked how to do image masking in UI. It's pretty easy to do using a `UiMaterial`, assuming you know how to write shaders.

## Solution
- Update the `ui_material` example to show the bevy banner slowly being revealed, like a progress bar.

## Notes
I'm not entirely sure if we want this or not. People who would be comfortable using this for their own games have probably already figured out how to do it, and for people who aren't familiar with shaders this isn't really enough to make an actual slider/progress bar.

---------

Co-authored-by: François Mockers <francois.mockers@vleue.com>

44db8b7fac Allow phase items not associated with meshes to be binned. (#14029)
As reported in #14004, many third-party plugins, such as Hanabi, enqueue entities that don't have meshes into render phases. However, the introduction of indirect mode added a dependency on mesh-specific data, breaking this workflow. This is because GPU preprocessing requires that the render phases manage indirect draw parameters, which don't apply to objects that aren't meshes. The existing code skips over binned entities that don't have indirect draw parameters, which causes rendering to be skipped for such objects.

To support this workflow, this commit adds a new field, `non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple list of (bin key, entity) pairs. After drawing batchable and unbatchable objects, the non-mesh items are drawn one after another. Bevy itself doesn't enqueue any items into this list; it exists solely for the application and/or plugins to use.

Additionally, this commit switches the asset ID in the standard bin keys to be an untyped asset ID rather than that of a mesh. This allows more flexibility, allowing bins to be keyed off any type of asset.

This patch adds a new example, `custom_phase_item`, which simultaneously serves to demonstrate how to use this new feature and to act as a regression test so this doesn't break again.

Fixes #14004.

## Changelog

### Added
* `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins to add custom items to.

4d3f43131e Use a ship in Transform::align example (#13935)
# Objective
The documentation for [`Transform::align`](https://docs.rs/bevy/0.14.0-rc.3/bevy/transform/components/struct.Transform.html#method.align) mentions a hypothetical ship model. Showing this concretely would be a nice improvement over using a cube.

> For example, if a spaceship model has its nose pointing in the X-direction in its own local coordinates and its dorsal fin pointing in the Y-direction, then align(Dir3::X, v, Dir3::Y, w) will make the spaceship's nose point in the direction of v, while the dorsal fin does its best to point in the direction w.

## Solution
This commit makes the ship less hypothetical by using a kenney ship model in the example. The local axes for the ship needed to change to accommodate the gltf, so the hypothetical in the documentation and this example's local axes don't necessarily match. Docs use `align(Dir3::X, v, Dir3::Y, w)` and this example now uses `(Vec3::NEG_Z, *first, Vec3::X, *second)`.

I manually modified the `craft_speederD` Node's `translation` to be 0,0,0 in the gltf file, which means it now differs from kenney's original model.

Original ship from: https://kenney.nl/assets/space-kit

## Testing

```
cargo run --example align
```

c50a4d8821 Remove unused mip_bias parameter from apply_normal_mapping (#13752)
Mip bias is no longer used here.

df8ccb8735 Implement PBR anisotropy per KHR_materials_anisotropy. (#13450)
This commit implements support for physically-based anisotropy in Bevy's `StandardMaterial`, following the specification for the [`KHR_materials_anisotropy`] glTF extension.

[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a PBR feature that allows roughness to vary along the tangent and bitangent directions of a mesh. In effect, this causes the specular light to stretch out into lines instead of a round lobe. This is useful for modeling brushed metal, hair, and similar surfaces. Support for anisotropy is a common feature in major game and graphics engines; Unity, Unreal, Godot, three.js, and Blender all support it to varying degrees.

Two new parameters have been added to `StandardMaterial`: `anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength, which ranges from 0 to 1, represents how much the roughness differs between the tangent and the bitangent of the mesh. In effect, it controls how stretched the specular highlight is. Anisotropy rotation allows the roughness direction to differ from the tangent of the model.

In addition to these two fixed parameters, an *anisotropy texture* can be supplied. Such a texture should be a 3-channel RGB texture, where the red and green values specify a direction vector using the same conventions as a normal map ([0, 1] color values map to [-1, 1] vector values), and the blue value represents the strength. This matches the format that the [`KHR_materials_anisotropy`] specification requires. Such textures should be loaded as linear and not sRGB. Note that this texture does consume one additional texture binding in the standard material shader.

The glTF loader has been updated to properly parse the `KHR_materials_anisotropy` extension.

A new example, `anisotropy`, has been added. This example loads and displays the barn lamp example from the [`glTF-Sample-Assets`] repository. Note that the textures were rather large, so I shrunk them down and converted them to a mixture of JPEG and KTX2 format, in the interests of saving space in the Bevy repository.

[*Anisotropy*]: https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel
[anisotropic filtering]: https://en.wikipedia.org/wiki/Anisotropic_filtering
[`KHR_materials_anisotropy`]: https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md
[`glTF-Sample-Assets`]: https://github.com/KhronosGroup/glTF-Sample-Assets/

## Changelog

### Added
* Physically-based anisotropy is now available for materials, which enhances the look of surfaces such as brushed metal or hair. glTF scenes can use the new feature with the `KHR_materials_anisotropy` extension.

## Screenshots
With anisotropy:

Without anisotropy:
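A small sketch of the two fixed parameters on `StandardMaterial`, with illustrative values (the field names follow this PR's description):

```rust
use bevy::prelude::*;

// A brushed-metal-style material. An optional anisotropy texture can further
// vary the direction and strength across the surface.
fn brushed_metal(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        base_color: Color::srgb(0.8, 0.8, 0.9),
        metallic: 1.0,
        perceptual_roughness: 0.4,
        anisotropy_strength: 0.8, // 0 = isotropic, 1 = fully stretched highlight
        anisotropy_rotation: 0.5, // rotate the direction away from the tangent
        ..default()
    })
}
```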

9b9d3d81cb Normalise matrix naming (#13489)
# Objective
- Fixes #10909
- Fixes #8492

## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.

## Testing
- I've tested most of the 3D examples. The `lighting` example particularly should hit a lot of the changes and appears to run fine.

---

## Changelog
- Renamed matrices across the engine to follow an `x_from_y` naming, making the space conversion more obvious.

## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far` and `from_view_projection_no_far` were renamed to `from_clip_from_world`, `from_clip_from_world_custom_far` and `from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to `clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to `get_clip_from_view` (this affects implementations on `Projection`, `PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to `from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and `clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`, `inverse_view_proj`, `view`, `inverse_view`, `projection` and `inverse_projection` were renamed to `clip_from_world`, `unjittered_clip_from_world`, `world_from_clip`, `world_from_view`, `view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to `clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to `world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`, `inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed to `world_from_local`, `previous_world_from_local`, `local_from_world_transpose_a` and `local_from_world_transpose_b` (the `Mesh` type in WGSL mirrors this, however `transform` and `previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed to `world_from_local`, `local_from_world_transpose_a` and `local_from_world_transpose_b` (the `Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and `get_previous_model_matrix` were renamed to `get_world_from_local` and `get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed to `get_world_from_local`.

d26900a9ea add handling of all missing gltf extras: scene, mesh & materials (#13453)
# Objective
- Fixes #4823

## Solution
As outlined in the discussion in the linked issue as the best current solution, this PR adds specific GltfExtras for:
- scenes
- meshes
- materials

As it is, this is not a breaking change. I hesitated to rename the current `GltfExtras` component to `PrimitiveGltfExtras`, but that would result in a breaking change and might be a bit confusing as to what "primitive" refers to.

## Testing
- I included a bare-bones example & asset (exported gltf file from Blender) with gltf extras at all the relevant levels: scene, mesh, material

---

## Changelog
- adds `SceneGltfExtras`, injected at the scene level if any
- adds `MeshGltfExtras`, injected at the mesh level if any
- adds `MaterialGltfExtras`, injected at the mesh level if any: i.e. if a mesh has a material that has gltf extras, the component will be injected there.

f398674e51 Implement opt-in sharp screen-space reflections for the deferred renderer, with improved raymarching code. (#13418)
This commit, a revamp of #12959, implements screen-space reflections (SSR), which approximate real-time reflections based on raymarching through the depth buffer and copying samples from the final rendered frame. This patch is a relatively minimal implementation of SSR, so as to provide a flexible base on which to customize and build in the future. However, it's based on the production-quality [raymarching code by Tomasz Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a general basic overview of screen-space reflections, see [1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html). The raymarching shader uses the basic algorithm of tracing forward in large steps, refining that trace in smaller increments via binary search, and then using the secant method. No temporal filtering or roughness blurring is performed at all; for this reason, SSR currently only operates on very shiny surfaces. No acceleration via the hierarchical Z-buffer is implemented (though note that https://github.com/bevyengine/bevy/pull/12899 will add the infrastructure for this). Reflections are traced at full resolution, which is often considered slow. All of these improvements and more can be follow-ups.

SSR is built on top of the deferred renderer and is currently only supported in that mode. Forward screen-space reflections are possible albeit uncommon (though e.g. *Doom Eternal* uses them); however, they require tracing from the previous frame, which would add complexity. This patch leaves the door open to implementing SSR in the forward rendering path but doesn't itself have such an implementation.

Screen-space reflections aren't supported in WebGL 2, because they require sampling from the depth buffer, which Naga can't do because of a bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`; this is the same reason why depth of field is disabled on that platform).

To add screen-space reflections to a camera, use the `ScreenSpaceReflectionsBundle` bundle or the `ScreenSpaceReflectionsSettings` component. In addition to `ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass` must also be present for the reflections to show up. The `ScreenSpaceReflectionsSettings` component contains several settings that artists can tweak, and also comes with sensible defaults.

A new example, `ssr`, has been added. It's loosely based on the [three.js ocean sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all the assets are original. Note that the three.js demo has no screen-space reflections and instead renders a mirror world. In contrast to #12959, this demo tests not only a cube but also a more complex model (the flight helmet).

## Changelog

### Added
* Screen-space reflections can be enabled for very smooth surfaces by adding the `ScreenSpaceReflections` component to a camera. Deferred rendering must be enabled for the reflections to appear.
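A minimal camera setup sketch, using this PR's component names (later Bevy versions shorten the component to `ScreenSpaceReflections`):

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsSettings;
use bevy::prelude::*;

// SSR rides on the deferred renderer, so both prepasses go on the camera
// alongside the settings component.
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        ScreenSpaceReflectionsSettings::default(),
        DepthPrepass,
        DeferredPrepass,
    ));
}
```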

5a1c62faae fix lava emissive strength in depth of field example (#13449)
# Objective
- Since #13350 the lava is way too bright.

## Solution
- Change the emissive value in the material in the glb file from 20000 to 20.

## Testing
- Run the example `depth_of_field`

19bfa41768 Implement volumetric fog and volumetric lighting, also known as light shafts or god rays. (#13057)
This commit implements a more physically-accurate, but slower, form of fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog* allows for light beams from directional lights to shine through, creating what is known as *light shafts* or *god rays*.

To add volumetric fog to a scene, add `VolumetricFogSettings` to the camera, and add `VolumetricLight` to directional lights that you wish to be volumetric. `VolumetricFogSettings` has numerous settings that allow you to define the accuracy of the simulation, as well as the look of the fog. Currently, only interaction with directional lights that have shadow maps is supported. Note that the overhead of the effect scales directly with the number of directional lights in use, so apply `VolumetricLight` sparingly for the best results.

The overall algorithm, which is implemented as a postprocessing effect, is a combination of the techniques described in [Scratchapixel] and [this blog post]. It uses raymarching in screen space, transformed into shadow map space for sampling and combined with physically-based modeling of absorption and scattering. Bevy employs the widely-used [Henyey-Greenstein phase function] to model asymmetry; this essentially allows light shafts to fade into and out of existence as the user views them.

Volumetric rendering is a huge subject, and I deliberately kept the scope of this commit small. Possible follow-ups include:

1. Raymarching at a lower resolution.
2. A post-processing blur (especially useful when combined with (1)).
3. Supporting point lights and spot lights.
4. Supporting lights with no shadow maps.
5. Supporting irradiance volumes and reflection probes.
6. Voxel components that reuse the volumetric fog code to create voxel shapes.
7. *Horizon: Zero Dawn*-style clouds.

These are all useful, but out of scope of this patch for now, to keep things tidy and easy to review.

A new example, `volumetric_fog`, has been added to demonstrate the effect.

## Changelog

### Added
* A new component, `VolumetricFog`, is available, to allow for a more physically-accurate, but more resource-intensive, form of fog.
* A new component, `VolumetricLight`, can be placed on directional lights to make them interact with `VolumetricFog`. Notably, this allows such lights to emit light shafts/god rays.

[Scratchapixel]: https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/
[Henyey-Greenstein phase function]: https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
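A minimal setup sketch following the description above, using this PR's component names:

```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

// Volumetric fog goes on the camera; each shadow-casting directional light
// that should produce light shafts gets `VolumetricLight`. Overhead scales
// with the number of volumetric lights.
fn setup(mut commands: Commands) {
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true, // required: raymarching samples the shadow map
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```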

df31b808c3 Implement fast depth of field as a postprocessing effect. (#13009)
This commit implements the [depth of field] effect, simulating the blur of objects out of focus of the virtual lens. Either the [hexagonal bokeh] effect or a faster Gaussian blur may be used. In both cases, the implementation is a simple separable two-pass convolution. This is not the most physically-accurate real-time bokeh technique that exists; Unreal Engine has [a more accurate implementation] of "cinematic depth of field" from 2018. However, it's simple, and most engines provide something similar as a fast option, often called "mobile" depth of field.

The general approach is outlined in [a blog post from 2017]. We take advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs are *separable*. This means that their 2D kernels can be reduced to a small number of 1D kernels applied one after another, asymptotically reducing the amount of work that has to be done. Gaussian blurs can be accomplished by blurring horizontally and then vertically, while hexagonal bokeh blurs can be done with a vertical blur plus a diagonal blur, plus two diagonal blurs. In both cases, only two passes are needed. Bokeh requires the first pass to have a second render target and requires two subpasses in the second pass, which decreases its performance relative to the Gaussian blur.

The bokeh blur is generally more aesthetically pleasing than the Gaussian blur, as it simulates the effect of a camera more accurately. The shape of the bokeh circles is determined by the number of blades of the aperture. In our case, we use a hexagon, which is usually considered specific to lower-quality cameras. (This is a downside of the fast hexagon approach compared to the higher-quality approaches.) The blur amount is generally specified by the [f-number], which we use to compute the focal length from the film size and FOV. By default, we simulate standard cinematic cameras of f/1 and [Super 35]. The developer can customize these values as desired.

A new example has been added to demonstrate depth of field. It allows customization of the mode (Gaussian vs. bokeh), focal distance and f-numbers. The test scene is inspired by a [blog post on depth of field in Unity]; however, the effect is implemented in a completely different way from that blog post, and all the assets (textures, etc.) are original.

[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field
[hexagonal bokeh]: https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[a more accurate implementation]: https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04
[a blog post from 2017]: https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[f-number]: https://en.wikipedia.org/wiki/F-number
[Super 35]: https://en.wikipedia.org/wiki/Super_35
[blog post on depth of field in Unity]: https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/

## Changelog

### Added
* A depth of field postprocessing effect is now available, to simulate objects being out of focus of the camera. To use it, add `DepthOfFieldSettings` to an entity containing a `Camera3d` component.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
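A small sketch of the settings described above; names follow this PR (`DepthOfFieldSettings` was later renamed to `DepthOfField`), and the `dof` module path is an assumption:

```rust
use bevy::core_pipeline::dof::{DepthOfFieldMode, DepthOfFieldSettings};
use bevy::prelude::*;

// Focus 10 m out with a wide f/1.4 aperture for a pronounced hexagonal bokeh blur.
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        DepthOfFieldSettings {
            mode: DepthOfFieldMode::Bokeh,
            focal_distance: 10.0,
            aperture_f_stops: 1.4,
            ..default()
        },
    ));
}
```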

77ed72bc16 Implement clearcoat per the Filament and the KHR_materials_clearcoat specifications. (#13031)
Clearcoat is a separate material layer that represents a thin translucent layer of a material. Examples include (from the [Filament spec]) car paint, soda cans, and lacquered wood. This commit implements support for clearcoat following the Filament and Khronos specifications, marking the beginnings of support for multiple PBR layers in Bevy.

The [`KHR_materials_clearcoat`] specification describes the clearcoat support in glTF. In Blender, applying a clearcoat to the Principled BSDF node causes the clearcoat settings to be exported via this extension. As of this commit, Bevy parses and reads the extension data when present in glTF. Note that the `gltf` crate has no support for `KHR_materials_clearcoat`; this patch therefore implements the JSON semantics manually.

Clearcoat is integrated with `StandardMaterial`, but the code is behind a series of `#ifdef`s that only activate when clearcoat is present. Additionally, the `pbr_multi_layer_material_textures` Cargo feature must be active in order to enable support for clearcoat factor maps, clearcoat roughness maps, and clearcoat normal maps. This approach mirrors the same pattern used by the existing transmission feature and exists to avoid running out of texture bindings on platforms like WebGL and WebGPU. Note that constant clearcoat factors and roughness values *are* supported in the browser; only the relatively-less-common maps are disabled on those platforms.

This patch refactors the lighting code in `StandardMaterial` significantly in order to better support multiple layers in a natural way. That code was due for a refactor in any case, so this is a nice improvement.

A new demo, `clearcoat`, has been added. It's based on [the corresponding three.js demo], but all the assets (aside from the skybox and environment map) are my original work.

[Filament spec]: https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel
[`KHR_materials_clearcoat`]: https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md
[the corresponding three.js demo]: https://threejs.org/examples/webgl_materials_physical_clearcoat.html

## Changelog

### Added
* `StandardMaterial` now supports a clearcoat layer, which represents a thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension, representing materials with clearcoat layers.

## Migration Guide
* The lighting functions in the `pbr_lighting` WGSL module now have clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.
* The `R` reflection vector parameter has been removed from some lighting functions, as it was unused.
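A small sketch of the constant-factor path, which works on all platforms (field names follow this PR's description):

```rust
use bevy::prelude::*;

// A car-paint-style material: a rough metallic base under a glossy coat.
// The texture-map variants additionally need the multi-layer material
// textures cargo feature mentioned above.
fn car_paint(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        base_color: Color::srgb(0.7, 0.1, 0.1),
        metallic: 0.9,
        perceptual_roughness: 0.5,
        clearcoat: 1.0,                      // strength of the coat layer
        clearcoat_perceptual_roughness: 0.1, // a smooth lacquer
        ..default()
    })
}
```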

088960f597 Example with repeated texture (#13176)
# Objective
Fixes #11136. Fixes https://github.com/bevyengine/bevy/pull/11161.

## Solution
- Set the image sampler to repeated mode for u and v
- Set the `uv_transform` of `StandardMaterial` to the resizing params

## Testing
Running the example produces this view.

6027890a11 move wgsl color operations from bevy_pbr to bevy_render (#13209)
# Objective
The `bevy_pbr/utils.wgsl` shader file contains mathematical constants and color conversion functions. Both of those should be accessible without enabling the `bevy_pbr` feature. For example, tonemapping can be done in a non-pbr scenario, and it uses color conversion functions.

Fixes #13207

## Solution
* Move mathematical constants (such as PI, E) from `bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Move color conversion functions from `bevy_pbr/src/render/utils.wgsl` into the new file `bevy_render/src/color_operations.wgsl`

## Testing
Ran multiple examples, checked they are working:
* tonemapping
* color_grading
* 3d_scene
* animated_material
* deferred_rendering
* 3d_shapes
* fog
* irradiance_volumes
* meshlet
* parallax_mapping
* pbr
* reflection_probes
* shadow_biases
* 2d_gizmos
* light_gizmos

---

## Changelog
* Moved mathematical constants (such as PI, E) from `bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Moved color conversion functions from `bevy_pbr/src/render/utils.wgsl` into the new file `bevy_render/src/color_operations.wgsl`

## Migration Guide
In user shader code, replace usage of mathematical constants from `bevy_pbr::utils` with the same constants from `bevy_render::maths`.

d390420093 Implement Auto Exposure plugin (#12792)
# Objective
- Add auto exposure/eye adaptation to the bevy render pipeline.
- Support features that users might expect from other engines:
  - Metering masks
  - Compensation curves
  - Smooth exposure transitions

This PR is based on an implementation I already built for a personal project before https://github.com/bevyengine/bevy/pull/8809 was submitted, so I wasn't able to adopt that PR in the proper way. I've still drawn inspiration from it, so @fintelia should be credited as well.

## Solution
An auto exposure compute shader builds a 64-bin histogram of the scene's luminance, and then adjusts the exposure based on that histogram. Using a histogram allows the system to ignore outliers like shadows and specular highlights, and it allows giving more weight to certain areas based on a mask.

---

## Changelog
- Added: `AutoExposure` plugin that allows adjusting a camera's exposure based on its scene's luminance.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>

31835ff76d Implement visibility ranges, also known as hierarchical levels of detail (HLODs). (#12916)
Implement visibility ranges, also known as hierarchical levels of detail (HLODs).

This commit introduces a new component, `VisibilityRange`, which allows developers to specify camera distances in which meshes are to be shown and hidden. Hiding meshes happens early in the rendering pipeline, so this feature can be used for level of detail optimization. Additionally, this feature is properly evaluated per-view, so different views can show different levels of detail.

This feature differs from proper mesh LODs, which can be implemented later. Engines generally implement true mesh LODs later in the pipeline; they're typically more efficient than HLODs with GPU-driven rendering. However, mesh LODs are more limited than HLODs, because they require the lower levels of detail to be meshes with the same vertex layout and shader (and perhaps the same material) as the original mesh. Games often want to use objects other than meshes to replace distant models, such as *octahedral imposters* or *billboard imposters*.

The reason why the feature is called *hierarchical level of detail* is that HLODs can replace multiple meshes with a single mesh when the camera is far away. This can be useful for reducing drawcall count. Note that `VisibilityRange` doesn't automatically propagate down to children; it must be placed on every mesh.

Crossfading between different levels of detail is supported, using the standard 4x4 ordered dithering pattern from [1]. The shader code to compute the dithering patterns should be well-optimized. The dithering code is only active when visibility ranges are in use for the mesh in question, so that we don't lose early Z.

Cascaded shadow maps show the HLOD level of the view they're associated with. Point light and spot light shadow maps, which have no CSMs, display all HLOD levels that are visible in any view. To support this efficiently and avoid doing visibility checks multiple times, we precalculate all visible HLOD levels for each entity with a `VisibilityRange` during the `check_visibility_range` system.

A new example, `visibility_range`, has been added to the tree, as well as a new low-poly version of the flight helmet model to go with it. It demonstrates use of the visibility range feature to provide levels of detail.

[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map

[^1]: Unreal doesn't have a feature that exactly corresponds to visibility ranges, but Unreal's HLOD system serves roughly the same purpose.

## Changelog

### Added
* A new `VisibilityRange` component is available to conditionally enable entity visibility at camera distances, with optional crossfade support. This can be used to implement different levels of detail (LODs).

## Screenshots
High-poly model:

Low-poly model up close:

Crossfading between the two:

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
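A sketch of a two-level LOD using the fields named in this PR (`start_margin`/`end_margin`; later Bevy versions add more fields, hence the `..default()`):

```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

// Two LODs of the same object: the high-poly mesh fades out across 20-25 m
// while the low-poly stand-in fades in over the same band. The component
// must be placed on each mesh entity; it does not propagate to children.
fn lod_ranges() -> (VisibilityRange, VisibilityRange) {
    let high_poly = VisibilityRange {
        start_margin: 0.0..0.0, // visible from the camera outward
        end_margin: 20.0..25.0, // crossfade band where it dithers away
        ..default()
    };
    let low_poly = VisibilityRange {
        start_margin: 20.0..25.0,
        end_margin: f32::INFINITY..f32::INFINITY,
        ..default()
    };
    (high_poly, low_poly)
}
```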

961b24deaf Implement filmic color grading. (#13121)
This commit expands Bevy's existing tonemapping feature to a complete set of filmic color grading tools, matching those of engines like Unity, Unreal, and Godot. The following features are supported:

* White point adjustment. This is inspired by Unity's implementation of the feature, but simplified and optimized. *Temperature* and *tint* control the adjustments to the *x* and *y* chromaticity values of [CIE 1931]. Following Unity, the adjustments are made relative to the [D65 standard illuminant] in the [LMS color space].
* Hue rotation. This simply converts the RGB value to [HSV], alters the hue, and converts back.
* Color correction. This allows the *gamma*, *gain*, and *lift* values to be adjusted according to the standard [ASC CDL combined function].
* Separate color correction for shadows, midtones, and highlights. Blender's source code was used as a reference for the implementation of this. The midtone ranges can be adjusted by the user. To avoid abrupt color changes, a small crossfade is used between the different sections of the image, again following Blender's formulas.

A new example, `color_grading`, has been added, offering a GUI to change all the color grading settings. It uses the same test scene as the existing `tonemapping` example, which has been factored out into a shared glTF scene.

[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space
[D65 standard illuminant]: https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D
[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space
[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV
[ASC CDL combined function]: https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function

## Changelog

### Added
* Many new filmic color grading options have been added to the `ColorGrading` component.

## Migration Guide
* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set separately for the `shadows`, `midtones`, and `highlights` sections. You can migrate code with the `ColorGrading::all_sections` and `ColorGrading::all_sections_mut` functions, which access and/or update all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now fields of `ColorGrading::global`.

6d6810c90d Meshlet continuous LOD (#12755)
Adds a basic level of detail system to meshlets. An extremely brief summary is as follows:

* In `from_mesh.rs`, once we've built the first level of clusters, we group clusters, simplify the new mega-clusters, and then split the simplified groups back into regular sized clusters. Repeat several times (ideally until you can't anymore). This forms a directed acyclic graph (DAG), where the children are the meshlets from the previous level, and the parents are the more simplified versions of their children. The leaf nodes are meshlets formed from the original mesh.
* In `cull_meshlets.wgsl`, each cluster selects whether to render or not based on the LOD bounding sphere (different than the culling bounding sphere) of the current meshlet, the LOD bounding sphere of its parent (the meshlet group from simplification), and the simplification error relative to its children of both the current meshlet and its parent meshlet. This kind of breaks two pass occlusion culling, which will be fixed in a future PR by using an HZB from the previous frame to get the initial list of occluders.

Many, _many_ improvements to be done in the future (https://github.com/bevyengine/bevy/issues/11518), not least of which is code quality and speed. I don't even expect this to work on many types of input meshes. This is just a basic implementation/draft for collaboration.

Arguable how much we want to do in this PR; I'll leave that up to maintainers. I've erred on the side of "as basic as possible".

References:
* Slides 27-77 (video available on youtube): https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
* https://blog.traverseresearch.nl/creating-a-directed-acyclic-graph-from-a-mesh-1329e57286e5
* https://jglrxavpok.github.io/2024/01/19/recreating-nanite-lod-generation.html, https://jglrxavpok.github.io/2024/03/12/recreating-nanite-faster-lod-generation.html, https://jglrxavpok.github.io/2024/04/02/recreating-nanite-runtime-lod-selection.html, and https://github.com/jglrxavpok/Carrot
* https://github.com/gents83/INOX/tree/master/crates/plugins/binarizer/src
* https://cs418.cs.illinois.edu/website/text/nanite.html

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>

de3ec47f3f Fix example game of life (#12897)
# Objective
- Example `compute_shader_game_of_life` is random and does not follow the rules of the game of life: at each step, it randomly reads some pixels from the current step and some from the previous step, instead of only from the previous step.
- Fixes #9353

## Solution
- Adopted from #9678
- Added a switch of the displayed texture every frame, otherwise the game of life looks wrong
- Added a way to display the texture bigger so that I could manually check everything was right

---------

Co-authored-by: Sludge <96552222+SludgePhD@users.noreply.github.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>

08b41878d7 Add gpu readback example (#12877)
# Objective
- It's pretty common for users to want to read data back from the GPU
and into the main world
## Solution
- Add a simple example that shows how to read data back from the GPU and
send it to the main world using a channel (see the sketch below).
- The example is largely based on this wgpu example, but adapted to Bevy
-
|
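A compressed sketch of the channel pattern (the `crossbeam_channel` crate and every name here are illustrative assumptions, not necessarily the example's code): the render world sends mapped buffer contents, and a main-world system drains the receiver without blocking the frame.

```rust
use crossbeam_channel::{unbounded, Receiver, Sender};

/// Created once at startup: the sender is handed to the render world,
/// the receiver stays in the main world (e.g. stored in a resource).
fn make_readback_channel() -> (Sender<Vec<u8>>, Receiver<Vec<u8>>) {
    unbounded()
}

/// Main-world side: drain whatever readbacks have arrived this frame.
fn receive_readbacks(receiver: &Receiver<Vec<u8>>) {
    while let Ok(bytes) = receiver.try_recv() {
        println!("read {} bytes back from the GPU", bytes.len());
    }
}

fn main() {
    let (sender, receiver) = make_readback_channel();
    // Render-world side (after copying into and mapping a staging buffer):
    sender.send(vec![0u8; 16]).unwrap();
    receive_readbacks(&receiver);
}
```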
||
![]() |
df76fd4a1b
|
Programmed soundtrack example (#12774)
Created a soundtrack example with fade-in and fade-out features, added new assets, and updated the credits. # Objective - Fixes #12651 ## Solution - Created a resource to hold the track list. - The audio assets are then loaded by the asset server and added to the track list. - Once the game is in a specific state, an `AudioBundle` is spawned and plays the appropriate track. - The audio volume starts at zero and is then incremented gradually until it reaches full volume (see the fade sketch below). - Once the game state changes, the current track fades out and a new one fades in at the same time, offering a relatively seamless transition. - Once a track is completely faded out, it is despawned from the app. - Game state changes are simulated through a `Timer` for simplicity. - The track-change system only runs when the `GameState` resource changes. - All tracks are used according to their respective licenses. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
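A sketch of the gradual fade-in described above; the `FadeIn` marker and two-second duration are illustrative, and `Time`/volume method names differ slightly across Bevy versions:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct FadeIn;

const FADE_TIME: f32 = 2.0;

/// Ramp the volume of any fading-in track toward full volume, then
/// remove the marker so the system stops touching the sink.
fn fade_in(
    mut commands: Commands,
    query: Query<(Entity, &AudioSink), With<FadeIn>>,
    time: Res<Time>,
) {
    for (entity, sink) in &query {
        sink.set_volume(sink.volume() + time.delta_seconds() / FADE_TIME);
        if sink.volume() >= 1.0 {
            sink.set_volume(1.0);
            commands.entity(entity).remove::<FadeIn>();
        }
    }
}
```

A matching fade-out system would decrement the volume instead and despawn the entity once it reaches zero.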
||
![]() |
4f20faaa43
|
Meshlet rendering (initial feature) (#10164)
# Objective - Implements a more efficient, GPU-driven (https://github.com/bevyengine/bevy/issues/1342) rendering pipeline based on meshlets. - Meshes are split into small clusters of triangles called meshlets, each of which acts as a mini index buffer into the larger mesh data (see the sketch below). Meshlets can be compressed, streamed, culled, and batched much more efficiently than monolithic meshes.   # Misc * Future work: https://github.com/bevyengine/bevy/issues/11518 * Nanite reference: https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf * Two-pass occlusion culling is explained very well here: https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501 --------- Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com> Co-authored-by: vero <email@atlasdostal.com> Co-authored-by: François <mockersf@gmail.com> Co-authored-by: atlas dostal <rodol@rivalrebels.com> |
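An illustrative sketch of the "mini index buffer" idea mentioned above (field names are assumptions, not Bevy's actual meshlet types):

```rust
/// A meshlet references a small window of its parent mesh's vertices.
struct Meshlet {
    /// Indices into the parent mesh's vertex buffer used by this cluster.
    vertex_ids: Vec<u32>,
    /// Local triangle list: each byte indexes into `vertex_ids`, so a
    /// cluster holds at most 256 unique vertices.
    triangles: Vec<u8>,
}

/// Small, uniformly sized clusters are what make per-cluster culling
/// and batching cheap compared to one monolithic draw per mesh.
fn triangle_count(meshlet: &Meshlet) -> usize {
    meshlet.triangles.len() / 3
}
```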
||
![]() |
dfdf2b9ea4
|
Implement the `AnimationGraph`, allowing multiple animations to be blended together (#11989)
This is an implementation of RFC #51: https://github.com/bevyengine/rfcs/blob/main/rfcs/51-animation-composition.md # Objective Bevy needs animation blending. The RFC for this is [RFC 51]. ## Solution This is an implementation of the RFC. Note that the implementation strategy is different from the one outlined there, because two-phase animation has now landed. This started as a draft to get the conversation going; the following pieces have all since been completed: - [x] A fully-fleshed-out mechanism for transitions - [x] A serialization format for `AnimationGraph`s - [x] Fixing the examples that were broken (all but `animated_fox`) - [x] Documentation --- ## Changelog ### Added * The `AnimationPlayer` has been reworked to support blending multiple animations together through an `AnimationGraph`, and as such will no longer function unless a `Handle<AnimationGraph>` has been added to the entity containing the player. See [RFC 51] for more details. * Transition functionality has moved from the `AnimationPlayer` to a new component, `AnimationTransitions`, which works in tandem with the `AnimationGraph`. ## Migration Guide * `AnimationPlayer`s can no longer play animations by themselves and need to be paired with a `Handle<AnimationGraph>`. Code that was using `AnimationPlayer` to play animations will need to create an `AnimationGraph` asset first, add a node for the clip (or clips) you want to play, and then supply the index of that node to the `AnimationPlayer`'s `play` method (see the sketch below). * The `AnimationPlayer::play_with_transition()` method has been removed and replaced with the `AnimationTransitions` component. If you were previously using `AnimationPlayer::play_with_transition()`, add all animations that you were playing to the `AnimationGraph`, and create an `AnimationTransitions` component to manage the blending between them. [RFC 51]: https://github.com/bevyengine/rfcs/blob/main/rfcs/51-animation-composition.md --------- Co-authored-by: Rob Parrett <robparrett@gmail.com> |
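A migration sketch following the guide above, assuming the API surface this entry describes (`AnimationGraph::new`, `add_clip`, playing node indices); exact module paths and whether the graph handle needs a wrapper component vary by Bevy version:

```rust
use bevy::prelude::*;
use bevy::animation::graph::AnimationGraph; // path assumed

fn setup(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    mut graphs: ResMut<Assets<AnimationGraph>>,
) {
    // Build a graph with a single clip node hanging off the root.
    let mut graph = AnimationGraph::new();
    let clip = asset_server.load("models/animated/Fox.glb#Animation0");
    let node = graph.add_clip(clip, 1.0, graph.root);
    let graph_handle = graphs.add(graph);

    // The player now plays node indices instead of clip handles, and the
    // entity must also carry the graph handle.
    let mut player = AnimationPlayer::default();
    player.play(node).repeat();

    commands.spawn((player, graph_handle));
}
```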
||
![]() |
fc202f2e3d
|
Slicing support for texture atlas (#12059)
# Objective Follow-up to #11600 and #10588. https://github.com/bevyengine/bevy/issues/11944 made clear that some people want to use slicing with texture atlases. ## Changelog * Added support for `TextureAtlas` slicing and tiling. `SpriteSheetBundle` and `AtlasImageBundle` can now use `ImageScaleMode` * Added a new `ui_texture_atlas_slice` example using a texture sheet <img width="798" alt="Screenshot 2024-02-23 at 11 58 35" src="https://github.com/bevyengine/bevy/assets/26703856/47a8b764-127c-4a06-893f-181703777501"> --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Pablo Reinhardt <126117294+pablo-lua@users.noreply.github.com> |
||
![]() |
d261a86b9f
|
compute shader game of life example: use R32Float instead of Rgba8Unorm (#12155)
# Objective - Fixes #9670 - Avoid a crash in CI due to ``` thread 'Compute Task Pool (0)' panicked at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.19.1/src/backend/wgpu_core.rs:3009:5: wgpu error: Validation Error Caused by: In Device::create_bind_group The adapter does not support read access for storages texture of format Rgba8Unorm ``` ## Solution - Use an `R32Float` texture instead of an `Rgba8Unorm` one, as it's a tier 1 texture format (https://github.com/gpuweb/gpuweb/issues/3838) and is more widely supported (see the sketch below) - This should also improve support for WebGPU in the next wgpu version |
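A sketch of creating a storage-compatible `R32Float` image on the Bevy side; the `Image::new_fill` signature and the `RenderAssetUsages` path are assumed from around this release and shift between versions:

```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages; // path varies by version
use bevy::render::render_resource::{Extent3d, TextureDimension, TextureFormat, TextureUsages};

fn create_storage_image(images: &mut Assets<Image>) -> Handle<Image> {
    let mut image = Image::new_fill(
        Extent3d { width: 256, height: 256, depth_or_array_layers: 1 },
        TextureDimension::D2,
        &0f32.to_le_bytes(), // one R32Float texel: a single little-endian f32
        TextureFormat::R32Float,
        RenderAssetUsages::RENDER_WORLD,
    );
    // STORAGE_BINDING is what lets the compute shader read/write the texture.
    image.texture_descriptor.usage =
        TextureUsages::COPY_DST | TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING;
    images.add(image)
}
```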
||
![]() |
ab16f5ed6a
|
UI Texture 9 slice (#11600)
> Follow-up to #10588 > Closes #11749 (Supersedes #11756) Enable texture slicing for the following UI nodes: - `ImageBundle` - `ButtonBundle` <img width="739" alt="Screenshot 2024-01-29 at 13 57 43" src="https://github.com/bevyengine/bevy/assets/26703856/37675681-74eb-4689-ab42-024310cf3134"> I also added a collection of `fantasy-ui-borders` from [Kenney's](www.kenney.nl) assets, with the appropriate license (CC). If it's a problem I can use the same textures as the `sprite_slice` example # Work done Added the `ImageScaleMode` component to the targeted bundles; most of the logic is directly reused from `bevy_sprite`. The only additional internal component is the UI-specific `ComputedSlices`, which does the same thing as its sprite equivalent but adapted to UI code. Again, the slicing is not compatible with `TextureAtlas`; it's something I need to tackle more deeply in the future # Fixes * [x] I noticed that `TextureSlicer::compute_slices` could loop infinitely if the border was larger than the image half extents; now an error is triggered and the texture falls back to being stretched (see the sketch below) * [x] I noticed that when using small textures with very small *tiling* options we could generate hundreds of thousands of slices. Now I set a minimum size of 1 pixel per slice, which is already ridiculously small, and a warning is emitted at runtime when the slice count goes above 1,000 * [x] Sprite slicing with `flip_x` or `flip_y` would give incorrect results; correct flipping is now supported for both sprites and UI image nodes thanks to @odecay's observation # GPU Alternative I created a separate branch attempting to implement 9-slicing and tiling directly through the `ui.wgsl` fragment shader. It works but requires sending more data to the GPU: - slice border - tiling factors And more importantly, the actual quad *scale*, which is hard to put in the shader with the current code, so that would be for a later iteration |
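A minimal sketch of the first fix in the list above, under invented names: when the requested border cannot fit in the image, fall back to stretching instead of looping forever on zero-size center slices.

```rust
#[derive(Debug, PartialEq)]
enum ScaleMode {
    Sliced { border: f32 },
    Stretched,
}

/// Fall back to stretching when the border exceeds the image's half
/// extents, since slicing would otherwise produce zero- or negative-size
/// center slices (the infinite-loop case this PR fixes).
fn effective_mode(mode: ScaleMode, width: f32, height: f32) -> ScaleMode {
    match mode {
        ScaleMode::Sliced { border } if border > width / 2.0 || border > height / 2.0 => {
            eprintln!("warning: slice border {border} larger than half extents; stretching");
            ScaleMode::Stretched
        }
        other => other,
    }
}

fn main() {
    // A 40-pixel border cannot fit in a 64x64 image (half extent is 32).
    let mode = effective_mode(ScaleMode::Sliced { border: 40.0 }, 64.0, 64.0);
    assert_eq!(mode, ScaleMode::Stretched);
}
```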
||
![]() |
4c15dd0fc5
|
Implement irradiance volumes. (#10268)
# Objective Bevy could benefit from *irradiance volumes*, also known as *voxel global illumination* or simply as light probes (though this term is not preferred, as multiple techniques can be called light probes). Irradiance volumes are a form of baked global illumination; they work by sampling the light at the center of each voxel within a cuboid. At runtime, the voxels surrounding the fragment center are sampled and interpolated to produce indirect diffuse illumination. ## Solution This is divided into two sections. The first is copied and pasted from the irradiance volume module documentation and describes the technique. The second part consists of notes on the implementation. ### Overview An *irradiance volume* is a cuboid voxel region consisting of regularly-spaced precomputed samples of diffuse indirect light. They're ideal if you have a dynamic object such as a character that can move about static geometry such as a level in a game, and you want that dynamic object to be affected by the light bouncing off that static geometry. To use irradiance volumes, you need to precompute, or *bake*, the indirect light in your scene. Bevy doesn't currently come with a way to do this. Fortunately, [Blender] provides a [baking tool] as part of the Eevee renderer, and its irradiance volumes are compatible with those used by Bevy. The [`bevy-baked-gi`] project provides a tool, `export-blender-gi`, that can extract the baked irradiance volumes from the Blender `.blend` file and package them up into a `.ktx2` texture for use by the engine. See the documentation in the `bevy-baked-gi` project for more details on this workflow. Like all light probes in Bevy, irradiance volumes are 1×1×1 cubes that can be arbitrarily scaled, rotated, and positioned in a scene with the [`bevy_transform::components::Transform`] component. The 3D voxel grid will be stretched to fill the interior of the cube, and the illumination from the irradiance volume will apply to all fragments within that bounding region. Bevy's irradiance volumes are based on Valve's [*ambient cubes*] as used in *Half-Life 2* ([Mitchell 2006], slide 27). These encode a single color of light from the six 3D cardinal directions and blend the sides together according to the surface normal. The primary reason for choosing ambient cubes is to match Blender, so that its Eevee renderer can be used for baking. However, they also have some advantages over the common second-order spherical harmonics approach: ambient cubes don't suffer from ringing artifacts, they are smaller (6 colors for ambient cubes as opposed to 9 for spherical harmonics), and evaluation is faster. A smaller basis allows for a denser grid of voxels with the same storage requirements. If you wish to use a tool other than `export-blender-gi` to produce the irradiance volumes, you'll need to pack the irradiance volumes in the following format. The irradiance volume of resolution *(Rx, Ry, Rz)* is expected to be a 3D texture of dimensions *(Rx, 2Ry, 3Rz)*. 
The unnormalized texture coordinate *(s, t, p)* of the voxel at coordinate *(x, y, z)* with side *S* ∈ *{-X, +X, -Y, +Y, -Z, +Z}* is as follows (a code version of this mapping follows this entry):
```text
s = x

t = y + ⎰ 0  if S ∈ {-X, -Y, -Z}
        ⎱ Ry if S ∈ {+X, +Y, +Z}

        ⎧ 0   if S ∈ {-X, +X}
p = z + ⎨ Rz  if S ∈ {-Y, +Y}
        ⎩ 2Rz if S ∈ {-Z, +Z}
```
Visually, in a left-handed coordinate system with Y up, viewed from the right, the 3D texture looks like a stacked series of voxel grids, one for each cube side, in this order:

| **+X** | **+Y** | **+Z** |
| ------ | ------ | ------ |
| **-X** | **-Y** | **-Z** |

A terminology note: Other engines may refer to irradiance volumes as *voxel global illumination*, *VXGI*, or simply as *light probes*. Sometimes *light probe* refers to what Bevy calls a reflection probe. In Bevy, *light probe* is a generic term that encompasses all cuboid bounding regions that capture indirect illumination, whether based on voxels or not. Note that, if binding arrays aren't supported (e.g. on WebGPU or WebGL 2), then only the closest irradiance volume to the view will be taken into account during rendering. [*ambient cubes*]: https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf [Mitchell 2006]: https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf [Blender]: http://blender.org/ [baking tool]: https://docs.blender.org/manual/en/latest/render/eevee/render_settings/indirect_lighting.html [`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi ### Implementation notes This patch generalizes light probes so as to reuse as much code as possible between irradiance volumes and the existing reflection probes. This approach was chosen because both techniques share numerous similarities: 1. Both irradiance volumes and reflection probes are cuboid bounding regions. 2. Both are responsible for providing baked indirect light. 3. Both techniques involve presenting a variable number of textures to the shader from which indirect light is sampled. (In the current implementation, this uses binding arrays.) 4. Both irradiance volumes and reflection probes require gathering and sorting probes by distance on the CPU. 5. Both techniques require the GPU to search through a list of bounding regions. 6. Both will eventually want to have falloff so that we can smoothly blend as objects enter and exit the probes' influence ranges. (This is not implemented yet to keep this patch relatively small and reviewable.) To do this, we generalize most of the methods in the reflection probes patch #11366 to be generic over a trait, `LightProbeComponent`. This trait is implemented by both `EnvironmentMapLight` (for reflection probes) and `IrradianceVolume` (for irradiance volumes). Using a trait will allow us to add more types of light probes in the future. In particular, I highly suspect we will want real-time reflection planes for mirrors in the future, which can be easily slotted into this framework. ## Changelog ### Added * A new `IrradianceVolume` asset type is available for baked voxelized light probes. You can bake the global illumination using Blender or another tool of your choice and use it in Bevy to apply indirect illumination to dynamic objects. |
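The packing formula above translates directly into code; a sketch (illustrative, not Bevy's actual sampling code):

```rust
#[derive(Clone, Copy)]
enum Side {
    NegX, PosX, NegY, PosY, NegZ, PosZ,
}

/// Unnormalized texel coordinate (s, t, p) of voxel (x, y, z) for cube
/// side `side` in a volume of resolution (Rx, Ry, Rz), following the
/// (Rx, 2Ry, 3Rz) layout described above.
fn texel_coord(x: u32, y: u32, z: u32, side: Side, ry: u32, rz: u32) -> (u32, u32, u32) {
    let s = x;
    let t = y + match side {
        Side::NegX | Side::NegY | Side::NegZ => 0,
        Side::PosX | Side::PosY | Side::PosZ => ry,
    };
    let p = z + match side {
        Side::NegX | Side::PosX => 0,
        Side::NegY | Side::PosY => rz,
        Side::NegZ | Side::PosZ => 2 * rz,
    };
    (s, t, p)
}

fn main() {
    // Voxel (2, 3, 1), side +Y, in an 8x8x8 volume: t gains Ry, p gains Rz.
    assert_eq!(texel_coord(2, 3, 1, Side::PosY, 8, 8), (2, 11, 9));
}
```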
||
![]() |
91c467ebfc
|
Gate diffuse and specular transmission behind shader defs (#11627)
# Objective - Address #10338 ## Solution - When implementing specular and diffuse transmission, I inadvertently introduced a performance regression. On high-end hardware it is barely noticeable, but **for lower-end hardware it can be pretty brutal**. If I understand it correctly, this is likely due to the GPU's use of masking to implement control flow, which means that you still pay the price for the branches you don't take; - To avoid that, this PR introduces new shader defs (controlled via `StandardMaterialKey`) that conditionally include the transmission logic; that way, the shader code for both types of transmission isn't even sent to the GPU if you're not using them; - This PR also renames ~~`STANDARDMATERIAL_NORMAL_MAP`~~ to `STANDARD_MATERIAL_NORMAL_MAP` for consistency with the naming convention used elsewhere in the codebase. (Drive-by fix) --- ## Changelog - Added new shader defs, set when using transmission in the `StandardMaterial`: - `STANDARD_MATERIAL_SPECULAR_TRANSMISSION`; - `STANDARD_MATERIAL_DIFFUSE_TRANSMISSION`; - `STANDARD_MATERIAL_SPECULAR_OR_DIFFUSE_TRANSMISSION`. - Fixed the performance regression caused by the introduction of transmission, by gating transmission shader logic behind the newly introduced shader defs (see the sketch below); - Renamed ~~`STANDARDMATERIAL_NORMAL_MAP`~~ to `STANDARD_MATERIAL_NORMAL_MAP` for consistency; ## Migration Guide - If you were using `#ifdef STANDARDMATERIAL_NORMAL_MAP` in your shader code, make sure to update the name to `STANDARD_MATERIAL_NORMAL_MAP` (with an underscore between `STANDARD` and `MATERIAL`) |
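A sketch of the gating mechanism under stand-in names (Bevy's actual `StandardMaterialKey` specialization differs in detail): only the defs a material actually needs are pushed, so untaken transmission branches never reach the GPU.

```rust
/// Illustrative stand-in for the relevant material-key bits.
struct MaterialKey {
    specular_transmission: bool,
    diffuse_transmission: bool,
}

fn transmission_shader_defs(key: &MaterialKey) -> Vec<String> {
    let mut defs = Vec::new();
    if key.specular_transmission {
        defs.push("STANDARD_MATERIAL_SPECULAR_TRANSMISSION".to_owned());
    }
    if key.diffuse_transmission {
        defs.push("STANDARD_MATERIAL_DIFFUSE_TRANSMISSION".to_owned());
    }
    // The "either" def guards code shared by both transmission paths.
    if !defs.is_empty() {
        defs.push("STANDARD_MATERIAL_SPECULAR_OR_DIFFUSE_TRANSMISSION".to_owned());
    }
    defs
}
```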
||
![]() |
afa7b5cba5
|
Added Support for Extension-less Assets (#10153)
# Objective - Addresses **Support processing and loading files without extensions** from #9714 - Addresses **More runtime loading configuration** from #9714 - Fixes #367 - Fixes #10703 ## Solution `AssetServer::load::<A>` and `AssetServer::load_with_settings::<A>` can now use the `Asset` type parameter `A` to select a registered `AssetLoader` without inspecting the provided `AssetPath`. This change cascades onto `LoadContext::load` and `LoadContext::load_with_settings`. This allows the loading of assets which have incorrect or ambiguous file extensions. ```rust // Allow the type to be inferred by context let handle = asset_server.load("data/asset_no_extension"); // Hint the type through the handle let handle: Handle<CustomAsset> = asset_server.load("data/asset_no_extension"); // Explicit through turbofish let handle = asset_server.load::<CustomAsset>("data/asset_no_extension"); ``` Since a single `AssetPath` no longer maps 1:1 with an `Asset`, I've also modified how assets are loaded to permit multiple asset types to be loaded from a single path. This allows for two different `AssetLoaders` (which return different types of assets) to both load a single path (if requested). ```rust // Uses GltfLoader let model = asset_server.load::<Gltf>("cube.gltf"); // Hypothetical Blob loader for data transmission (for example) let blob = asset_server.load::<Blob>("cube.gltf"); ``` As these changes are reflected in the `LoadContext` as well as the `AssetServer`, custom `AssetLoaders` can also take advantage of this behaviour to create more complex assets. --- ## Change Log - Updated the `custom_asset` example to demonstrate extension-less assets. - Added `AssetServer::get_handles_untyped` and `AssetServer::get_path_ids` ## Notes As part of that refactor, I chose to store `AssetLoader`s (within `AssetLoaders`) using a `HashMap<TypeId, ...>` instead of a `Vec<...>`. My reasoning was that I needed to add a relationship between `Asset` `TypeId`s and the `AssetLoader`, so instead of having a `Vec` and a `HashMap`, I combined the two, removing the `usize` index from the adjacent maps. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
||
![]() |
1e7e6c93e6
|
Fix scene example (#11289)
Since #9907, the generation starts at `1` instead of `0`, so `Entity::to_bits` now returns `4294967296` (i.e. `u32::MAX + 1`) as the lowest number instead of `0`. Without this change, scene loading fails with this error message: `ERROR bevy_asset::server: Failed to load asset 'scenes/load_scene_example.scn.ron' with asset loader 'bevy_scene::scene_loader::SceneLoader': Could not parse RON: 8:6: Invalid generation bits` |
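The arithmetic, spelled out (assuming the packed `u64` keeps the index in the low 32 bits and the generation in the high 32 bits):

```rust
fn main() {
    // Index 0 with generation 1: the generation occupies the high half.
    let bits: u64 = (1u64 << 32) | 0;
    assert_eq!(bits, 4_294_967_296);
    assert_eq!(bits, u32::MAX as u64 + 1);
    println!("lowest Entity::to_bits value is now {bits}");
}
```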
||
![]() |
83d6600267
|
Implement minimal reflection probes (fixed macOS, iOS, and Android). (#11366)
This pull request re-submits #10057, which was backed out for breaking macOS, iOS, and Android. I've tested this version on macOS and Android and on the iOS simulator. # Objective This pull request implements *reflection probes*, which generalize environment maps to allow for multiple environment maps in the same scene, each of which has an axis-aligned bounding box. This is a standard feature of physically-based renderers and was inspired by [the corresponding feature in Blender's Eevee renderer]. ## Solution This is a minimal implementation of reflection probes that allows artists to define cuboid bounding regions associated with environment maps. For every view, on every frame, a system builds up a list of the nearest 4 reflection probes that are within the view's frustum and supplies that list to the shader. The PBR fragment shader searches through the list, finds the first containing reflection probe, and uses it for indirect lighting, falling back to the view's environment map if none is found. Both forward and deferred renderers are fully supported. A reflection probe is an entity with a pair of components, *LightProbe* and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to position it in the world). The *LightProbe* component (along with the *Transform*) defines the bounding region, while the *EnvironmentMapLight* component specifies the associated diffuse and specular cubemaps. A frequent question is "why two components instead of just one?" The advantages of this setup are: 1. It's readily extensible to other types of light probes, in particular *irradiance volumes* (also known as ambient cubes or voxel global illumination), which use the same approach of bounding cuboids. With a single component that applies to both reflection probes and irradiance volumes, we can share the logic that implements falloff and blending between multiple light probes between both of those features. 2. It reduces duplication between the existing *EnvironmentMapLight* and these new reflection probes. Systems can treat environment maps attached to cameras the same way they treat environment maps applied to reflection probes if they wish. Internally, we gather up all environment maps in the scene and place them in a cubemap array. At present, this means that all environment maps must have the same size, mipmap count, and texture format. A warning is emitted if this restriction is violated. We could potentially relax this in the future as part of the automatic mipmap generation work, which could easily do texture format conversion as part of its preprocessing. An easy way to generate reflection probe cubemaps is to bake them in Blender and use the `export-blender-gi` tool that's part of the [`bevy-baked-gi`] project. This tool takes a `.blend` file containing baked cubemaps as input and exports cubemap images, pre-filtered with an embedded fork of the [glTF IBL Sampler], alongside a corresponding `.scn.ron` file that the scene spawner can use to recreate the reflection probes. Note that this is intentionally a minimal implementation, to aid reviewability. Known issues are: * Reflection probes are basically unsupported on WebGL 2, because WebGL 2 has no cubemap arrays. (Strictly speaking, you can have precisely one reflection probe in the scene if you have no other cubemaps anywhere, but this isn't very useful.) * Reflection probes have no falloff, so reflections will abruptly change when objects move from one bounding region to another. 
* As mentioned before, all cubemaps in the world of a given type (diffuse or specular) must have the same size, format, and mipmap count. Future work includes: * Blending between multiple reflection probes. * A falloff/fade-out region so that reflected objects disappear gradually instead of vanishing all at once. * Irradiance volumes for voxel-based global illumination. This should reuse much of the reflection probe logic, as they're both GI techniques based on cuboid bounding regions. * Support for WebGL 2, by breaking batches when reflection probes are used. These issues notwithstanding, I think it's best to land this with roughly the current set of functionality, because this patch is useful as is and adding everything above would make the pull request significantly larger and harder to review. --- ## Changelog ### Added * A new *LightProbe* component is available that specifies a bounding region that an *EnvironmentMapLight* applies to. The combination of a *LightProbe* and an *EnvironmentMapLight* offers *reflection probe* functionality similar to that available in other engines. [the corresponding feature in Blender's Eevee renderer]: https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html [`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi [glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler |
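A sketch of spawning a reflection probe as described above; the asset paths are placeholders and `EnvironmentMapLight`'s exact field set varies by Bevy version:

```rust
use bevy::pbr::{EnvironmentMapLight, LightProbe};
use bevy::prelude::*;

fn spawn_reflection_probe(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        // LightProbe plus the Transform define the cuboid bounding region.
        LightProbe,
        SpatialBundle {
            transform: Transform::from_xyz(0.0, 1.0, 0.0).with_scale(Vec3::splat(4.0)),
            ..default()
        },
        // The diffuse/specular cubemaps sampled inside that region.
        EnvironmentMapLight {
            diffuse_map: asset_server.load("probes/diffuse.ktx2"),
            specular_map: asset_server.load("probes/specular.ktx2"),
            intensity: 900.0,
        },
    ));
}
```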
||
![]() |
01139b3472
|
Sprite slicing and tiling (#10588)
> Replaces #5213 # Objective Implement sprite tiling and [9-slice scaling](https://en.wikipedia.org/wiki/9-slice_scaling) for `bevy_sprite`, allowing slice scaling and texture tiling. Basic scaling vs 9-slice scaling:  Slicing example: <img width="481" alt="Screenshot 2022-07-05 at 15 05 49" src="https://user-images.githubusercontent.com/26703856/177336112-9e961af0-c0af-4197-aec9-430c1170a79d.png"> Tiling example: <img width="1329" alt="Screenshot 2023-11-16 at 13 53 32" src="https://github.com/bevyengine/bevy/assets/26703856/14db39b7-d9e0-4bc3-ba0e-b1f2db39ae8f"> # Solution - `SpriteBundle` now has a `scale_mode` component storing a `SpriteScaleMode` enum with three variants: - `Stretched` (default) - `Tiled` to have sprites tile horizontally and/or vertically - `Sliced`, allowing 9-slicing the texture and optionally tiling some sections with a `TextureSlicer` (see the sketch below) - `bevy_sprite` has two extra systems to compute `ComputedTextureSlices` when necessary: - One system reacts to changes on `Sprite`, `Handle<Image>` or `SpriteScaleMode` - The other listens to `AssetEvent<Image>` to compute slices on sprites when the texture is ready or changed - I updated the `bevy_sprite` extraction stage to extract potentially multiple textures instead of one, depending on the presence of `ComputedTextureSlices` - I added two examples showcasing the slicing and tiling features. The addition of `ComputedTextureSlices` as a cache is to avoid querying the image data, to retrieve its dimensions, every frame in an extract or prepare stage. Also, it reacts to changes so we can have stuff like this (tiling example): https://github.com/bevyengine/bevy/assets/26703856/a349a9f3-33c3-471f-8ef4-a0e5dfce3b01 # Related - [ ] Once #5103 or #10099 is merged I can enable tiling and slicing for texture sheets as UI # To discuss There is another option: considering slicing/tiling as part of the asset, using the new asset preprocessing, but I have no clue how to do it. Also, instead of retrieving the Image dimensions, we could use the same system as the sprite sheet and have the user give the image dimensions directly (grid). But I think it's less user-friendly --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: ickshonpe <david.curthoys@googlemail.com> Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com> |
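A sketch of configuring slicing with the `ImageScaleMode`/`TextureSlicer` API named in the follow-up entries above (field and variant names assumed from around this release):

```rust
use bevy::prelude::*;
use bevy::sprite::{BorderRect, ImageScaleMode, SliceScaleMode, TextureSlicer};

fn spawn_sliced_sprite(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        SpriteBundle {
            texture: asset_server.load("textures/panel.png"),
            sprite: Sprite {
                custom_size: Some(Vec2::new(300.0, 100.0)),
                ..default()
            },
            ..default()
        },
        // 9-slice: corners keep their size, edges tile, the center stretches.
        ImageScaleMode::Sliced(TextureSlicer {
            border: BorderRect::square(16.0),
            center_scale_mode: SliceScaleMode::Stretch,
            sides_scale_mode: SliceScaleMode::Tile { stretch_value: 1.0 },
            max_corner_scale: 1.0,
        }),
    ));
}
```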
||
![]() |
3d996639a0
|
Revert "Implement minimal reflection probes. (#10057)" (#11307)
# Objective - Get main working again on macOS, iOS, and Android - Fixes #11281 - Fixes #11282 - Fixes #11283 - Fixes #11299 ## Solution - Revert #10057 |
||
![]() |
54a943d232
|
Implement minimal reflection probes. (#10057)
# Objective This pull request implements *reflection probes*, which generalize environment maps to allow for multiple environment maps in the same scene, each of which has an axis-aligned bounding box. This is a standard feature of physically-based renderers and was inspired by [the corresponding feature in Blender's Eevee renderer]. ## Solution This is a minimal implementation of reflection probes that allows artists to define cuboid bounding regions associated with environment maps. For every view, on every frame, a system builds up a list of the nearest 4 reflection probes that are within the view's frustum and supplies that list to the shader. The PBR fragment shader searches through the list, finds the first containing reflection probe, and uses it for indirect lighting, falling back to the view's environment map if none is found. Both forward and deferred renderers are fully supported. A reflection probe is an entity with a pair of components, *LightProbe* and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to position it in the world). The *LightProbe* component (along with the *Transform*) defines the bounding region, while the *EnvironmentMapLight* component specifies the associated diffuse and specular cubemaps. A frequent question is "why two components instead of just one?" The advantages of this setup are: 1. It's readily extensible to other types of light probes, in particular *irradiance volumes* (also known as ambient cubes or voxel global illumination), which use the same approach of bounding cuboids. With a single component that applies to both reflection probes and irradiance volumes, we can share the logic that implements falloff and blending between multiple light probes between both of those features. 2. It reduces duplication between the existing *EnvironmentMapLight* and these new reflection probes. Systems can treat environment maps attached to cameras the same way they treat environment maps applied to reflection probes if they wish. Internally, we gather up all environment maps in the scene and place them in a cubemap array. At present, this means that all environment maps must have the same size, mipmap count, and texture format. A warning is emitted if this restriction is violated. We could potentially relax this in the future as part of the automatic mipmap generation work, which could easily do texture format conversion as part of its preprocessing. An easy way to generate reflection probe cubemaps is to bake them in Blender and use the `export-blender-gi` tool that's part of the [`bevy-baked-gi`] project. This tool takes a `.blend` file containing baked cubemaps as input and exports cubemap images, pre-filtered with an embedded fork of the [glTF IBL Sampler], alongside a corresponding `.scn.ron` file that the scene spawner can use to recreate the reflection probes. Note that this is intentionally a minimal implementation, to aid reviewability. Known issues are: * Reflection probes are basically unsupported on WebGL 2, because WebGL 2 has no cubemap arrays. (Strictly speaking, you can have precisely one reflection probe in the scene if you have no other cubemaps anywhere, but this isn't very useful.) * Reflection probes have no falloff, so reflections will abruptly change when objects move from one bounding region to another. * As mentioned before, all cubemaps in the world of a given type (diffuse or specular) must have the same size, format, and mipmap count. Future work includes: * Blending between multiple reflection probes. 
* A falloff/fade-out region so that reflected objects disappear gradually instead of vanishing all at once. * Irradiance volumes for voxel-based global illumination. This should reuse much of the reflection probe logic, as they're both GI techniques based on cuboid bounding regions. * Support for WebGL 2, by breaking batches when reflection probes are used. These issues notwithstanding, I think it's best to land this with roughly the current set of functionality, because this patch is useful as is and adding everything above would make the pull request significantly larger and harder to review. --- ## Changelog ### Added * A new *LightProbe* component is available that specifies a bounding region that an *EnvironmentMapLight* applies to. The combination of a *LightProbe* and an *EnvironmentMapLight* offers *reflection probe* functionality similar to that available in other engines. [the corresponding feature in Blender's Eevee renderer]: https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html [`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi [glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler |
||
![]() |
dd14f3a477
|
Implement lightmaps. (#10231)
 # Objective Lightmaps, textures that store baked global illumination, have been a mainstay of real-time graphics for decades. Bevy currently has no support for them, so this pull request implements them. ## Solution The new `Lightmap` component can be attached to any entity that contains a `Handle<Mesh>` and a `StandardMaterial`. When present, it will be applied in the PBR shader. Because multiple lightmaps are frequently packed into atlases, each lightmap may have its own UV boundaries within its texture. An `exposure` field is also provided, to control the brightness of the lightmap. Note that this PR doesn't provide any way to bake the lightmaps. That can be done with [The Lightmapper] or another solution, such as Unity's Bakery. --- ## Changelog ### Added * A new component, `Lightmap`, is available, for baked global illumination. If your mesh has a second UV channel (UV1), and you attach this component to the entity with that mesh, Bevy will apply the texture referenced in the lightmap. [The Lightmapper]: https://github.com/Naxela/The_Lightmapper --------- Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
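A sketch of attaching the component described above to an already-spawned mesh entity; the `uv_rect` shown is a placeholder selecting one quadrant of a packed atlas, and the exact field set may differ by Bevy version:

```rust
use bevy::pbr::Lightmap;
use bevy::prelude::*;

/// Attach baked global illumination to an entity that already has a
/// `Handle<Mesh>` and a `StandardMaterial`.
fn attach_lightmap(commands: &mut Commands, entity: Entity, image: Handle<Image>) {
    commands.entity(entity).insert(Lightmap {
        image,
        // This lightmap's region within a packed atlas (use the full
        // 0..1 rect when the lightmap isn't atlased).
        uv_rect: Rect::new(0.0, 0.0, 0.5, 0.5),
    });
}
```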