11 Commits
bf3692a011
Introduce support for mixed lighting by allowing lights to opt out of contributing diffuse light to lightmapped objects. (#16761)
This PR adds support for *mixed lighting* to Bevy, whereby some parts of the scene are lightmapped, while others take part in real-time lighting. (Here *real-time lighting* means lighting at runtime via the PBR shader, as opposed to precomputed light using lightmaps.) It does so by adding a new field, `affects_lightmapped_meshes`, to `IrradianceVolume` and `AmbientLight`, and a corresponding field, `affects_lightmapped_mesh_diffuse`, to `DirectionalLight`, `PointLight`, `SpotLight`, and `EnvironmentMapLight`. By default, this value is set to true; when set to false, the light contributes nothing to the diffuse irradiance component of meshes with lightmaps. Note that specular light is unaffected. This is because the correct way to bake specular lighting is with *directional lightmaps*, which we have no support for yet.

There are two general ways I expect this field to be used:

1. When diffuse indirect light is baked into lightmaps, irradiance volumes and reflection probes shouldn't contribute any diffuse light to the static geometry that has a lightmap. That's because the baking tool should have already accounted for it, and in a higher-quality fashion, as lightmaps typically offer a higher effective texture resolution than the light probe does.
2. When direct diffuse light is baked into a lightmap, punctual lights shouldn't contribute any diffuse light to static geometry with a lightmap, to avoid double-counting. It may seem odd to bake *direct* light into a lightmap, as opposed to indirect light. But there is a use case: in a scene with many lights, avoiding light leaks requires shadow mapping, which quickly becomes prohibitive when many lights are involved. Baking lightmaps allows light leaks to be eliminated on static geometry.

A new example, `mixed_lighting`, has been added. It demonstrates a sofa (model from the [glTF Sample Assets]) that has been lightmapped offline using [Bakery]. It has four modes:

1. In *baked* mode, all objects are locked in place, and all the diffuse direct and indirect light has been calculated ahead of time. Note that the bottom of the sphere has a red tint from the sofa, illustrating that the baking tool captured indirect light for it.
2. In *mixed direct* mode, lightmaps capturing diffuse direct and indirect light have been pre-calculated for the static objects, but the dynamic sphere has real-time lighting. Note that, because the diffuse lighting has been entirely pre-calculated for the scenery, the dynamic sphere casts no shadow. In a real app, you would typically use real-time lighting for the most important light so that dynamic objects can shadow the scenery, and relegate baked lighting to the less important lights for which shadows aren't as important. Also note that there is no red tint on the sphere, because there is no global illumination applied to it. In an actual game, you could fix this problem by supplementing the lightmapped objects with an irradiance volume.
3. In *mixed indirect* mode, all direct light is calculated in real-time, and the static objects have pre-calculated indirect lighting. This corresponds to the mode that most applications are expected to use. Because direct light on the scenery is computed dynamically, shadows are fully supported. As in mixed direct mode, there is no global illumination on the sphere; in a real application, irradiance volumes could be used to supplement the lightmaps.
4. In *real-time* mode, no lightmaps are used at all, and all punctual lights are rendered in real-time. No global illumination exists.

In the example, you can click around to move the sphere, unless you're in baked mode, in which case the sphere must be locked in place to be lit correctly.

## Showcase

Screenshots (omitted here) show the four modes: baked, mixed direct, mixed indirect (the default), and real-time.

## Migration guide

* The `AmbientLight` resource and the `IrradianceVolume` component now have an `affects_lightmapped_meshes` field. If you don't need to use that field (for example, if you aren't using lightmaps), you can leave it at its default value of true.
* `DirectionalLight`, `PointLight`, `SpotLight`, and `EnvironmentMapLight` now have an `affects_lightmapped_mesh_diffuse` field. If you don't need to use that field (for example, if you aren't using lightmaps), you can leave it at its default value of true.

[glTF Sample Assets]: https://github.com/KhronosGroup/glTF-Sample-Assets/tree/main
[Bakery]: https://geom.io/bakery/wiki/index.php?title=Bakery_-_GPU_Lightmapper
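As a usage sketch (not part of the PR text; the field names come from the description above, while the spawn style and other details are assumed from Bevy examples of this era): a real-time directional light that keeps its specular contribution and shadows on lightmapped meshes but leaves their diffuse term to the lightmap, plus an ambient light that skips lightmapped meshes entirely.

```rust
use bevy::prelude::*;

fn setup_mixed_lighting(mut commands: Commands) {
    // Punctual light: still contributes specular light (and shadows) to
    // lightmapped meshes, but not diffuse light, which the lightmap already
    // contains.
    commands.spawn((
        DirectionalLight {
            shadows_enabled: true,
            affects_lightmapped_mesh_diffuse: false,
            ..default()
        },
        Transform::from_rotation(Quat::from_rotation_x(-0.8)),
    ));

    // Ambient light is a resource; opt it out of lightmapped meshes as well.
    commands.insert_resource(AmbientLight {
        affects_lightmapped_meshes: false,
        ..default()
    });
}
```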
---
1666b1c497
Use Frostbite's specular sampling direction for environment map light (#16711)
Adopt a slightly more accurate lighting model. (Before/after screenshots omitted.)

## Changelog

- `EnvironmentMapLight` lighting is now slightly more realistic for metallic materials with high roughness.
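This presumably refers to the *dominant specular direction* from the Frostbite PBR course notes (Lagarde & de Rousiers, *Moving Frostbite to Physically Based Rendering*). A CPU-side sketch of that formula using Bevy's `glam` math types (the actual change lives in the WGSL environment-map shader):

```rust
use bevy::math::Vec3;

/// Bend the mirror-reflection vector `r` toward the surface normal `n` as
/// roughness increases, per the Frostbite course notes. Both direction
/// inputs are assumed to be normalized.
fn specular_dominant_dir(n: Vec3, r: Vec3, roughness: f32) -> Vec3 {
    let smoothness = (1.0 - roughness).clamp(0.0, 1.0);
    let lerp_factor = smoothness * (smoothness.sqrt() + roughness);
    n.lerp(r, lerp_factor).normalize()
}
```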
---
b7bcd313ca
Cluster light probes using conservative spherical bounds. (#13746)
This commit allows the Bevy renderer to use the clustering infrastructure for light probes (reflection probes and irradiance volumes) on platforms where at least 3 storage buffers are available. On such platforms (the vast majority), we stop performing brute-force searches of light probes for each fragment and instead only search the light probes with bounding spheres that intersect the current cluster. This should dramatically improve scalability of irradiance volumes and reflection probes.

The primary platform that doesn't support 3 storage buffers is WebGL 2, and we continue using a brute-force search of light probes on that platform, as the UBO that stores per-cluster indices is too small to fit the light probe counts. Note, however, that that platform also doesn't support bindless textures (indeed, it would be very odd for a platform to support bindless textures but not SSBOs), so we only support one of each type of light probe per drawcall there in the first place. Consequently, this isn't a performance problem, as the search will only have one light probe to consider. (In fact, clustering would probably end up being a performance loss.)

Known potential improvements include:

1. We currently cull based on a conservative bounding sphere test and not based on the oriented bounding box (OBB) of the light probe. This is improvable, but in the interests of simplicity, I opted to keep the bounding sphere test for now. The OBB improvement can be a follow-up.
2. This patch doesn't change the fact that each fragment only takes a single light probe into account. Typical light probe implementations detect the case in which multiple light probes cover the current fragment and perform some sort of weighted blend between them. As the light probe fetch function presently returns only a single light probe, implementing that feature would require more code restructuring, so I left it out for now. It can be added as a follow-up.
3. Light probe implementations typically have a falloff range. Although this is a wanted feature in Bevy, this particular commit also doesn't implement that feature, as it's out of scope.
4. This commit doesn't raise the maximum number of light probes past its current value of 8 for each type. This should be addressed later, but would possibly require more bindings on platforms with storage buffers, which would increase this patch's complexity. Even without raising the limit, this patch should constitute a significant performance improvement for scenes that get anywhere close to this limit. In the interest of keeping this patch small, I opted to leave raising the limit to a follow-up.

## Changelog

### Changed

* Light probes (reflection probes and irradiance volumes) are now clustered on most platforms, improving performance when many light probes are present.

---------

Co-authored-by: Benjamin Brienen <Benjamin.Brienen@outlook.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
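A sketch of the conservative bound described above (illustrative only, not Bevy's exact code): treat each light probe as a unit cube under an affine transform and bound it by the smallest sphere, centered at the probe's translation, that contains the eight transformed corners. Clusters then only need a cheap sphere test rather than a full OBB test.

```rust
use bevy::math::{Affine3A, Vec3};

fn probe_bounding_sphere(world_from_probe: &Affine3A) -> (Vec3, f32) {
    let center = Vec3::from(world_from_probe.translation);
    let mut radius: f32 = 0.0;
    // A light probe is a 1×1×1 cube in its local space (half-extents of 0.5).
    for x in [-0.5, 0.5] {
        for y in [-0.5, 0.5] {
            for z in [-0.5, 0.5] {
                let corner = world_from_probe.transform_point3(Vec3::new(x, y, z));
                radius = radius.max(corner.distance(center));
            }
        }
    }
    (center, radius)
}
```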
---
9da18cce2a
Add support for environment map transformation (#14290)
# Objective

- Fixes: https://github.com/bevyengine/bevy/issues/14036

## Solution

- Add a world-space transformation for the environment sample direction.

## Testing

- I have tested the newly added `transform` field using the newly added `rotate_environment_map` example: https://github.com/user-attachments/assets/2de77c65-14bc-48ee-b76a-fb4e9782dbdb

## Migration Guide

- Since we have added a new field to the `EnvironmentMapLight` struct, users will need to include `..default()` or some rotation value in their initialization code.
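A sketch of how the rotation might be applied (not from the PR text; the field name `rotation`, the spawn style, and the asset paths are assumptions based on the migration note and Bevy's example assets):

```rust
use bevy::prelude::*;
use std::f32::consts::FRAC_PI_2;

fn spawn_camera_with_rotated_environment_map(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
) {
    commands.spawn((
        Camera3d::default(),
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/pisa_diffuse_rgb9e5_zstd.ktx2"),
            specular_map: asset_server.load("environment_maps/pisa_specular_rgb9e5_zstd.ktx2"),
            intensity: 900.0,
            // Rotate the environment sample direction a quarter turn about +Y,
            // in world space.
            rotation: Quat::from_rotation_y(FRAC_PI_2),
            ..default()
        },
    ));
}
```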
---
77ed72bc16
Implement clearcoat per the Filament and the KHR_materials_clearcoat specifications. (#13031)
Clearcoat is a separate material layer that represents a thin translucent layer of a material. Examples include (from the [Filament spec]) car paint, soda cans, and lacquered wood. This commit implements support for clearcoat following the Filament and Khronos specifications, marking the beginnings of support for multiple PBR layers in Bevy.

The [`KHR_materials_clearcoat`] specification describes the clearcoat support in glTF. In Blender, applying a clearcoat to the Principled BSDF node causes the clearcoat settings to be exported via this extension. As of this commit, Bevy parses and reads the extension data when present in glTF. Note that the `gltf` crate has no support for `KHR_materials_clearcoat`; this patch therefore implements the JSON semantics manually.

Clearcoat is integrated with `StandardMaterial`, but the code is behind a series of `#ifdef`s that only activate when clearcoat is present. Additionally, the `pbr_feature_layer_material_textures` Cargo feature must be active in order to enable support for clearcoat factor maps, clearcoat roughness maps, and clearcoat normal maps. This approach mirrors the same pattern used by the existing transmission feature and exists to avoid running out of texture bindings on platforms like WebGL and WebGPU. Note that constant clearcoat factors and roughness values *are* supported in the browser; only the relatively-less-common maps are disabled on those platforms.

This patch refactors the lighting code in `StandardMaterial` significantly in order to better support multiple layers in a natural way. That code was due for a refactor in any case, so this is a nice improvement.

A new demo, `clearcoat`, has been added. It's based on [the corresponding three.js demo], but all the assets (aside from the skybox and environment map) are my original work. (Screenshots omitted.)

[Filament spec]: https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel
[`KHR_materials_clearcoat`]: https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md
[the corresponding three.js demo]: https://threejs.org/examples/webgl_materials_physical_clearcoat.html

## Changelog

### Added

* `StandardMaterial` now supports a clearcoat layer, which represents a thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension, representing materials with clearcoat layers.

## Migration Guide

* The lighting functions in the `pbr_lighting` WGSL module now have clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.
* The `R` reflection vector parameter has been removed from some lighting functions, as it was unused.
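A sketch of a clearcoated `StandardMaterial` (illustrative; the constant-factor field names follow the Filament terminology used by this PR, and the values are placeholders — the texture-map variants additionally require the Cargo feature mentioned above):

```rust
use bevy::prelude::*;

fn lacquered_wood(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        base_color: Color::srgb(0.45, 0.25, 0.10),
        perceptual_roughness: 0.6,
        // Strength of the thin translucent layer on top of the base material.
        clearcoat: 1.0,
        // The clearcoat layer has its own roughness, independent of the base.
        clearcoat_perceptual_roughness: 0.1,
        ..default()
    })
}
```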
---
b6945e5332
Stop copying the light probe array to the stack in the shader. (#11805)
This was causing a severe performance regression when light probes were enabled. Fixes #11787.
---
4c15dd0fc5
Implement irradiance volumes. (#10268)
# Objective
Bevy could benefit from *irradiance volumes*, also known as *voxel
global illumination* or simply as light probes (though this term is not
preferred, as multiple techniques can be called light probes).
Irradiance volumes are a form of baked global illumination; they work by
sampling the light at the centers of each voxel within a cuboid. At
runtime, the voxels surrounding the fragment center are sampled and
interpolated to produce indirect diffuse illumination.
## Solution
This is divided into two sections. The first is copied and pasted from
the irradiance volume module documentation and describes the technique.
The second part consists of notes on the implementation.
### Overview
An *irradiance volume* is a cuboid voxel region consisting of
regularly-spaced precomputed samples of diffuse indirect light. They're
ideal if you have a dynamic object such as a character that can move
about
static non-moving geometry such as a level in a game, and you want that
dynamic object to be affected by the light bouncing off that static
geometry.
To use irradiance volumes, you need to precompute, or *bake*, the
indirect
light in your scene. Bevy doesn't currently come with a way to do this.
Fortunately, [Blender] provides a [baking tool] as part of the Eevee
renderer, and its irradiance volumes are compatible with those used by
Bevy.
The [`bevy-baked-gi`] project provides a tool, `export-blender-gi`, that
can
extract the baked irradiance volumes from the Blender `.blend` file and
package them up into a `.ktx2` texture for use by the engine. See the
documentation in the `bevy-baked-gi` project for more details as to this
workflow.
Like all light probes in Bevy, irradiance volumes are 1×1×1 cubes that
can
be arbitrarily scaled, rotated, and positioned in a scene with the
[`bevy_transform::components::Transform`] component. The 3D voxel grid
will
be stretched to fill the interior of the cube, and the illumination from
the
irradiance volume will apply to all fragments within that bounding
region.
Bevy's irradiance volumes are based on Valve's [*ambient cubes*] as used
in
*Half-Life 2* ([Mitchell 2006], slide 27). These encode a single color
of
light from the six 3D cardinal directions and blend the sides together
according to the surface normal.
The primary reason for choosing ambient cubes is to match Blender, so
that
its Eevee renderer can be used for baking. However, they also have some
advantages over the common second-order spherical harmonics approach:
ambient cubes don't suffer from ringing artifacts, they are smaller (6
colors for ambient cubes as opposed to 9 for spherical harmonics), and
evaluation is faster. A smaller basis allows for a denser grid of voxels
with the same storage requirements.
If you wish to use a tool other than `export-blender-gi` to produce the
irradiance volumes, you'll need to pack the irradiance volumes in the
following format. The irradiance volume of resolution *(Rx, Ry, Rz)* is
expected to be a 3D texture of dimensions *(Rx, 2Ry, 3Rz)*. The
unnormalized
texture coordinate *(s, t, p)* of the voxel at coordinate *(x, y, z)*
with
side *S* ∈ *{-X, +X, -Y, +Y, -Z, +Z}* is as follows:
```text
s = x

t = y + ⎰  0 if S ∈ {-X, -Y, -Z}
        ⎱ Ry if S ∈ {+X, +Y, +Z}

        ⎧   0 if S ∈ {-X, +X}
p = z + ⎨  Rz if S ∈ {-Y, +Y}
        ⎩ 2Rz if S ∈ {-Z, +Z}
```
Visually, in a left-handed coordinate system with Y up, viewed from the
right, the 3D texture looks like a stacked series of voxel grids, one
for
each cube side, in this order:
| **+X** | **+Y** | **+Z** |
| ------ | ------ | ------ |
| **-X** | **-Y** | **-Z** |
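The layout formula above can be transcribed directly into code; this illustrative helper maps a voxel coordinate and cube side to the unnormalized texel coordinate in the 3D texture:

```rust
#[derive(Clone, Copy)]
enum Side { NegX, PosX, NegY, PosY, NegZ, PosZ }

/// Maps the voxel at (x, y, z), for the given cube `side`, of an irradiance
/// volume with resolution (rx, ry, rz) to the unnormalized texture coordinate
/// (s, t, p), per the formula above. (rx is unused: s is simply x.)
fn texel_coords(
    (x, y, z): (u32, u32, u32),
    (_rx, ry, rz): (u32, u32, u32),
    side: Side,
) -> (u32, u32, u32) {
    let s = x;
    let t = y + match side {
        Side::NegX | Side::NegY | Side::NegZ => 0,
        Side::PosX | Side::PosY | Side::PosZ => ry,
    };
    let p = z + match side {
        Side::NegX | Side::PosX => 0,
        Side::NegY | Side::PosY => rz,
        Side::NegZ | Side::PosZ => 2 * rz,
    };
    (s, t, p)
}
```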
A terminology note: Other engines may refer to irradiance volumes as
*voxel
global illumination*, *VXGI*, or simply as *light probes*. Sometimes
*light
probe* refers to what Bevy calls a reflection probe. In Bevy, *light
probe*
is a generic term that encompasses all cuboid bounding regions that
capture
indirect illumination, whether based on voxels or not.
Note that, if binding arrays aren't supported (e.g. on WebGPU or WebGL
2),
then only the closest irradiance volume to the view will be taken into
account during rendering.
[*ambient cubes*]:
https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf
[Mitchell 2006]:
https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf
[Blender]: http://blender.org/
[baking tool]:
https://docs.blender.org/manual/en/latest/render/eevee/render_settings/indirect_lighting.html
[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi
### Implementation notes
This patch generalizes light probes so as to reuse as much code as
possible between irradiance volumes and the existing reflection probes.
This approach was chosen because both techniques share numerous
similarities:
1. Both irradiance volumes and reflection probes are cuboid bounding
regions.
2. Both are responsible for providing baked indirect light.
3. Both techniques involve presenting a variable number of textures to
the shader from which indirect light is sampled. (In the current
implementation, this uses binding arrays.)
4. Both irradiance volumes and reflection probes require gathering and
sorting probes by distance on CPU.
5. Both techniques require the GPU to search through a list of bounding
regions.
6. Both will eventually want to have falloff so that we can smoothly
blend as objects enter and exit the probes' influence ranges. (This is
not implemented yet to keep this patch relatively small and reviewable.)
To do this, we generalize most of the methods in the reflection probes
patch #11366 to be generic over a trait, `LightProbeComponent`. This
trait is implemented by both `EnvironmentMapLight` (for reflection
probes) and `IrradianceVolume` (for irradiance volumes). Using a trait
will allow us to add more types of light probes in the future. In
particular, I highly suspect we will want real-time reflection planes
for mirrors in the future, which can be easily slotted into this
framework.
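The shape of that generalization, sketched with hypothetical names (the real `LightProbeComponent` trait has different methods; this only illustrates why a trait lets the CPU-side gathering code be written once for all probe types):

```rust
use bevy::prelude::*;

// Hypothetical stand-in for the real trait; the name and methods are invented
// purely to illustrate the pattern of being generic over the probe type.
trait LightProbeLike: Component {
    /// Per-probe data that eventually ends up in a GPU buffer.
    type GpuData;
    fn to_gpu_data(&self, world_from_probe: &GlobalTransform) -> Self::GpuData;
}

// One gathering routine serves every probe type that implements the trait,
// e.g. reflection probes and irradiance volumes.
fn gather_probes<C: LightProbeLike>(
    probes: &Query<(&C, &GlobalTransform)>,
) -> Vec<C::GpuData> {
    probes
        .iter()
        .map(|(probe, transform)| probe.to_gpu_data(transform))
        .collect()
}
```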
## Changelog
### Added
* A new `IrradianceVolume` asset type is available for baked voxelized
light probes. You can bake the global illumination using Blender or
another tool of your choice and use it in Bevy to apply indirect
illumination to dynamic objects.
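A spawning sketch (not from the PR text; module paths and field names are assumed from the Bevy examples of this era, and the asset path is hypothetical):

```rust
use bevy::pbr::{irradiance_volume::IrradianceVolume, LightProbe};
use bevy::prelude::*;

fn spawn_irradiance_volume(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        // Marks the entity as a light probe; the transform's scale stretches
        // the 1×1×1 probe cube over the region it should affect.
        LightProbe,
        IrradianceVolume {
            // The 3D `.ktx2` voxel texture produced by `export-blender-gi`.
            voxels: asset_server.load("irradiance_volumes/example.vxgi.ktx2"),
            intensity: 150.0,
        },
        Transform::from_xyz(0.0, 5.0, 0.0).with_scale(Vec3::new(30.0, 10.0, 30.0)),
    ));
}
```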
---
45e920c2e1
Workaround for ICE in the DXC shader compiler in debug builds with an EnvironmentMapLight (#11487)
# Objective

DXC+DX12 debug builds with an environment map have been broken since https://github.com/bevyengine/bevy/pull/11366 merged, due to an internal compiler error in DXC. I tracked it down to a single `break` statement and reported it upstream (https://github.com/microsoft/DirectXShaderCompiler/issues/6183).

## Solution

Work around the ICE by setting the for-loop index variable to the max value of the loop to avoid the `break` that's causing the ICE. This works because it's the last thing in the for loop. The `reflection_probes` and `pbr` examples both appear to still work correctly.
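The shape of the workaround, illustrated in Rust for clarity (the actual change is in the WGSL environment-map lookup loop): instead of `break`ing out of the search loop, the index is jumped past the end so the loop's own condition terminates it on the next check.

```rust
fn find_first(items: &[u32], target: u32) -> Option<usize> {
    let mut found = None;
    let mut i = 0;
    while i < items.len() {
        if items[i] == target {
            found = Some(i);
            // Stand-in for `break`: push the counter past the end so the
            // loop condition fails on the next iteration.
            i = items.len();
        } else {
            i += 1;
        }
    }
    found
}
```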
---
83d6600267
Implement minimal reflection probes (fixed macOS, iOS, and Android). (#11366)
This pull request re-submits #10057, which was backed out for breaking macOS, iOS, and Android. I've tested this version on macOS and Android and on the iOS simulator.

# Objective

This pull request implements *reflection probes*, which generalize environment maps to allow for multiple environment maps in the same scene, each of which has an axis-aligned bounding box. This is a standard feature of physically-based renderers and was inspired by [the corresponding feature in Blender's Eevee renderer].

## Solution

This is a minimal implementation of reflection probes that allows artists to define cuboid bounding regions associated with environment maps. For every view, on every frame, a system builds up a list of the nearest 4 reflection probes that are within the view's frustum and supplies that list to the shader. The PBR fragment shader searches through the list, finds the first containing reflection probe, and uses it for indirect lighting, falling back to the view's environment map if none is found. Both forward and deferred renderers are fully supported.

A reflection probe is an entity with a pair of components, *LightProbe* and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to position it in the world). The *LightProbe* component (along with the *Transform*) defines the bounding region, while the *EnvironmentMapLight* component specifies the associated diffuse and specular cubemaps. A frequent question is "why two components instead of just one?" The advantages of this setup are:

1. It's readily extensible to other types of light probes, in particular *irradiance volumes* (also known as ambient cubes or voxel global illumination), which use the same approach of bounding cuboids. With a single component that applies to both reflection probes and irradiance volumes, we can share the logic that implements falloff and blending between multiple light probes between both of those features.
2. It reduces duplication between the existing *EnvironmentMapLight* and these new reflection probes. Systems can treat environment maps attached to cameras the same way they treat environment maps applied to reflection probes if they wish.

Internally, we gather up all environment maps in the scene and place them in a cubemap array. At present, this means that all environment maps must have the same size, mipmap count, and texture format. A warning is emitted if this restriction is violated. We could potentially relax this in the future as part of the automatic mipmap generation work, which could easily do texture format conversion as part of its preprocessing.

An easy way to generate reflection probe cubemaps is to bake them in Blender and use the `export-blender-gi` tool that's part of the [`bevy-baked-gi`] project. This tool takes a `.blend` file containing baked cubemaps as input and exports cubemap images, pre-filtered with an embedded fork of the [glTF IBL Sampler], alongside a corresponding `.scn.ron` file that the scene spawner can use to recreate the reflection probes.

Note that this is intentionally a minimal implementation, to aid reviewability. Known issues are:

* Reflection probes are basically unsupported on WebGL 2, because WebGL 2 has no cubemap arrays. (Strictly speaking, you can have precisely one reflection probe in the scene if you have no other cubemaps anywhere, but this isn't very useful.)
* Reflection probes have no falloff, so reflections will abruptly change when objects move from one bounding region to another.
* As mentioned before, all cubemaps in the world of a given type (diffuse or specular) must have the same size, format, and mipmap count.

Future work includes:

* Blending between multiple reflection probes.
* A falloff/fade-out region so that reflected objects disappear gradually instead of vanishing all at once.
* Irradiance volumes for voxel-based global illumination. This should reuse much of the reflection probe logic, as they're both GI techniques based on cuboid bounding regions.
* Support for WebGL 2, by breaking batches when reflection probes are used.

These issues notwithstanding, I think it's best to land this with roughly the current set of functionality, because this patch is useful as is and adding everything above would make the pull request significantly larger and harder to review.

---

## Changelog

### Added

* A new *LightProbe* component is available that specifies a bounding region that an *EnvironmentMapLight* applies to. The combination of a *LightProbe* and an *EnvironmentMapLight* offers *reflection probe* functionality similar to that available in other engines.

[the corresponding feature in Blender's Eevee renderer]: https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html
[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi
[glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler
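A spawning sketch (not part of the PR text; it uses the current component-based spawn style rather than `SpatialBundle`, and the asset paths and intensity are placeholders):

```rust
use bevy::pbr::LightProbe;
use bevy::prelude::*;

fn spawn_reflection_probe(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        // The probe's cuboid bounding region is the 1×1×1 cube scaled by the
        // transform; fragments inside it use this probe's cubemaps.
        LightProbe,
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
            ..default()
        },
        Transform::from_xyz(0.0, 2.0, 0.0).with_scale(Vec3::splat(10.0)),
    ));
}
```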
---
3d996639a0
Revert "Implement minimal reflection probes. (#10057)" (#11307)
# Objective

- Get macOS, iOS, and Android working again on main
- Fixes #11281
- Fixes #11282
- Fixes #11283
- Fixes #11299

## Solution

- Revert #10057
---
54a943d232
Implement minimal reflection probes. (#10057)
# Objective

This pull request implements *reflection probes*, which generalize environment maps to allow for multiple environment maps in the same scene, each of which has an axis-aligned bounding box. This is a standard feature of physically-based renderers and was inspired by [the corresponding feature in Blender's Eevee renderer].

## Solution

This is a minimal implementation of reflection probes that allows artists to define cuboid bounding regions associated with environment maps. For every view, on every frame, a system builds up a list of the nearest 4 reflection probes that are within the view's frustum and supplies that list to the shader. The PBR fragment shader searches through the list, finds the first containing reflection probe, and uses it for indirect lighting, falling back to the view's environment map if none is found. Both forward and deferred renderers are fully supported.

A reflection probe is an entity with a pair of components, *LightProbe* and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to position it in the world). The *LightProbe* component (along with the *Transform*) defines the bounding region, while the *EnvironmentMapLight* component specifies the associated diffuse and specular cubemaps. A frequent question is "why two components instead of just one?" The advantages of this setup are:

1. It's readily extensible to other types of light probes, in particular *irradiance volumes* (also known as ambient cubes or voxel global illumination), which use the same approach of bounding cuboids. With a single component that applies to both reflection probes and irradiance volumes, we can share the logic that implements falloff and blending between multiple light probes between both of those features.
2. It reduces duplication between the existing *EnvironmentMapLight* and these new reflection probes. Systems can treat environment maps attached to cameras the same way they treat environment maps applied to reflection probes if they wish.

Internally, we gather up all environment maps in the scene and place them in a cubemap array. At present, this means that all environment maps must have the same size, mipmap count, and texture format. A warning is emitted if this restriction is violated. We could potentially relax this in the future as part of the automatic mipmap generation work, which could easily do texture format conversion as part of its preprocessing.

An easy way to generate reflection probe cubemaps is to bake them in Blender and use the `export-blender-gi` tool that's part of the [`bevy-baked-gi`] project. This tool takes a `.blend` file containing baked cubemaps as input and exports cubemap images, pre-filtered with an embedded fork of the [glTF IBL Sampler], alongside a corresponding `.scn.ron` file that the scene spawner can use to recreate the reflection probes.

Note that this is intentionally a minimal implementation, to aid reviewability. Known issues are:

* Reflection probes are basically unsupported on WebGL 2, because WebGL 2 has no cubemap arrays. (Strictly speaking, you can have precisely one reflection probe in the scene if you have no other cubemaps anywhere, but this isn't very useful.)
* Reflection probes have no falloff, so reflections will abruptly change when objects move from one bounding region to another.
* As mentioned before, all cubemaps in the world of a given type (diffuse or specular) must have the same size, format, and mipmap count.

Future work includes:

* Blending between multiple reflection probes.
* A falloff/fade-out region so that reflected objects disappear gradually instead of vanishing all at once.
* Irradiance volumes for voxel-based global illumination. This should reuse much of the reflection probe logic, as they're both GI techniques based on cuboid bounding regions.
* Support for WebGL 2, by breaking batches when reflection probes are used.

These issues notwithstanding, I think it's best to land this with roughly the current set of functionality, because this patch is useful as is and adding everything above would make the pull request significantly larger and harder to review.

---

## Changelog

### Added

* A new *LightProbe* component is available that specifies a bounding region that an *EnvironmentMapLight* applies to. The combination of a *LightProbe* and an *EnvironmentMapLight* offers *reflection probe* functionality similar to that available in other engines.

[the corresponding feature in Blender's Eevee renderer]: https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html
[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi
[glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler