
# Objective

- Add a [Deferred Renderer](https://en.wikipedia.org/wiki/Deferred_shading) to Bevy.
- This allows subsequent passes to access per-pixel material information before/during shading.
- Accessing this per-pixel material information is needed for some features, like GI. It also makes other features (e.g. decals) simpler to implement and/or improves their capability. There are multiple approaches to accomplishing this. The deferred shading approach works well given the limitations of WebGPU and WebGL2.

Motivation: [I'm working on a GI solution for Bevy](https://youtu.be/eH1AkL-mwhI)

# Solution

- The deferred renderer is implemented with a prepass and a deferred lighting pass.
- The prepass renders opaque objects into the Gbuffer attachment (`Rgba32Uint`). The PBR shader generates a `PbrInput` in mostly the same way as the forward implementation and then [packs it into the Gbuffer](ec1465559f/crates/bevy_pbr/src/render/pbr.wgsl (L168)).
- The deferred lighting pass unpacks the `PbrInput` and [feeds it into the pbr() function](ec1465559f/crates/bevy_pbr/src/deferred/deferred_lighting.wgsl (L65)), then outputs the shaded color data.
- There is now a resource [DefaultOpaqueRendererMethod](ec1465559f/crates/bevy_pbr/src/material.rs (L599)) that can be used to set the default render method for opaque materials. If a material returns `None` from [opaque_render_method()](ec1465559f/crates/bevy_pbr/src/material.rs (L131)), the `DefaultOpaqueRendererMethod` is used. Otherwise, custom materials can explicitly choose to support only Deferred or only Forward by returning the respective [OpaqueRendererMethod](ec1465559f/crates/bevy_pbr/src/material.rs (L603)) (see the sketch after this list).
- Deferred materials can be used seamlessly alongside both opaque and transparent forward rendered materials in the same scene. The [deferred rendering example](https://github.com/DGriffin91/bevy/blob/deferred/examples/3d/deferred_rendering.rs) does this.
- The deferred renderer does not support MSAA. If any deferred materials are used, MSAA must be disabled. Both TAA and FXAA are supported.
- Deferred rendering supports WebGL2/WebGPU.
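Below is a minimal sketch of opting a scene into deferred shading via `DefaultOpaqueRendererMethod` and overriding the method for one material. It assumes the Bevy-0.12-era names (`DefaultOpaqueRendererMethod::deferred()`, a `DeferredPrepass` camera component, an `opaque_render_method` field on `StandardMaterial`); the exact API may differ from the revision linked above.

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::{DefaultOpaqueRendererMethod, OpaqueRendererMethod};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Deferred does not support MSAA; disable it (TAA/FXAA still work).
        .insert_resource(Msaa::Off)
        // Opaque materials that don't pick a method render deferred by default.
        .insert_resource(DefaultOpaqueRendererMethod::deferred())
        .add_systems(Startup, setup)
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    // The Gbuffer is written during the prepass, so the camera needs the
    // depth/deferred prepass components.
    commands.spawn((Camera3dBundle::default(), DepthPrepass, DeferredPrepass));

    // This material opts out of the default and forces forward shading;
    // it still coexists with deferred materials in the same scene.
    commands.spawn(PbrBundle {
        mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
        material: materials.add(StandardMaterial {
            base_color: Color::ORANGE,
            opaque_render_method: OpaqueRendererMethod::Forward,
            ..default()
        }),
        ..default()
    });
}
```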
) does this. So does [this example](https://github.com/DGriffin91/bevy_glowy_orb_tutorial/blob/deferred/assets/shaders/glowy.wgsl#L56). - Custom deferred materials that require PBR lighting can create a `PbrInput`, write it to the deferred GBuffer and let it be rendered by the `PBRDeferredLightingPlugin`. - Custom deferred materials that require custom lighting have two options: 1. Use the base_color channel of the `PbrInput` combined with the `STANDARD_MATERIAL_FLAGS_UNLIT_BIT` flag. [Example.](https://github.com/DGriffin91/bevy_glowy_orb_tutorial/blob/deferred/assets/shaders/glowy.wgsl#L56) (If the unlit bit is set, the base_color is stored as RGB9E5 for extra precision) 2. A Custom Deferred Lighting pass can be created, either overriding the default, or running in addition. The a depth buffer is used to limit rendering to only the required fragments for each deferred lighting pass. Materials can set their respective depth id via the [deferred_lighting_pass_id](b79182d2a3/crates/bevy_pbr/src/prepass/prepass_io.wgsl (L95)
) attachment. The custom deferred lighting pass plugin can then set [its corresponding depth](ec1465559f/crates/bevy_pbr/src/deferred/deferred_lighting.wgsl (L37)
). Then with the lighting pass using [CompareFunction::Equal](ec1465559f/crates/bevy_pbr/src/deferred/mod.rs (L335)
), only the fragments with a depth that equal the corresponding depth written in the material will be rendered. Custom deferred lighting plugins can also be created to render the StandardMaterial. The default deferred lighting plugin can be bypassed with `DefaultPlugins.set(PBRDeferredLightingPlugin { bypass: true })` --------- Co-authored-by: nickrart <nickolas.g.russell@gmail.com>
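As a follow-up to the custom lighting discussion above, here is a minimal sketch of swapping the built-in deferred lighting pass for a user-provided one. The `bypass` flag and plugin name are taken verbatim from the description above; the import path and `MyDeferredLightingPlugin` are illustrative placeholders, not part of the actual API.

```rust
use bevy::prelude::*;
// Assumed import path for the plugin named in the description above.
use bevy::pbr::deferred::PBRDeferredLightingPlugin;

// Hypothetical stand-in for a user-provided deferred lighting pass. In a real
// implementation, build() would register a lighting pipeline that draws with
// CompareFunction::Equal against its own deferred_lighting_pass_id depth.
struct MyDeferredLightingPlugin;

impl Plugin for MyDeferredLightingPlugin {
    fn build(&self, _app: &mut App) {
        // Set up the custom deferred lighting pass here.
    }
}

fn main() {
    App::new()
        // Bypass the default deferred lighting plugin, as described above...
        .add_plugins(DefaultPlugins.set(PBRDeferredLightingPlugin { bypass: true }))
        // ...and run a custom lighting pass instead (or in addition).
        .add_plugins(MyDeferredLightingPlugin)
        .run();
}
```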
`bevy_pbr::pbr_functions` (WebGPU Shading Language, 343 lines, 14 KiB):
```wgsl
#define_import_path bevy_pbr::pbr_functions

#ifdef TONEMAP_IN_SHADER
#import bevy_core_pipeline::tonemapping
#endif

#import bevy_pbr::pbr_types as pbr_types
#import bevy_pbr::pbr_bindings as pbr_bindings
#import bevy_pbr::mesh_view_bindings as view_bindings
#import bevy_pbr::mesh_view_types as mesh_view_types
#import bevy_pbr::lighting as lighting
#import bevy_pbr::clustered_forward as clustering
#import bevy_pbr::shadows as shadows
#import bevy_pbr::fog as fog
#import bevy_pbr::ambient as ambient
#ifdef ENVIRONMENT_MAP
#import bevy_pbr::environment_map
#endif

#import bevy_pbr::mesh_types MESH_FLAGS_SHADOW_RECEIVER_BIT

fn alpha_discard(material: pbr_types::StandardMaterial, output_color: vec4<f32>) -> vec4<f32> {
    var color = output_color;

    let alpha_mode = material.flags & pbr_types::STANDARD_MATERIAL_FLAGS_ALPHA_MODE_RESERVED_BITS;
    if alpha_mode == pbr_types::STANDARD_MATERIAL_FLAGS_ALPHA_MODE_OPAQUE {
        // NOTE: If rendering as opaque, alpha should be ignored so set to 1.0
        color.a = 1.0;
    }

#ifdef MAY_DISCARD
    else if alpha_mode == pbr_types::STANDARD_MATERIAL_FLAGS_ALPHA_MODE_MASK {
        if color.a >= material.alpha_cutoff {
            // NOTE: If rendering as masked alpha and >= the cutoff, render as fully opaque
            color.a = 1.0;
        } else {
            // NOTE: output_color.a < in.material.alpha_cutoff should not be rendered
            discard;
        }
    }
#endif

    return color;
}

fn prepare_world_normal(
    world_normal: vec3<f32>,
    double_sided: bool,
    is_front: bool,
) -> vec3<f32> {
    var output: vec3<f32> = world_normal;
#ifndef VERTEX_TANGENTS
#ifndef STANDARDMATERIAL_NORMAL_MAP
    // NOTE: When NOT using normal-mapping, if looking at the back face of a double-sided
    // material, the normal needs to be inverted. This is a branchless version of that.
    output = (f32(!double_sided || is_front) * 2.0 - 1.0) * output;
#endif
#endif
    return output;
}

fn apply_normal_mapping(
    standard_material_flags: u32,
    world_normal: vec3<f32>,
#ifdef VERTEX_TANGENTS
#ifdef STANDARDMATERIAL_NORMAL_MAP
    world_tangent: vec4<f32>,
#endif
#endif
#ifdef VERTEX_UVS
    uv: vec2<f32>,
#endif
    mip_bias: f32,
) -> vec3<f32> {
    // NOTE: The mikktspace method of normal mapping explicitly requires that the world normal NOT
    // be re-normalized in the fragment shader. This is primarily to match the way mikktspace
    // bakes vertex tangents and normal maps so that this is the exact inverse. Blender, Unity,
    // Unreal Engine, Godot, and more all use the mikktspace method. Do not change this code
    // unless you really know what you are doing.
    // http://www.mikktspace.com/
    var N: vec3<f32> = world_normal;

#ifdef VERTEX_TANGENTS
#ifdef STANDARDMATERIAL_NORMAL_MAP
    // NOTE: The mikktspace method of normal mapping explicitly requires that these NOT be
    // normalized nor any Gram-Schmidt applied to ensure the vertex normal is orthogonal to the
    // vertex tangent! Do not change this code unless you really know what you are doing.
    // http://www.mikktspace.com/
    var T: vec3<f32> = world_tangent.xyz;
    var B: vec3<f32> = world_tangent.w * cross(N, T);
#endif
#endif

#ifdef VERTEX_TANGENTS
#ifdef VERTEX_UVS
#ifdef STANDARDMATERIAL_NORMAL_MAP
    // Nt is the tangent-space normal.
    var Nt = textureSampleBias(pbr_bindings::normal_map_texture, pbr_bindings::normal_map_sampler, uv, mip_bias).rgb;
    if (standard_material_flags & pbr_types::STANDARD_MATERIAL_FLAGS_TWO_COMPONENT_NORMAL_MAP) != 0u {
        // Only use the xy components and derive z for 2-component normal maps.
        Nt = vec3<f32>(Nt.rg * 2.0 - 1.0, 0.0);
        Nt.z = sqrt(1.0 - Nt.x * Nt.x - Nt.y * Nt.y);
    } else {
        Nt = Nt * 2.0 - 1.0;
    }
    // Normal maps authored for DirectX require flipping the y component
    if (standard_material_flags & pbr_types::STANDARD_MATERIAL_FLAGS_FLIP_NORMAL_MAP_Y) != 0u {
        Nt.y = -Nt.y;
    }
    // NOTE: The mikktspace method of normal mapping maps the tangent-space normal from
    // the normal map texture in this way to be an EXACT inverse of how the normal map baker
    // calculates the normal maps so there is no error introduced. Do not change this code
    // unless you really know what you are doing.
    // http://www.mikktspace.com/
    N = Nt.x * T + Nt.y * B + Nt.z * N;
#endif
#endif
#endif

    return normalize(N);
}

// NOTE: Correctly calculates the view vector depending on whether
// the projection is orthographic or perspective.
fn calculate_view(
    world_position: vec4<f32>,
    is_orthographic: bool,
) -> vec3<f32> {
    var V: vec3<f32>;
    if is_orthographic {
        // Orthographic view vector
        V = normalize(vec3<f32>(view_bindings::view.view_proj[0].z, view_bindings::view.view_proj[1].z, view_bindings::view.view_proj[2].z));
    } else {
        // Only valid for a perspective projection
        V = normalize(view_bindings::view.world_position.xyz - world_position.xyz);
    }
    return V;
}

#ifndef PREPASS_FRAGMENT
fn pbr(
    in: pbr_types::PbrInput,
) -> vec4<f32> {
    var output_color: vec4<f32> = in.material.base_color;

    // TODO use .a for exposure compensation in HDR
    let emissive = in.material.emissive;

    // calculate non-linear roughness from linear perceptualRoughness
    let metallic = in.material.metallic;
    let perceptual_roughness = in.material.perceptual_roughness;
    let roughness = lighting::perceptualRoughnessToRoughness(perceptual_roughness);

    let occlusion = in.occlusion;

    output_color = alpha_discard(in.material, output_color);

    // Neubelt and Pettineo 2013, "Crafting a Next-gen Material Pipeline for The Order: 1886"
    let NdotV = max(dot(in.N, in.V), 0.0001);

    // Remapping [0,1] reflectance to F0
    // See https://google.github.io/filament/Filament.html#materialsystem/parameterization/remapping
    let reflectance = in.material.reflectance;
    let F0 = 0.16 * reflectance * reflectance * (1.0 - metallic) + output_color.rgb * metallic;

    // Diffuse strength inversely related to metallicity
    let diffuse_color = output_color.rgb * (1.0 - metallic);

    let R = reflect(-in.V, in.N);

    let f_ab = lighting::F_AB(perceptual_roughness, NdotV);

    var direct_light: vec3<f32> = vec3<f32>(0.0);

    let view_z = dot(vec4<f32>(
        view_bindings::view.inverse_view[0].z,
        view_bindings::view.inverse_view[1].z,
        view_bindings::view.inverse_view[2].z,
        view_bindings::view.inverse_view[3].z
    ), in.world_position);
    let cluster_index = clustering::fragment_cluster_index(in.frag_coord.xy, view_z, in.is_orthographic);
    let offset_and_counts = clustering::unpack_offset_and_counts(cluster_index);

    // Point lights (direct)
    for (var i: u32 = offset_and_counts[0]; i < offset_and_counts[0] + offset_and_counts[1]; i = i + 1u) {
        let light_id = clustering::get_light_id(i);
        var shadow: f32 = 1.0;
        if ((in.flags & MESH_FLAGS_SHADOW_RECEIVER_BIT) != 0u
                && (view_bindings::point_lights.data[light_id].flags & mesh_view_types::POINT_LIGHT_FLAGS_SHADOWS_ENABLED_BIT) != 0u) {
            shadow = shadows::fetch_point_shadow(light_id, in.world_position, in.world_normal);
        }
        let light_contrib = lighting::point_light(in.world_position.xyz, light_id, roughness, NdotV, in.N, in.V, R, F0, f_ab, diffuse_color);
        direct_light += light_contrib * shadow;
    }

    // Spot lights (direct)
    for (var i: u32 = offset_and_counts[0] + offset_and_counts[1]; i < offset_and_counts[0] + offset_and_counts[1] + offset_and_counts[2]; i = i + 1u) {
        let light_id = clustering::get_light_id(i);

        var shadow: f32 = 1.0;
        if ((in.flags & MESH_FLAGS_SHADOW_RECEIVER_BIT) != 0u
                && (view_bindings::point_lights.data[light_id].flags & mesh_view_types::POINT_LIGHT_FLAGS_SHADOWS_ENABLED_BIT) != 0u) {
            shadow = shadows::fetch_spot_shadow(light_id, in.world_position, in.world_normal);
        }
        let light_contrib = lighting::spot_light(in.world_position.xyz, light_id, roughness, NdotV, in.N, in.V, R, F0, f_ab, diffuse_color);
        direct_light += light_contrib * shadow;
    }

    // directional lights (direct)
    let n_directional_lights = view_bindings::lights.n_directional_lights;
    for (var i: u32 = 0u; i < n_directional_lights; i = i + 1u) {
        var shadow: f32 = 1.0;
        if ((in.flags & MESH_FLAGS_SHADOW_RECEIVER_BIT) != 0u
                && (view_bindings::lights.directional_lights[i].flags & mesh_view_types::DIRECTIONAL_LIGHT_FLAGS_SHADOWS_ENABLED_BIT) != 0u) {
            shadow = shadows::fetch_directional_shadow(i, in.world_position, in.world_normal, view_z);
        }
        var light_contrib = lighting::directional_light(i, roughness, NdotV, in.N, in.V, R, F0, f_ab, diffuse_color);
#ifdef DIRECTIONAL_LIGHT_SHADOW_MAP_DEBUG_CASCADES
        light_contrib = shadows::cascade_debug_visualization(light_contrib, i, view_z);
#endif
        direct_light += light_contrib * shadow;
    }

    // Ambient light (indirect)
    var indirect_light = ambient::ambient_light(in.world_position, in.N, in.V, NdotV, diffuse_color, F0, perceptual_roughness, occlusion);

    // Environment map light (indirect)
#ifdef ENVIRONMENT_MAP
    let environment_light = bevy_pbr::environment_map::environment_map_light(perceptual_roughness, roughness, diffuse_color, NdotV, f_ab, in.N, R, F0);
    indirect_light += (environment_light.diffuse * occlusion) + environment_light.specular;
#endif

    let emissive_light = emissive.rgb * output_color.a;

    // Total light
    output_color = vec4<f32>(
        direct_light + indirect_light + emissive_light,
        output_color.a
    );

    output_color = clustering::cluster_debug_visualization(
        output_color,
        view_z,
        in.is_orthographic,
        offset_and_counts,
        cluster_index,
    );

    return output_color;
}
#endif // PREPASS_FRAGMENT

#ifndef PREPASS_FRAGMENT
fn apply_fog(fog_params: mesh_view_types::Fog, input_color: vec4<f32>, fragment_world_position: vec3<f32>, view_world_position: vec3<f32>) -> vec4<f32> {
    let view_to_world = fragment_world_position.xyz - view_world_position.xyz;

    // `length()` is used here instead of just `view_to_world.z` since that produces higher
    // quality results, especially for denser/smaller fogs. we get a "curved"
    // fog shape that remains consistent with camera rotation, instead of a "linear"
    // fog shape that looks a bit fake
    let distance = length(view_to_world);

    var scattering = vec3<f32>(0.0);
    if fog_params.directional_light_color.a > 0.0 {
        let view_to_world_normalized = view_to_world / distance;
        let n_directional_lights = view_bindings::lights.n_directional_lights;
        for (var i: u32 = 0u; i < n_directional_lights; i = i + 1u) {
            let light = view_bindings::lights.directional_lights[i];
            scattering += pow(
                max(
                    dot(view_to_world_normalized, light.direction_to_light),
                    0.0
                ),
                fog_params.directional_light_exponent
            ) * light.color.rgb;
        }
    }

    if fog_params.mode == mesh_view_types::FOG_MODE_LINEAR {
        return fog::linear_fog(fog_params, input_color, distance, scattering);
    } else if fog_params.mode == mesh_view_types::FOG_MODE_EXPONENTIAL {
        return fog::exponential_fog(fog_params, input_color, distance, scattering);
    } else if fog_params.mode == mesh_view_types::FOG_MODE_EXPONENTIAL_SQUARED {
        return fog::exponential_squared_fog(fog_params, input_color, distance, scattering);
    } else if fog_params.mode == mesh_view_types::FOG_MODE_ATMOSPHERIC {
        return fog::atmospheric_fog(fog_params, input_color, distance, scattering);
    } else {
        return input_color;
    }
}
#endif // PREPASS_FRAGMENT

#ifdef PREMULTIPLY_ALPHA
fn premultiply_alpha(standard_material_flags: u32, color: vec4<f32>) -> vec4<f32> {
    // `Blend`, `Premultiplied` and `Alpha` all share the same `BlendState`. Depending
    // on the alpha mode, we premultiply the color channels by the alpha channel value,
    // (and also optionally replace the alpha value with 0.0) so that the result produces
    // the desired blend mode when sent to the blending operation.
#ifdef BLEND_PREMULTIPLIED_ALPHA
    // For `BlendState::PREMULTIPLIED_ALPHA_BLENDING` the blend function is:
    //
    // result = 1 * src_color + (1 - src_alpha) * dst_color
    let alpha_mode = standard_material_flags & pbr_types::STANDARD_MATERIAL_FLAGS_ALPHA_MODE_RESERVED_BITS;
    if alpha_mode == pbr_types::STANDARD_MATERIAL_FLAGS_ALPHA_MODE_ADD {
        // Here, we premultiply `src_color` by `src_alpha`, and replace `src_alpha` with 0.0:
        //
        // src_color *= src_alpha
        // src_alpha = 0.0
        //
        // We end up with:
        //
        // result = 1 * (src_alpha * src_color) + (1 - 0) * dst_color
        // result = src_alpha * src_color + 1 * dst_color
        //
        // Which is the blend operation for additive blending
        return vec4<f32>(color.rgb * color.a, 0.0);
    } else {
        // Here, we don't do anything, so that we get premultiplied alpha blending. (As expected)
        return color.rgba;
    }
#endif
    // `Multiply` uses its own `BlendState`, but we still need to premultiply here in the
    // shader so that we get correct results as we tweak the alpha channel
#ifdef BLEND_MULTIPLY
    // The blend function is:
    //
    // result = dst_color * src_color + (1 - src_alpha) * dst_color
    //
    // We premultiply `src_color` by `src_alpha`:
    //
    // src_color *= src_alpha
    //
    // We end up with:
    //
    // result = dst_color * (src_color * src_alpha) + (1 - src_alpha) * dst_color
    // result = src_alpha * (src_color * dst_color) + (1 - src_alpha) * dst_color
    //
    // Which is the blend operation for multiplicative blending with arbitrary mixing
    // controlled by the source alpha channel
    return vec4<f32>(color.rgb * color.a, color.a);
#endif
}
#endif
```