67744bb011
152 Commits

---

54006b107b
Migrate meshes and materials to required components (#15524)

# Objective

A big step in the migration to required components: meshes and materials!

## Solution

As per the [selected proposal](https://hackmd.io/@bevy/required_components/%2Fj9-PnF-2QKK0on1KQ29UWQ):

- Deprecate `MaterialMesh2dBundle`, `MaterialMeshBundle`, and `PbrBundle`.
- Add `Mesh2d` and `Mesh3d` components, which wrap a `Handle<Mesh>`.
- Add `MeshMaterial2d<M: Material2d>` and `MeshMaterial3d<M: Material>`, which wrap a `Handle<M>`.
- Meshes *without* a mesh material should be rendered with a default material. The existence of a material is determined by `HasMaterial2d`/`HasMaterial3d`, which is required by `MeshMaterial2d`/`MeshMaterial3d`. This gets around problems with the generics.

Previously:

```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```

Now:

```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```

If the mesh material is missing, previously nothing was rendered. Now, it renders a white default `ColorMaterial` in 2D and a `StandardMaterial` in 3D (this can be overridden). In the omitted screenshots, only every other entity has a material.

Why white? This is still open for discussion, but I think white makes sense for a *default* material, while *invalid* asset handles pointing to nothing should have something like a pink material to indicate that something is broken (I don't handle that in this PR yet). This is kind of a mix of Godot and Unity: Godot just renders a white material for non-existent materials, while Unity renders nothing when no materials exist, but renders pink for invalid materials. I can also change the default material to pink if that is preferable though.

## Testing

I ran some 2D and 3D examples to test if anything changed visually. I have not tested all examples or features yet however. If anyone wants to test more extensively, it would be appreciated!

## Implementation Notes

- The relationship between `bevy_render` and `bevy_pbr` is weird here. `bevy_render` needs `Mesh3d` for its own systems, but `bevy_pbr` has all of the material logic, and `bevy_render` doesn't depend on it. I feel like the two crates should be refactored in some way, but I think that's out of scope for this PR.
- I didn't migrate meshlets to required components yet. That can probably be done in a follow-up, as this is already a huge PR.
- It is becoming increasingly clear to me that we really, *really* want to disallow raw asset handles as components. They caused me a *ton* of headache here already, and it took me a long time to find every place that queried for them or inserted them directly on entities, since there were no compiler errors for it. If we don't remove the `Component` derive, I expect raw asset handles to be a *huge* footgun for users as we transition to wrapper components, especially as handles as components have been the norm so far. I personally consider this to be a blocker for 0.15: we need to migrate to wrapper components for asset handles everywhere, and remove the `Component` derive. Also see https://github.com/bevyengine/bevy/issues/14124.

---

## Migration Guide

Asset handles for meshes and mesh materials must now be wrapped in the `Mesh2d` and `MeshMaterial2d` or `Mesh3d` and `MeshMaterial3d` components for 2D and 3D respectively. Raw handles as components no longer render meshes.

Additionally, `MaterialMesh2dBundle`, `MaterialMeshBundle`, and `PbrBundle` have been deprecated. Instead, use the mesh and material components directly.

Previously:

```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```

Now:

```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```

If the mesh material is missing, a white default material is now used. Previously, nothing was rendered if the material was missing.

The `WithMesh2d` and `WithMesh3d` query filter type aliases have also been removed. Simply use `With<Mesh2d>` or `With<Mesh3d>`.

---------

Co-authored-by: Tim Blackbird <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>

---

de888a373d
Migrate lights to required components (#15554)

# Objective

Another step in the migration to required components: lights! Note that this does not include `EnvironmentMapLight` or reflection probes yet, because their API hasn't been fully chosen yet.

## Solution

As per the [selected proposals](https://hackmd.io/@bevy/required_components/%2FLLnzwz9XTxiD7i2jiUXkJg):

- Deprecate `PointLightBundle` in favor of the `PointLight` component
- Deprecate `SpotLightBundle` in favor of the `SpotLight` component
- Deprecate `DirectionalLightBundle` in favor of the `DirectionalLight` component

## Testing

I ran some examples with lights.

---

## Migration Guide

`PointLightBundle`, `SpotLightBundle`, and `DirectionalLightBundle` have been deprecated. Use the `PointLight`, `SpotLight`, and `DirectionalLight` components instead. Adding them will now insert the other components required by them automatically.
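
For concreteness, a minimal before/after sketch of the spot light case (assuming the 0.14 `SpotLightBundle` field names and that `SpotLight` implements `Default`; not code from the PR):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // 0.14: spawn the deprecated bundle.
    commands.spawn(SpotLightBundle {
        spot_light: SpotLight {
            intensity: 100_000.0,
            ..default()
        },
        transform: Transform::from_xyz(0.0, 4.0, 0.0).looking_at(Vec3::ZERO, Vec3::Y),
        ..default()
    });

    // 0.15: spawn the component directly; the other components it requires
    // (transform, visibility, ...) are inserted automatically.
    commands.spawn((
        SpotLight {
            intensity: 100_000.0,
            ..default()
        },
        Transform::from_xyz(0.0, 4.0, 0.0).looking_at(Vec3::ZERO, Vec3::Y),
    ));
}
```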

---

56f8e526dd
The Cooler 'Retain Rendering World' (#15320)

- Adopted from #14449
- Still fixes #12144.

## Migration Guide

The retained render world is a complex change: migrating might take one of a few different forms depending on the patterns you're using. For every example, we specify in which world the code is run. Most of the changes affect render world code, so for the average Bevy user who's using Bevy's high-level rendering APIs, these changes are unlikely to affect your code.

### Spawning entities in the render world

Previously, if you spawned an entity with `world.spawn(...)`, `commands.spawn(...)` or some other method in the rendering world, it would be despawned at the end of each frame. In 0.15, this is no longer the case and so your old code could leak entities. This can be mitigated by either re-architecting your code to no longer continuously spawn entities (like you're used to in the main world), or by adding the `bevy_render::world_sync::TemporaryRenderEntity` component to the entity you're spawning. Entities tagged with `TemporaryRenderEntity` will be removed at the end of each frame (like before).

### Extract components with `ExtractComponentPlugin`

```rust
// main world
app.add_plugins(ExtractComponentPlugin::<ComponentToExtract>::default());
```

`ExtractComponentPlugin` has been changed to only work with synced entities. Entities are automatically synced if `ComponentToExtract` is added to them. However, entities are not "unsynced" if any given `ComponentToExtract` is removed, because an entity may have multiple components to extract. This would cause the other components to no longer get extracted because the entity is not synced. So be careful when only removing extracted components from entities in the render world, because it might leave an entity behind in the render world. The solution here is to avoid only removing extracted components and instead despawn the entire entity.

### Manual extraction using `Extract<Query<(Entity, ...)>>`

```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(Entity, &Clusters, &Camera)>>,
) {
    for (entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(entity).insert(...);
    }
}
```

One of the primary consequences of the retained rendering world is that there's no longer a one-to-one mapping from entity IDs in the main world to entity IDs in the render world. Unlike in Bevy 0.14, entity 42 in the main world doesn't necessarily map to entity 42 in the render world. Previous code which called `get_or_spawn(main_world_entity)` in the render world (`Extract<Query<(Entity, ...)>>` returns main world entities) will therefore no longer work as intended. Instead, you should use `&RenderEntity` and `render_entity.id()` to get the correct entity in the render world. Note that this entity does need to be synced first in order to have a `RenderEntity`. When performing manual extraction, this won't happen automatically (like with `ExtractComponentPlugin`), so add a `SyncToRenderWorld` marker component to the entities you want to extract.

This results in the following code:

```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(&RenderEntity, &Clusters, &Camera)>>,
) {
    for (render_entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(render_entity.id()).insert(...);
    }
}

// in main world, when spawning
world.spawn((Clusters::default(), Camera::default(), SyncToRenderWorld));
```

### Looking up `Entity` ids in the render world

As previously stated, there's now no correspondence between main world and render world `Entity` identifiers. Querying for `Entity` in the render world will return the `Entity` id in the render world: query for `MainEntity` (and use its `id()` method) to get the corresponding entity in the main world. This is also a good way to tell the difference between synced and unsynced entities in the render world, because unsynced entities won't have a `MainEntity` component.

---------

Co-authored-by: re0312 <re0312@outlook.com>
Co-authored-by: re0312 <45868716+re0312@users.noreply.github.com>
Co-authored-by: Periwink <charlesbour@gmail.com>
Co-authored-by: Anselmo Sampietro <ans.samp@gmail.com>
Co-authored-by: Emerson Coskey <56370779+ecoskey@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com>

---

d70595b667
Add core and alloc over std Lints (#15281)

# Objective

- Fixes #6370
- Closes #6581

## Solution

- Added the following lints to the workspace:
  - `std_instead_of_core`
  - `std_instead_of_alloc`
  - `alloc_instead_of_core`
- Used `cargo +nightly fmt` with [item level use formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Item%5C%3A) to split all `use` statements into single items.
- Used `cargo clippy --workspace --all-targets --all-features --fix --allow-dirty` to _attempt_ to resolve the new linting issues, and intervened where the lint was unable to resolve the issue automatically (usually due to needing an `extern crate alloc;` statement in a crate root).
- Manually removed certain uses of `std` where negative feature gating prevented `--all-features` from finding the offending uses.
- Used `cargo +nightly fmt` with [crate level use formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Crate%5C%3A) to re-merge all `use` statements matching Bevy's previous styling.
- Manually fixed cases where the `fmt` tool could not re-merge `use` statements due to conditional compilation attributes.

## Testing

- Ran CI locally

## Migration Guide

The MSRV is now 1.81. Please update to this version or higher.

## Notes

- This is a _massive_ change to try and push through, which is why I've outlined the semi-automatic steps I used to create this PR, in case this fails and someone else tries again in the future.
- Making this change has no impact on user code, but does mean Bevy contributors will be warned to use `core` and `alloc` instead of `std` where possible.
- This lint is a critical first step towards investigating `no_std` options for Bevy.

---------

Co-authored-by: François Mockers <francois.mockers@vleue.com>
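
As an illustration of what these lints enforce, a small sketch using crate-level attributes (the PR itself configures them at the workspace level; `frame_times` is a hypothetical function):

```rust
// Crate root (lib.rs) of a no_std-friendly crate.
#![warn(
    clippy::std_instead_of_core,
    clippy::std_instead_of_alloc,
    clippy::alloc_instead_of_core
)]

extern crate alloc;

use alloc::vec::Vec;      // rather than std::vec::Vec
use core::time::Duration; // rather than std::time::Duration

pub fn frame_times(samples: usize) -> Vec<Duration> {
    Vec::with_capacity(samples)
}
```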

---

fd329c0426
Allow to expect (adopted) (#15301)

# Objective

> Rust 1.81 released the #[expect(...)] attribute, which works like #[allow(...)] but throws a warning if the lint isn't raised. This is preferred to #[allow(...)] because it tells us when it can be removed.

- Adopts the parts of #15118 that are complete, and updates the branch so it can be merged.
- There were a few conflicts, let me know if I misjudged any of 'em.

Alice's [recommendation](https://github.com/bevyengine/bevy/issues/15059#issuecomment-2349263900) seems well-taken, let's do this crate by crate now that @BD103 has done the lion's share of this! (Relates to, but doesn't yet completely finish #15059.)

Crates this _doesn't_ cover:

- bevy_input
- bevy_gilrs
- bevy_window
- bevy_winit
- bevy_state
- bevy_render
- bevy_picking
- bevy_core_pipeline
- bevy_sprite
- bevy_text
- bevy_pbr
- bevy_ui
- bevy_gltf
- bevy_gizmos
- bevy_dev_tools
- bevy_internal
- bevy_dylib

---------

Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
Co-authored-by: Ben Frankel <ben.frankel7@gmail.com>
Co-authored-by: Antony <antony.m.3012@gmail.com>
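
A small sketch of the difference between the two attributes (hypothetical functions, not from the PR; the Clippy lint is just an example):

```rust
// With #[allow], the silence stays forever, even after the code is cleaned up.
#[allow(clippy::too_many_arguments)]
fn spawn_widget_allow(_a: u8, _b: u8, _c: u8, _d: u8, _e: u8, _f: u8, _g: u8, _h: u8) {}

// With #[expect], Clippy warns once the lint stops firing,
// which tells us the attribute can be removed.
#[expect(clippy::too_many_arguments, reason = "refactor tracked separately")]
fn spawn_widget_expect(_a: u8, _b: u8, _c: u8, _d: u8, _e: u8, _f: u8, _g: u8, _h: u8) {}

fn main() {
    spawn_widget_allow(0, 0, 0, 0, 0, 0, 0, 0);
    spawn_widget_expect(0, 0, 0, 0, 0, 0, 0, 0);
}
```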

---

afbbbd7335
Rename rendering components for improved consistency and clarity (#15035)

# Objective

The names of numerous rendering components in Bevy are inconsistent and a bit confusing. Relevant names include:

- `AutoExposureSettings`
- `AutoExposureSettingsUniform`
- `BloomSettings`
- `BloomUniform` (no `Settings`)
- `BloomPrefilterSettings`
- `ChromaticAberration` (no `Settings`)
- `ContrastAdaptiveSharpeningSettings`
- `DepthOfFieldSettings`
- `DepthOfFieldUniform` (no `Settings`)
- `FogSettings`
- `SmaaSettings`, `Fxaa`, `TemporalAntiAliasSettings` (really inconsistent??)
- `ScreenSpaceAmbientOcclusionSettings`
- `ScreenSpaceReflectionsSettings`
- `VolumetricFogSettings`

Firstly, there's a lot of inconsistency between `Foo`/`FooSettings` and `FooUniform`/`FooSettingsUniform` and whether names are abbreviated or not. Secondly, the `Settings` suffix seems unnecessary and a bit confusing semantically, since it makes it seem like the component is mostly just auxiliary configuration instead of the core *thing* that actually enables the feature. This will be an even bigger problem once bundles like `TemporalAntiAliasBundle` are deprecated in favor of required components, as users will expect a component named `TemporalAntiAlias` (or similar), not `TemporalAntiAliasSettings`.

## Solution

Drop the `Settings` suffix from the component names, and change some names to be more consistent.

- `AutoExposure`
- `AutoExposureUniform`
- `Bloom`
- `BloomUniform`
- `BloomPrefilter`
- `ChromaticAberration`
- `ContrastAdaptiveSharpening`
- `DepthOfField`
- `DepthOfFieldUniform`
- `DistanceFog`
- `Smaa`, `Fxaa`, `TemporalAntiAliasing` (note: we might want to change to `Taa`, see "Discussion")
- `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflections`
- `VolumetricFog`

I kept the old names as deprecated type aliases to make migration a bit less painful for users. We should remove them after the next release. (And let me know if I should just... not add them at all)

I also added some very basic docs for a few types where they were missing, like on `Fxaa` and `DepthOfField`.

## Discussion

- `TemporalAntiAliasing` is still inconsistent with `Smaa` and `Fxaa`. Consensus [on Discord](https://discord.com/channels/691052431525675048/743663924229963868/1280601167209955431) seemed to be that renaming to `Taa` would probably be fine, but I think it's a bit more controversial, and it would've required renaming a lot of related types like `TemporalAntiAliasNode`, `TemporalAntiAliasBundle`, and `TemporalAntiAliasPlugin`, so I think it's better to leave to a follow-up.
- I think `Fog` should probably have a more specific name like `DistanceFog` considering it seems to be distinct from `VolumetricFog`. ~~This should probably be done in a follow-up though, so I just removed the `Settings` suffix for now.~~ (done)

---

## Migration Guide

Many rendering components have been renamed for improved consistency and clarity.

- `AutoExposureSettings` → `AutoExposure`
- `BloomSettings` → `Bloom`
- `BloomPrefilterSettings` → `BloomPrefilter`
- `ContrastAdaptiveSharpeningSettings` → `ContrastAdaptiveSharpening`
- `DepthOfFieldSettings` → `DepthOfField`
- `FogSettings` → `DistanceFog`
- `SmaaSettings` → `Smaa`
- `TemporalAntiAliasSettings` → `TemporalAntiAliasing`
- `ScreenSpaceAmbientOcclusionSettings` → `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflectionsSettings` → `ScreenSpaceReflections`
- `VolumetricFogSettings` → `VolumetricFog`

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
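
For a typical migration, a minimal sketch of updating a camera (assuming the renamed `Bloom` keeps the old `BloomSettings` module path and `Default` impl, and using the pre-required-components `Camera3dBundle` API of that release):

```rust
use bevy::core_pipeline::bloom::Bloom; // was: BloomSettings
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle {
            camera: Camera {
                hdr: true, // bloom wants an HDR camera
                ..default()
            },
            ..default()
        },
        // Old: BloomSettings::default()
        Bloom::default(),
    ));
}
```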

---

6ec6a55645
Unify crate-level preludes (#15080)
# Objective
- Crate-level prelude modules, such as `bevy_ecs::prelude`, are plagued
with inconsistency! Let's fix it!
## Solution
Format all preludes based on the following rules:
1. All preludes should have brief documentation in the format of:
> The _name_ prelude.
>
> This includes the most common types in this crate, re-exported for your convenience.
2. All documentation should be outer, not inner. (`///` instead of
`//!`.)
3. No prelude modules should be annotated with `#[doc(hidden)]`. (Items
within them may, though I'm not sure why this was done.)
## Testing
- I manually searched for the term `mod prelude` and updated all
occurrences by hand. 🫠
---------
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
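
Taken together, the rules above produce prelude modules shaped roughly like this (hypothetical `bevy_foo` crate with illustrative items):

```rust
// Hypothetical crate root following the conventions above.
pub struct Foo;
pub struct FooPlugin;

/// The bevy_foo prelude.
///
/// This includes the most common types in this crate, re-exported for
/// your convenience.
pub mod prelude {
    pub use crate::{Foo, FooPlugin};
}
```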

---

4ac2a63556
Remove all existing system order ambiguities in DefaultPlugins (#15031)

# Objective

As discussed in https://github.com/bevyengine/bevy/issues/7386, system order ambiguities within `DefaultPlugins` are a source of bugs in the engine and badly pollute diagnostic output for users. We should eliminate them!

This PR is an alternative to #15027: with all external ambiguities silenced, this should be much less prone to merge conflicts and the test output should be much easier for authors to understand.

Note that system order ambiguities are still permitted in the `RenderApp`: these need a bit of thought in terms of how to test them, and will be fairly involved to fix. While these aren't *good*, they'll generally only cause graphical bugs, not logic ones.

## Solution

All remaining system order ambiguities have been resolved. Review this PR commit-by-commit to see how each of these problems was fixed.

## Testing

`cargo run --example ambiguity_detection` passes with no panics or logging!
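
To illustrate the kind of fix involved, a minimal sketch of the two usual resolutions: an explicit ordering constraint, or an explicit `ambiguous_with` declaration (hypothetical systems, not taken from the PR):

```rust
use bevy::prelude::*;

#[derive(Resource, Default)]
struct Score(u32);

fn add_bonus(mut score: ResMut<Score>) {
    score.0 += 10;
}

fn apply_penalty(mut score: ResMut<Score>) {
    score.0 = score.0.saturating_sub(5);
}

fn play_music() {}
fn play_sfx() {}

fn main() {
    App::new()
        .init_resource::<Score>()
        .add_systems(
            Update,
            (
                // Both write `Score`, so their relative order matters:
                // resolve the ambiguity with an explicit constraint.
                add_bonus,
                apply_penalty.after(add_bonus),
                // If two systems conflict but the order genuinely doesn't
                // matter, declare that explicitly instead.
                play_music.ambiguous_with(play_sfx),
                play_sfx,
            ),
        )
        .run();
}
```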

---

938d810766
Apply unused_qualifications lint (#14828)

# Objective

Fixes #14782

## Solution

Enable the lint and fix all resulting warnings (`--fix`). Also tried to figure out the false positive (see review comment).

Maybe split this PR up into multiple parts where only the last one enables the lint, so some can already be merged, resulting in fewer files touched and less potential for merge conflicts?

Currently, there are some cases where it might be easier to read the code with the qualifier, so perhaps remove the import of it and adapt its cases? In the current stage it's just a plain adoption of the suggestions in order to have a base to discuss.

## Testing

`cargo clippy` and `cargo run -p ci` are happy.
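
For reference, a minimal sketch of what this lint flags (example code, not from the PR):

```rust
#![warn(unused_qualifications)]

use core::mem;

fn swap_scores(a: &mut u32, b: &mut u32) {
    // Writing `core::mem::swap(a, b);` here would warn: unnecessary
    // qualification, because `mem` is already imported above.
    mem::swap(a, b);
}

fn main() {
    let (mut x, mut y) = (1, 2);
    swap_scores(&mut x, &mut y);
    assert_eq!((x, y), (2, 1));
}
```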

---

20c6bcdba4
Allow volumetric fog to be localized to specific, optionally voxelized, regions. (#14099)

Currently, volumetric fog is global and affects the entire scene uniformly. This is inadequate for many use cases, such as local smoke effects. To address this problem, this commit introduces *fog volumes*, which are axis-aligned bounding boxes (AABBs) that specify fog parameters inside their boundaries. Such volumes can also specify a *density texture*, a 3D texture of voxels that specifies the density of the fog at each point.

To create a fog volume, add a `FogVolume` component to an entity (which is included in the new `FogVolumeBundle` convenience bundle). Like light probes, a fog volume is conceptually a 1×1×1 cube centered on the origin; a transform can be used to position and resize this region. Many of the fields on the existing `VolumetricFogSettings` have migrated to the new `FogVolume` component. `VolumetricFogSettings` on a camera is still needed to enable volumetric fog. However, by itself `VolumetricFogSettings` is no longer sufficient to enable volumetric fog; a `FogVolume` must be present. Applications that wish to retain the old global fog behavior can simply surround the scene with a large fog volume.

By way of implementation, this commit converts the volumetric fog shader from a full-screen shader to one applied to a mesh. The strategy is different depending on whether the camera is inside or outside the fog volume. If the camera is inside the fog volume, the mesh is simply a plane scaled to the viewport, effectively falling back to a full-screen pass. If the camera is outside the fog volume, the mesh is a cube transformed to coincide with the boundaries of the fog volume's AABB. Importantly, in the latter case, only the front faces of the cuboid are rendered. Instead of treating the boundaries of the fog as a sphere centered on the camera position, as we did prior to this patch, we raytrace the far planes of the AABB to determine the portion of each ray contained within the fog volume. We then raymarch in shadow map space as usual. If a density texture is present, we modulate the fixed density value with the trilinearly-interpolated value from that texture.

Furthermore, this patch introduces optional jitter to fog volumes, intended for use with TAA. This modifies the position of the ray from frame to frame using interleaved gradient noise, in order to reduce aliasing artifacts. Many implementations of volumetric fog in games use this technique. Note that this patch makes no attempt to write a motion vector; this is because when a view ray intersects multiple voxels there's no single direction of motion. Consequently, fog volumes can have ghosting artifacts, but because fog is "ghostly" by its nature, these artifacts are less objectionable than they would be for opaque objects.

A new example, `fog_volumes`, has been added. It demonstrates a single fog volume containing a voxelized representation of the Stanford bunny. The existing `volumetric_fog` example has been updated to use the new local volumetrics API.

## Changelog

### Added

* Local `FogVolume`s are now supported, to localize fog to specific regions. They can optionally have 3D density voxel textures for precise control over the distribution of the fog.

### Changed

* `VolumetricFogSettings` on a camera no longer enables volumetric fog; instead, it simply enables the processing of `FogVolume`s within the scene.

## Migration Guide

* A `FogVolume` is now necessary in order to enable volumetric fog, in addition to `VolumetricFogSettings` on the camera. Existing uses of volumetric fog can be migrated by placing a large `FogVolume` surrounding the scene.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
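
A minimal sketch of the migration described above (assuming `FogVolume` implements `Default`, that both components are exported from `bevy::pbr`, and using a `SpatialBundle` in place of the `FogVolumeBundle` mentioned above):

```rust
use bevy::pbr::{FogVolume, VolumetricFogSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Volumetric fog is still enabled per-camera...
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));

    // ...but fog now only appears inside fog volumes. A unit cube scaled up
    // to surround the scene reproduces the old global behavior.
    commands.spawn((
        SpatialBundle::from_transform(Transform::from_scale(Vec3::splat(50.0))),
        FogVolume::default(),
    ));
}
```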

---

8e67aef96a
Register VisibleMeshEntities (#14320)

# Objective

- A recent PR added this type but never registered it, which breaks loading some glTF files.

## Solution

- Register the type.

---

d7080369a7
Fix intra-doc links and make CI test them (#14076)

# Objective

- Bevy currently has a lot of invalid intra-doc links, let's fix them!
- Also make CI test them, to avoid future regressions.
- Helps with #1983 (but doesn't fix it, as there could still be explicit links to docs.rs that are broken).

## Solution

- Make the `cargo r -p ci -- doc-check` check fail on warnings (could also be changed to just some specific lints).
- Manually fix all the warnings (note that in some cases it was unclear to me what the fix should have been; I'll try to highlight them in a self-review).

---

91cd84fea7
Refactor check_light_mesh_visibility for performance #1 (#13905)

# Objective

- First part of #13900.

## Solution

- Split `check_light_mesh_visibility` into `check_dir_light_mesh_visibility` and `check_point_light_mesh_visibility` for better review.

---

50ee483665
Meshlet misc (#13761)

- Copy module docs so that they show up in the re-export
- Change meshlet_id to cluster_id in the debug visualization
- Small doc tweaks

---

ad6872275f
Rename "point light" to "clusterable object" in cluster contexts. (#13654)

We want to use the clustering infrastructure for light probes and decals as well, not just point lights. This patch builds on top of #13640 and performs the rename.

To make this series easier to review, this patch makes no code changes. Only identifiers and comments are modified.

## Migration Guide

* In the PBR shaders, `point_lights` is now known as `clusterable_objects`, `PointLight` is now known as `ClusterableObject`, and `cluster_light_index_lists` is now known as `clusterable_object_index_lists`.

---

5c74c17c24
Move clustering-related types and functions into their own module. (#13640)

As a prerequisite for decals and clustering of light probes, we want clustering to operate on objects other than lights. (Currently, it only operates on point and spot lights.) This necessitates a large refactoring, so I'm splitting it up into small steps.

The first such step is to separate clustering from lighting by moving clustering-related types and functions out of lighting and into their own module subtree within the `bevy_pbr` crate. (Ultimately, we may want to move it to `bevy_render`, but that requires more work and can be a followup.) No code changes have been made other than adjusting import lists and moving code. This is to make this code easy to review.

Ultimately, I want to rename "light" to "clusterable object" in most cases, but doing that at the same time as moving the code would make reviewing harder. So instead I'm moving the code first and will follow this up with renaming.

## Migration Guide

* Clustering-related types and functions (e.g. `assign_lights_to_clusters`) have moved under `bevy_pbr::cluster`, in preparation for the ability to cluster objects other than lights.

---

f398674e51
Implement opt-in sharp screen-space reflections for the deferred renderer, with improved raymarching code. (#13418)

This commit, a revamp of #12959, implements screen-space reflections (SSR), which approximate real-time reflections based on raymarching through the depth buffer and copying samples from the final rendered frame. This patch is a relatively minimal implementation of SSR, so as to provide a flexible base on which to customize and build in the future. However, it's based on the production-quality [raymarching code by Tomasz Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a general basic overview of screen-space reflections, see [1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html). The raymarching shader uses the basic algorithm of tracing forward in large steps, refining that trace in smaller increments via binary search, and then using the secant method. No temporal filtering or roughness blurring is performed at all; for this reason, SSR currently only operates on very shiny surfaces. No acceleration via the hierarchical Z-buffer is implemented (though note that https://github.com/bevyengine/bevy/pull/12899 will add the infrastructure for this). Reflections are traced at full resolution, which is often considered slow. All of these improvements and more can be follow-ups.

SSR is built on top of the deferred renderer and is currently only supported in that mode. Forward screen-space reflections are possible albeit uncommon (though e.g. *Doom Eternal* uses them); however, they require tracing from the previous frame, which would add complexity. This patch leaves the door open to implementing SSR in the forward rendering path but doesn't itself have such an implementation. Screen-space reflections aren't supported in WebGL 2, because they require sampling from the depth buffer, which Naga can't do because of a bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`; this is the same reason why depth of field is disabled on that platform).

To add screen-space reflections to a camera, use the `ScreenSpaceReflectionsBundle` bundle or the `ScreenSpaceReflectionsSettings` component. In addition to `ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass` must also be present for the reflections to show up. The `ScreenSpaceReflectionsSettings` component contains several settings that artists can tweak, and also comes with sensible defaults.

A new example, `ssr`, has been added. It's loosely based on the [three.js ocean sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all the assets are original. Note that the three.js demo has no screen-space reflections and instead renders a mirror world. In contrast to #12959, this demo tests not only a cube but also a more complex model (the flight helmet).

## Changelog

### Added

* Screen-space reflections can be enabled for very smooth surfaces by adding the `ScreenSpaceReflections` component to a camera. Deferred rendering must be enabled for the reflections to appear.
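
A minimal sketch of the camera setup described above (assuming the prepass components live in `bevy::core_pipeline::prepass` and `ScreenSpaceReflectionsSettings` in `bevy::pbr`, with a `Default` impl, as in that release):

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsSettings;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // SSR runs on the deferred path, so both prepasses must be present.
        DepthPrepass,
        DeferredPrepass,
        ScreenSpaceReflectionsSettings::default(),
    ));
}
```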

---

19bfa41768
Implement volumetric fog and volumetric lighting, also known as light shafts or god rays. (#13057)

This commit implements a more physically-accurate, but slower, form of fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog* allows for light beams from directional lights to shine through, creating what is known as *light shafts* or *god rays*.

To add volumetric fog to a scene, add `VolumetricFogSettings` to the camera, and add `VolumetricLight` to directional lights that you wish to be volumetric. `VolumetricFogSettings` has numerous settings that allow you to define the accuracy of the simulation, as well as the look of the fog. Currently, only interaction with directional lights that have shadow maps is supported. Note that the overhead of the effect scales directly with the number of directional lights in use, so apply `VolumetricLight` sparingly for the best results.

The overall algorithm, which is implemented as a postprocessing effect, is a combination of the techniques described in [Scratchapixel] and [this blog post]. It uses raymarching in screen space, transformed into shadow map space for sampling and combined with physically-based modeling of absorption and scattering. Bevy employs the widely-used [Henyey-Greenstein phase function] to model asymmetry; this essentially allows light shafts to fade into and out of existence as the user views them.

Volumetric rendering is a huge subject, and I deliberately kept the scope of this commit small. Possible follow-ups include:

1. Raymarching at a lower resolution.
2. A post-processing blur (especially useful when combined with (1)).
3. Supporting point lights and spot lights.
4. Supporting lights with no shadow maps.
5. Supporting irradiance volumes and reflection probes.
6. Voxel components that reuse the volumetric fog code to create voxel shapes.
7. *Horizon: Zero Dawn*-style clouds.

These are all useful, but out of scope of this patch for now, to keep things tidy and easy to review. A new example, `volumetric_fog`, has been added to demonstrate the effect.

## Changelog

### Added

* A new component, `VolumetricFog`, is available, to allow for a more physically-accurate, but more resource-intensive, form of fog.
* A new component, `VolumetricLight`, can be placed on directional lights to make them interact with `VolumetricFog`. Notably, this allows such lights to emit light shafts/god rays.

[Scratchapixel]: https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/
[Henyey-Greenstein phase function]: https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
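
A minimal sketch of enabling the effect (assuming `VolumetricFogSettings` implements `Default`, `VolumetricLight` is a marker component, and both are exported from `bevy::pbr`):

```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // The camera opts into volumetric fog...
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));

    // ...and only directional lights tagged with VolumetricLight (and with
    // shadow maps enabled) produce light shafts.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```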

---

de9dc9c204
Fix CameraProjection panic and improve CameraProjectionPlugin (#11808)

# Objective

Fix https://github.com/bevyengine/bevy/issues/11799 and improve `CameraProjectionPlugin`.

## Solution

`CameraProjectionPlugin` is now an all-in-one plugin for adding a custom `CameraProjection`. I also added `PbrProjectionPlugin`, which is like `CameraProjectionPlugin` but for PBR.

P.S. I'd like to get this merged after https://github.com/bevyengine/bevy/pull/11766.

---

## Changelog

- Changed `CameraProjectionPlugin` to be an all-in-one plugin for adding a `CameraProjection`
- Removed `VisibilitySystems::{UpdateOrthographicFrusta, UpdatePerspectiveFrusta, UpdateProjectionFrusta}`, now replaced with `VisibilitySystems::UpdateFrusta`
- Added `PbrProjectionPlugin` for projection-specific PBR functionality.

## Migration Guide

`VisibilitySystems`'s `UpdateOrthographicFrusta`, `UpdatePerspectiveFrusta`, and `UpdateProjectionFrusta` variants were removed; they were replaced with `VisibilitySystems::UpdateFrusta`.

---

6d6810c90d
Meshlet continuous LOD (#12755)

Adds a basic level of detail system to meshlets. An extremely brief summary is as follows:

* In `from_mesh.rs`, once we've built the first level of clusters, we group clusters, simplify the new mega-clusters, and then split the simplified groups back into regular sized clusters. Repeat several times (ideally until you can't anymore). This forms a directed acyclic graph (DAG), where the children are the meshlets from the previous level, and the parents are the more simplified versions of their children. The leaf nodes are meshlets formed from the original mesh.
* In `cull_meshlets.wgsl`, each cluster selects whether to render or not based on the LOD bounding sphere (different than the culling bounding sphere) of the current meshlet, the LOD bounding sphere of its parent (the meshlet group from simplification), and the simplification error relative to its children of both the current meshlet and its parent meshlet. This kind of breaks two pass occlusion culling, which will be fixed in a future PR by using an HZB from the previous frame to get the initial list of occluders.

Many, _many_ improvements to be done in the future (https://github.com/bevyengine/bevy/issues/11518), not least of which is code quality and speed. I don't even expect this to work on many types of input meshes. This is just a basic implementation/draft for collaboration.

Arguable how much we want to do in this PR, I'll leave that up to maintainers. I've erred on the side of "as basic as possible".

References:

* Slides 27-77 (video available on youtube) https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
* https://blog.traverseresearch.nl/creating-a-directed-acyclic-graph-from-a-mesh-1329e57286e5
* https://jglrxavpok.github.io/2024/01/19/recreating-nanite-lod-generation.html, https://jglrxavpok.github.io/2024/03/12/recreating-nanite-faster-lod-generation.html, https://jglrxavpok.github.io/2024/04/02/recreating-nanite-runtime-lod-selection.html, and https://github.com/jglrxavpok/Carrot
* https://github.com/gents83/INOX/tree/master/crates/plugins/binarizer/src
* https://cs418.cs.illinois.edu/website/text/nanite.html

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>

---

f68bc01544
Run CheckVisibility after all the other visibility system sets have… (#12962)

# Objective

Make visibility system ordering explicit. Fixes #12953.

## Solution

Specify that `CheckVisibility` happens after all the other `VisibilitySystems` sets have run.

---------

Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>

---

6003a317b8
Add Cascades to the type registry, fixing lights in glTF. (#12989)

glTF files that contain lights currently panic when loaded into Bevy, because Bevy tries to reflect on `Cascades`, which accidentally wasn't registered.

---

7b8d502083
Fix beta lints (#12980)

# Objective

- Fixes #12976

## Solution

This one is a doozy.

- Run `cargo +beta clippy --workspace --all-targets --all-features` and fix all issues
- This includes:
  - Moving inner attributes to be outer attributes, when the item in question has both inner and outer attributes
  - Use `ptr::from_ref` in more scenarios
  - Extend the valid idents list used by `clippy::doc_markdown` with more names
  - Use `Clone::clone_from` when possible
  - Remove redundant `ron` import
  - Add backticks to **so many** identifiers and items
  - I'm sorry whoever has to review this

---

## Changelog

- Added links to more identifiers in documentation.

---

5caf085dac
Divide the single VisibleEntities list into separate lists for 2D meshes, 3D meshes, lights, and UI elements, for performance. (#12582)

This commit splits `VisibleEntities::entities` into four separate lists: one for lights, one for 2D meshes, one for 3D meshes, and one for UI elements. This allows `queue_material_meshes` and similar methods to avoid examining entities that are obviously irrelevant. In particular, this separation helps scenes with many skinned meshes, as the individual bones are considered visible entities but have no rendered appearance.

Internally, `VisibleEntities::entities` is a `HashMap` from the `TypeId` representing a `QueryFilter` to the appropriate `Entity` list. I had to do this because `VisibleEntities` is located within an upstream crate from the crates that provide lights (`bevy_pbr`) and 2D meshes (`bevy_sprite`). As an added benefit, this setup allows apps to provide their own types of renderable components, by simply adding a specialized `check_visibility` to the schedule.

This provides a 16.23% end-to-end speedup on `many_foxes` with 10,000 foxes (24.06 ms/frame to 20.70 ms/frame).

## Migration guide

* `check_visibility` and `VisibleEntities` now store the four types of renderable entities (2D meshes, 3D meshes, lights, and UI elements) separately. If your custom rendering code examines `VisibleEntities`, it will now need to specify which type of entity it's interested in using the `WithMesh2d`, `WithMesh`, `WithLight`, and `WithNode` types respectively. If your app introduces a new type of renderable entity, you'll need to add an explicit call to `check_visibility` to the schedule to accommodate your new component or components.

## Analysis

(Profiling screenshots omitted.)

* `many_foxes`, 10,000 foxes: `main` vs. this branch
* `queue_material_meshes` (yellow = this branch, red = `main`)
* `queue_shadows` (yellow = this branch, red = `main`)

---

11817f4ba4
Generate MeshUniforms on the GPU via compute shader where available. (#12773)

Currently, `MeshUniform`s are rather large: 160 bytes. They're also somewhat expensive to compute, because they involve taking the inverse of a 3x4 matrix. Finally, if a mesh is present in multiple views, that mesh will have a separate `MeshUniform` for each and every view, which is wasteful.

This commit fixes these issues by introducing the concept of a *mesh input uniform* and adding a *mesh uniform building* compute shader pass. The `MeshInputUniform` is simply the minimum amount of data needed for the GPU to compute the full `MeshUniform`. Most of this data is just the transform and is therefore only 64 bytes. `MeshInputUniform`s are computed during the *extraction* phase, much like skins are today, in order to avoid needlessly copying transforms around on CPU. (In fact, the render app has been changed to only store the translation of each mesh; it no longer cares about any other part of the transform, which is stored only on the GPU and the main world.) Before rendering, the `build_mesh_uniforms` pass runs to expand the `MeshInputUniform`s to the full `MeshUniform`.

The mesh uniform building pass does the following, all on GPU:

1. Copy the appropriate fields of the `MeshInputUniform` to the `MeshUniform` slot. If a single mesh is present in multiple views, this effectively duplicates it into each view.
2. Compute the inverse transpose of the model transform, used for transforming normals.
3. If applicable, copy the mesh's transform from the previous frame for TAA. To support this, we double-buffer the `MeshInputUniform`s over two frames and swap the buffers each frame. The `MeshInputUniform`s for the current frame contain the index of that mesh's `MeshInputUniform` for the previous frame.

This commit produces wins in virtually every CPU part of the pipeline: `extract_meshes`, `queue_material_meshes`, `batch_and_prepare_render_phase`, and especially `write_batched_instance_buffer` are all faster. Shrinking the amount of CPU data that has to be shuffled around speeds up the entire rendering process.

| Benchmark              | This branch | `main`  | Speedup |
|------------------------|-------------|---------|---------|
| `many_cubes -nfc`      | 17.259      | 24.529  | 42.12%  |
| `many_cubes -nfc -vpi` | 302.116     | 312.123 | 3.31%   |
| `many_foxes`           | 3.227       | 3.515   | 8.92%   |

Because mesh uniform building requires a compute shader, and WebGL 2 has no compute shaders, the existing CPU mesh uniform building code has been left as-is. Many types now have both CPU mesh uniform building and GPU mesh uniform building modes. Developers can opt into the old CPU mesh uniform building by setting the `use_gpu_uniform_builder` option on `PbrPlugin` to `false`.

Below are notes on the CPU portions of `many-cubes --no-frustum-culling` (profiler graphs omitted; yellow is this branch, red is `main`):

* `extract_meshes`: It's notable that we get a small win even though we're now writing to a GPU buffer.
* `queue_material_meshes`: There's a bit of a regression here; not sure what's causing it. In any case it's very outweighed by the other gains.
* `batch_and_prepare_render_phase`: There's a huge win here, enough to make batching basically drop off the profile.
* `write_batched_instance_buffer`: There's a massive improvement here, as expected. Note that a lot of it simply comes from the fact that `MeshInputUniform` is `Pod`. (This isn't a maintainability problem in my view because `MeshInputUniform` is so simple: just 16 tightly-packed words.)

## Changelog

### Added

* Per-mesh instance data is now generated on GPU with a compute shader instead of CPU, resulting in rendering performance improvements on platforms where compute shaders are supported.

## Migration guide

* Custom render phases now need multiple systems beyond just `batch_and_prepare_render_phase`. Code that was previously creating custom render phases should now add a `BinnedRenderPhasePlugin` or `SortedRenderPhasePlugin` as appropriate instead of directly adding `batch_and_prepare_render_phase`.
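
A minimal sketch of the CPU fallback opt-out described above, using the `use_gpu_uniform_builder` field name exactly as this description gives it (treat the field name and the `Default` impl as assumptions):

```rust
use bevy::pbr::PbrPlugin;
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(PbrPlugin {
            // Fall back to building MeshUniforms on the CPU
            // (e.g. for targets without compute shaders).
            use_gpu_uniform_builder: false,
            ..default()
        }))
        .run();
}
```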

---

ab7cbfa8fc
Consolidate Render(Ui)Materials(2d) into RenderAssets (#12827)

# Objective

- Replace `RenderMaterials` / `RenderMaterials2d` / `RenderUiMaterials` with `RenderAssets` to enable implementing changes to one thing, `RenderAssets`, that applies to all use cases, rather than duplicating changes everywhere for multiple things that should be one thing.
- Adopts #8149.

## Solution

- Make `RenderAsset` generic over the destination type rather than the source type as in #8149.
- Use `RenderAssets<PreparedMaterial<M>>` etc. for render materials.

---

## Changelog

- Changed:
  - The `RenderAsset` trait is now implemented on the destination type. Its `SourceAsset` associated type refers to the type of the source asset.
  - `RenderMaterials`, `RenderMaterials2d`, and `RenderUiMaterials` have been replaced by `RenderAssets<PreparedMaterial<M>>` and similar.

## Migration Guide

- `RenderAsset` is now implemented for the destination type rather than the source asset type. The source asset type is now the `RenderAsset` trait's `SourceAsset` associated type.

---

01649f13e2
Refactor App and SubApp internals for better separation (#9202)

# Objective

This is a necessary precursor to #9122 (this was split from that PR to reduce the amount of code to review all at once).

Moving `!Send` resource ownership to `App` will make it unambiguously `!Send`. `SubApp` must be `Send`, so it can't wrap `App`.

## Solution

Refactor `App` and `SubApp` to not have a recursive relationship. Since `SubApp` no longer wraps `App`, once `!Send` resources are moved out of `World` and into `App`, `SubApp` will become unambiguously `Send`.

There could be less code duplication between `App` and `SubApp`, but that would break `App` method chaining.

## Changelog

- `SubApp` no longer wraps `App`.
- `App` fields are no longer publicly accessible.
- `App` can no longer be converted into a `SubApp`.
- Various methods now return references to a `SubApp` instead of an `App`.

## Migration Guide

- To construct a sub-app, use `SubApp::new()`. `App` can no longer convert into `SubApp`.
- If you implemented a trait for `App`, you may want to implement it for `SubApp` as well.
- If you're accessing `app.world` directly, you now have to use `app.world()` and `app.world_mut()`.
- `App::sub_app` now returns `&SubApp`.
- `App::sub_app_mut` now returns `&mut SubApp`.
- `App::get_sub_app` now returns `Option<&SubApp>`.
- `App::get_sub_app_mut` now returns `Option<&mut SubApp>`.
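
A minimal sketch of the accessor changes from the migration guide (`RenderApp` is just an illustrative sub-app label; `DefaultPlugins` is assumed so that the render sub-app exists):

```rust
use bevy::app::SubApp;
use bevy::prelude::*;
use bevy::render::RenderApp;

fn main() {
    let mut app = App::new();
    app.add_plugins(DefaultPlugins);

    // Before: app.world.spawn_empty();
    let entity = app.world_mut().spawn_empty().id();
    assert!(app.world().entities().contains(entity));

    // Sub-app accessors now hand back SubApp references.
    if let Some(render_app) = app.get_sub_app_mut(RenderApp) {
        let _render_world: &mut World = render_app.world_mut();
    }
}
```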

---

4dadebd9c4
Improve performance by binning together opaque items instead of sorting them. (#12453)

Today, we sort all entities added to all phases, even the phases that don't strictly need sorting, such as the opaque and shadow phases. This results in a performance loss because our `PhaseItem`s are rather large in memory, so sorting is slow. Additionally, determining the boundaries of batches is an O(n) process.

This commit makes Bevy instead place phase items into *bins* keyed by *bin keys*, which have the invariant that everything in the same bin is potentially batchable. This makes determining batch boundaries O(1), because everything in the same bin can be batched. Instead of sorting each entity, we now sort only the bin keys. This drops the sorting time to near-zero on workloads with few bins like `many_cubes --no-frustum-culling`. Memory usage is improved too, with batch boundaries and dynamic indices now implicit instead of explicit. The improved memory usage results in a significant win even on unbatchable workloads like `many_cubes --no-frustum-culling --vary-material-data-per-instance`, presumably due to cache effects.

Not all phases can be binned; some, such as transparent and transmissive phases, must still be sorted. To handle this, this commit splits `PhaseItem` into `BinnedPhaseItem` and `SortedPhaseItem`. Most of the logic that today deals with `PhaseItem`s has been moved to `SortedPhaseItem`. `BinnedPhaseItem` has the new logic.

Frame time results (in ms/frame) are as follows:

| Benchmark              | `binning` | `main`  | Speedup |
|------------------------|-----------|---------|---------|
| `many_cubes -nfc -vpi` | 232.179   | 312.123 | 34.43%  |
| `many_cubes -nfc`      | 25.874    | 30.117  | 16.40%  |
| `many_foxes`           | 3.276     | 3.515   | 7.30%   |

(`-nfc` is short for `--no-frustum-culling`; `-vpi` is short for `--vary-per-instance`.)

---

## Changelog

### Changed

* Render phases have been split into binned and sorted phases. Binned phases, such as the common opaque phase, achieve improved CPU performance by avoiding the sorting step.

## Migration Guide

- `PhaseItem` has been split into `BinnedPhaseItem` and `SortedPhaseItem`. If your code has custom `PhaseItem`s, you will need to migrate them to one of these two types. `SortedPhaseItem` requires the fewest code changes, but you may want to pick `BinnedPhaseItem` if your phase doesn't require sorting, as that enables higher performance.

## Tracy graphs

(Screenshots omitted.)

Comparing `many-cubes --no-frustum-culling` on `main` and on this branch, you can see that `batch_and_prepare_binned_render_phase` is a much smaller fraction of the time. Zooming in on that function, with yellow being this branch and red being `main`, the binning happens in `queue_material_meshes`. We can see that there is a small regression in `queue_material_meshes` performance, but it's not nearly enough to outweigh the large gains in `batch_and_prepare_binned_render_phase`.

---------

Co-authored-by: James Liu <contact@jamessliu.com>

---

e62a01f403
Make PersistentGpuBufferable a safe trait (#12744)

# Objective

Fixes #12727. All parts that `PersistentGpuBuffer` interacts with should be 100% safe both on the CPU and the GPU: `Queue::write_buffer_with` zeroes out the slice being written to when uploading to the GPU, and all slice writes are bounds checked on the CPU side.

## Solution

Make `PersistentGpuBufferable` a safe trait. Enforce its correct implementation via assertions. Re-enable `forbid(unsafe_code)` on `bevy_pbr`.

---

4f20faaa43
Meshlet rendering (initial feature) (#10164)

# Objective

- Implements a more efficient, GPU-driven (https://github.com/bevyengine/bevy/issues/1342) rendering pipeline based on meshlets.
- Meshes are split into small clusters of triangles called meshlets, each of which acts as a mini index buffer into the larger mesh data. Meshlets can be compressed, streamed, culled, and batched much more efficiently than monolithic meshes.

# Misc

* Future work: https://github.com/bevyengine/bevy/issues/11518
* Nanite reference: https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
* Two pass occlusion culling explained very well: https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>

---

f096ad4155
Set the logo and favicon for all of Bevy's published crates (#12696)

# Objective

Currently the built docs only show the logo and favicon for the top-level `bevy` crate. This makes views like https://docs.rs/bevy_ecs/latest/bevy_ecs/ look potentially unrelated to the project at first glance.

## Solution

Reproduce the docs attributes for every crate that Bevy publishes. Ideally this would be done with some workspace-level Cargo.toml control, but AFAICT such support does not exist.

---

67cc605e9f
Removed Into<AssetId<T>> for Handle<T> as mentioned in #12600 (#12655)

Fixes #12600.

## Solution

Removed `Into<AssetId<T>>` for `Handle<T>` as proposed in the issue conversation, and fixed dependent code.

## Migration Guide

If you were passing a `Handle` by value where an `AssetId` was expected, pass a reference or call the `.id()` method on it instead.

Before (0.13): `assets.insert(handle, value);`

After (0.14): `assets.insert(&handle, value);` or `assets.insert(handle.id(), value);`
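
In context, a sketch of the change inside a system (`MyMeshHandle` and the choice of `Assets<Mesh>` are illustrative, not from the PR):

```rust
use bevy::prelude::*;

#[derive(Resource)]
struct MyMeshHandle(Handle<Mesh>);

fn replace_mesh(mut meshes: ResMut<Assets<Mesh>>, handle: Res<MyMeshHandle>) {
    let new_mesh = Mesh::from(Cuboid::new(1.0, 1.0, 1.0));

    // 0.13: the handle converted into an AssetId by value.
    // meshes.insert(handle.0.clone(), new_mesh);

    // 0.14: pass the id explicitly (or a reference to the handle).
    meshes.insert(handle.0.id(), new_mesh);
}
```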

---

52e3f2007b
Add "all-features = true" to docs.rs metadata for most crates (#12366)

# Objective

Fix missing `TextBundle` (and many others) which are present in the main crate as default features but optional in the sub-crate. See:

- https://docs.rs/bevy/0.13.0/bevy/ui/node_bundles/index.html
- https://docs.rs/bevy_ui/0.13.0/bevy_ui/node_bundles/index.html

~~There are probably other instances in other crates that I could track down, but maybe "all-features = true" should be used by default in all sub-crates? Not sure.~~ (There were many.) I only noticed this because rust-analyzer's "open docs" feature takes me to the sub-crate, not the main one.

## Solution

Add "all-features = true" to docs.rs metadata for crates that use features.

## Changelog

### Changed

- Unified features documented on docs.rs between main crate and sub-crates

---

9e5db9abc7
Clean up type registrations (#12314)

# Objective

Fix #12304. Remove unnecessary type registrations thanks to #4154.

## Solution

Conservatively remove type registrations: keep the top-level components, resources, and events, but drop registrations for types that are only reachable as members of those types.

---

599e5e4e76
Migrate from LegacyColor to bevy_color::Color (#12163)

# Objective

- As part of the migration process we need to a) see the end effect of the migration on user ergonomics b) check for serious perf regressions c) actually migrate the code
- To accomplish this, I'm going to attempt to migrate all of the remaining user-facing usages of `LegacyColor` in one PR, being careful to keep a clean commit history.
- Fixes #12056.

## Solution

I've chosen to use the polymorphic `Color` type as our standard user-facing API.

- [x] Migrate `bevy_gizmos`.
- [x] Take `impl Into<Color>` in all `bevy_gizmos` APIs
- [x] Migrate sprites
- [x] Migrate UI
- [x] Migrate `ColorMaterial`
- [x] Migrate `MaterialMesh2D`
- [x] Migrate fog
- [x] Migrate lights
- [x] Migrate StandardMaterial
- [x] Migrate wireframes
- [x] Migrate clear color
- [x] Migrate text
- [x] Migrate gltf loader
- [x] Register color types for reflection
- [x] Remove `LegacyColor`
- [x] Make sure CI passes

Incidental improvements to ease migration:

- added `Color::srgba_u8`, `Color::srgba_from_array` and friends
- added `set_alpha`, `is_fully_transparent` and `is_fully_opaque` to the `Alpha` trait
- add and immediately deprecate (lol) `Color::rgb` and friends in favor of more explicit and consistent `Color::srgb`
- standardized on white and black for most example text colors
- added vector field traits to `LinearRgba`: ~~`Add`, `Sub`, `AddAssign`, `SubAssign`,~~ `Mul<f32>` and `Div<f32>`. Multiplications and divisions do not scale alpha. `Add` and `Sub` have been cut from this PR.
- added `LinearRgba` and `Srgba` `RED/GREEN/BLUE`
- added `LinearRgba::to_f32_array` and `LinearRgba::to_u32`

## Migration Guide

Bevy's color types have changed! Wherever you used a `bevy::render::Color`, a `bevy::color::Color` is used instead.

These are quite similar! Both are enums storing a color in a specific color space (or to be more precise, using a specific color model). However, each of the different color models now has its own type.

TODO...

- `Color::rgba`, `Color::rgb`, `Color::rgba_u8`, `Color::rgb_u8`, `Color::rgb_from_array` are now `Color::srgba`, `Color::srgb`, `Color::srgba_u8`, `Color::srgb_u8` and `Color::srgb_from_array`.
- `Color::set_a` and `Color::a` are now `Color::set_alpha` and `Color::alpha`. These are part of the `Alpha` trait in `bevy_color`.
- `Color::is_fully_transparent` is now part of the `Alpha` trait in `bevy_color`.
- `Color::r`, `Color::set_r`, `Color::with_r` and the equivalents for `g`, `b`, `h`, `s` and `l` have been removed due to causing silent, relatively expensive conversions. Convert your `Color` into the desired color space, perform your operations there, and then convert it back into a polymorphic `Color` enum.
- `Color::hex` is now `Srgba::hex`. Call `.into` or construct a `Color::Srgba` variant manually to convert it.
- `WireframeMaterial`, `ExtractedUiNode`, `ExtractedDirectionalLight`, `ExtractedPointLight`, `ExtractedSpotLight` and `ExtractedSprite` now store a `LinearRgba`, rather than a polymorphic `Color`.
- `Color::rgb_linear` and `Color::rgba_linear` are now `Color::linear_rgb` and `Color::linear_rgba`.
- The various CSS color constants are no longer stored directly on `Color`. Instead, they're defined in the `Srgba` color space, and accessed via `bevy::color::palettes::css`. Call `.into()` on them to convert them into a `Color` for quick debugging use, and consider using the much prettier `tailwind` palette for prototyping.
- The `LIME_GREEN` color has been renamed to `LIMEGREEN` to comply with the standard naming.
- Vector field arithmetic operations on `Color` (add, subtract, multiply and divide by an f32) have been removed. Instead, convert your colors into `LinearRgba` space, and perform your operations explicitly there. This is particularly relevant when working with emissive or HDR colors, whose color channel values are routinely outside of the ordinary 0 to 1 range.
- `Color::as_linear_rgba_f32` has been removed. Call `LinearRgba::to_f32_array` instead, converting if needed.
- `Color::as_linear_rgba_u32` has been removed. Call `LinearRgba::to_u32` instead, converting if needed.
- Several other color conversion methods to transform LCH or HSL colors into float arrays or `Vec` types have been removed. Please reimplement these externally or open a PR to re-add them if you found them particularly useful.
- Various methods on `Color` such as `rgb` or `hsl` to convert the color into a specific color space have been removed. Convert into `LinearRgba`, then to the color space of your choice.
- Various implicitly-converting color value methods on `Color` such as `r`, `g`, `b` or `h` have been removed. Please convert it into the color space of your choice, then check these properties.
- `Color` no longer implements `AsBindGroup`. Store a `LinearRgba` internally instead to avoid conversion costs.

---------

Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Afonso Lage <lage.afonso@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
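
A small sketch of the new API surface described in this guide (paths as listed above; the `* 2.0` relies on the `Mul<f32>` impl this PR adds to `LinearRgba`):

```rust
use bevy::color::palettes::css;
use bevy::color::{Color, LinearRgba};

fn main() {
    // sRGB constructors replace the old Color::rgb / Color::rgba.
    let tint = Color::srgb(0.5, 0.25, 1.0);

    // Arithmetic now happens in an explicit color space, then converts back.
    let boosted: Color = (LinearRgba::from(tint) * 2.0).into();

    // CSS constants moved to bevy::color::palettes::css and are Srgba values.
    let red: Color = css::RED.into();

    println!("{tint:?} {boosted:?} {red:?}");
}
```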
||
|
|
de004da8d5
|
Rename bevy_render::Color to LegacyColor (#12069)
# Objective The migration process for `bevy_color` (#12013) will be fairly involved: there will be hundreds of affected files, and a large number of APIs. ## Solution To allow us to proceed granularly, we're going to keep both `bevy_color::Color` (new) and `bevy_render::Color` (old) around until the migration is complete. However, simply doing this directly is confusing! They're both called `Color`, making it very hard to tell when a portion of the code has been ported. As discussed in #12056, by renaming the old `Color` type, we can make it easier to gradually migrate over, one API at a time. ## Migration Guide THIS MIGRATION GUIDE INTENTIONALLY LEFT BLANK. This change should not be shipped to end users: delete this section in the final migration guide! --------- Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com> |
||
|
|
328008f904
|
Move AlphaMode into bevy_render (#12012)
# Objective - Closes #11985 ## Solution - alpha.rs has been moved from bevy_pbr into bevy_render; bevy_pbr and bevy_gltf now access `AlphaMode` through bevy_render. --- ## Migration Guide In the present implementation, external consumers of `AlphaMode` will have to access it through bevy_render rather than through bevy_pbr, changing their import from `bevy_pbr::AlphaMode` to `bevy_render::alpha::AlphaMode` (or the corresponding glob import from `bevy_pbr::prelude::*` to `bevy_render::prelude::*`). ## Uncertainties Some remaining things from this that I am uncertain about: - Here, the `app.register_type::<AlphaMode>()` call has been moved from `PbrPlugin` to `RenderPlugin`; I'm not sure if this is quite right, and I was unable to find any direct relationship between `PbrPlugin` and `RenderPlugin`. - `AlphaMode` was placed in the prelude of bevy_render. I'm not certain that this is actually appropriate. - bevy_pbr does not re-export `AlphaMode`, which makes this a breaking change for external consumers. Any of these things could be easily changed; I'm just not confident that I necessarily adopted the right approach in these (known) ways since this codebase and ecosystem is quite new to me. |
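A minimal before/after sketch of the import change described in the migration guide (the `Mask`/`Blend` variants are ordinary `AlphaMode` values used purely for illustration):

```rust
// Before this change:
// use bevy_pbr::AlphaMode;

// After this change, AlphaMode lives in bevy_render:
use bevy_render::alpha::AlphaMode;

fn pick_alpha_mode(cutout: bool) -> AlphaMode {
    // AlphaMode itself is unchanged; only its home crate moved.
    if cutout {
        AlphaMode::Mask(0.5)
    } else {
        AlphaMode::Blend
    }
}
```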
||
|
|
f83de49b7a
|
Rename Core Render Graph Labels (#11882)
# Objective #10644 introduced nice "statically typed" labels that replace the old strings. I would like to propose some changes to the names introduced: * `SubGraph2d` -> `Core2d` and `SubGraph3d` -> `Core3d`. The names of these graphs have been / should continue to be the "core 2d" graph not the "sub graph 2d" graph. The crate is called `bevy_core_pipeline`, the modules are still `core_2d` and `core_3d`, etc. * `Labels2d` and `Labels3d`, at the very least, should not be plural to follow naming conventions. A Label enum is not a "collection of labels", it is a _specific_ Label. However I think `Label2d` and `Label3d` is significantly less clear than `Node2d` and `Node3d`, so I propose those changes here. I've done the same for `LabelsPbr` -> `NodePbr` and `LabelsUi` -> `NodeUi` Additionally, #10644 accidentally made one of the Camera2dBundle constructors use the 3D graph instead of the 2D graph. I've fixed that here. --- ## Changelog * Renamed `SubGraph2d` -> `Core2d`, `SubGraph3d` -> `Core3d`, `Labels2d` -> `Node2d`, `Labels3d` -> `Node3d`, `LabelsUi` -> `NodeUi`, `LabelsPbr` -> `NodePbr` |
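For reference, a minimal sketch of how the render-graph example from #10644 (reproduced further down this page) reads after these renames; the `RenderGraphApp` import path and the `&mut App` receiver are assumptions about the surrounding setup:

```rust
use bevy::app::App;
use bevy::core_pipeline::core_3d::graph::{Core3d, Node3d};
use bevy::render::render_graph::{RenderGraphApp, RenderLabel};

#[derive(Debug, Hash, PartialEq, Eq, Clone, RenderLabel)]
struct MyRenderLabel;

fn add_my_edges(render_app: &mut App) {
    // Same wiring as the #10644 example, with the renamed labels:
    // SubGraph3d -> Core3d, Labels3d -> Node3d.
    render_app.add_render_graph_edges(
        Core3d,
        (
            Node3d::Tonemapping,
            MyRenderLabel,
            Node3d::EndMainPassPostProcessing,
        ),
    );
}
```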
||
|
|
dc9b486650
|
Change light defaults & fix light examples (#11581)
# Objective Fix https://github.com/bevyengine/bevy/issues/11577. ## Solution Fix the examples, add a few constants to make setting light values easier, and change the default lighting settings to be more realistic. (Now designed for an overcast day instead of an indoor environment) --- I did not include any example-related changes in here. ## Changelogs (not including breaking changes) ### bevy_pbr - Added `light_consts` module (included in prelude), which contains common lux and lumen values for lights. - Added `AmbientLight::NONE` constant, which is an ambient light with a brightness of 0. - Added non-EV100 variants for `ExposureSettings`'s EV100 constants, which allow easier construction of an `ExposureSettings` from an EV100 constant. ## Breaking changes ### bevy_pbr Several default lighting values were changed: - `PointLight`'s default `intensity` is now `2000.0` - `SpotLight`'s default `intensity` is now `2000.0` - `DirectionalLight`'s default `illuminance` is now `light_consts::lux::OVERCAST_DAY` (`1000.`) - `AmbientLight`'s default `brightness` is now `20.0` |
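A small sketch of the new constants in use (the bundle-based spawning shown here is the usual pattern at this point in Bevy's history, not something added by this PR):

```rust
use bevy::pbr::light_consts;
use bevy::prelude::*;

fn setup_lighting(mut commands: Commands, mut ambient: ResMut<AmbientLight>) {
    // Turn ambient light off entirely with the new constant.
    *ambient = AmbientLight::NONE;

    // Use a named lux value instead of a magic number; this is also the
    // new default illuminance for DirectionalLight.
    commands.spawn(DirectionalLightBundle {
        directional_light: DirectionalLight {
            illuminance: light_consts::lux::OVERCAST_DAY,
            ..default()
        },
        ..default()
    });
}
```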
||
|
|
e169b2b217
|
Missing registrations (#11736)
# Objective During my exploratory work on the remote editor, I found a couple of types that were either not registered, or that were missing `ReflectDefault`. ## Solution - Added registration and `ReflectDefault` where applicable - (Drive by fix) Moved `Option<f32>` registration to `bevy_core` instead of `bevy_ui`, along with similar types. --- ## Changelog - Fixed: Registered `FogSettings`, `FogFalloff`, `ParallaxMappingMethod`, `OpaqueRendererMethod` structs for reflection - Fixed: Registered `ReflectDefault` trait for `ColorGrading` and `CascadeShadowConfig` structs |
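For context, registrations of this kind boil down to `App::register_type` calls; a rough sketch (which plugin each call actually lives in is not shown in this summary):

```rust
use bevy::pbr::{FogFalloff, FogSettings, OpaqueRendererMethod, ParallaxMappingMethod};
use bevy::prelude::*;

fn register_missing_types(app: &mut App) {
    // Reflection-based tooling (editors, scene serialization) only sees
    // types that have been registered in the type registry.
    app.register_type::<FogSettings>()
        .register_type::<FogFalloff>()
        .register_type::<ParallaxMappingMethod>()
        .register_type::<OpaqueRendererMethod>();
}
```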
||
|
|
694c06f3d0
|
Inverse missing_docs logic (#11676)
# Objective Currently the `missing_docs` lint is allowed-by-default and enabled at the crate level when a crate's documentation is complete (see #3492). This PR proposes to invert this logic by making `missing_docs` warn-by-default and marking crates with incomplete docs as allowed. ## Solution Makes `missing_docs` warn at the workspace level and allowed at the crate level when the docs are incomplete. |
||
|
|
16d28ccb91
|
RenderGraph Labelization (#10644)
# Objective
The whole `Cow<'static, str>` naming for nodes and subgraphs in
`RenderGraph` is a mess.
## Solution
Replaces hardcoded and potentially overlapping strings for nodes and
subgraphs inside `RenderGraph` with Bevy's label system.
---
## Changelog
* Two new labels: `RenderLabel` and `RenderSubGraph`.
* Replaced all uses of hardcoded strings with those labels
* Moved `Taa` label from its own mod to all the other `Labels3d`
* `add_render_graph_edges` now needs a tuple of labels
* Moved `ScreenSpaceAmbientOcclusion` label from its own mod with the
`ShadowPass` label to `LabelsPbr`
* Removed `NodeId`
* Renamed `Edges.id()` to `Edges.label()`
* Removed `NodeLabel`
* Changed examples according to the new label system
* Introduced new `RenderLabel`s: `Labels2d`, `Labels3d`, `LabelsPbr`,
`LabelsUi`
* Introduced new `RenderSubGraph`s: `SubGraph2d`, `SubGraph3d`,
`SubGraphUi`
* Removed `Reflect` and `Default` derive from `CameraRenderGraph`
component struct
* Improved some error messages
## Migration Guide
For Nodes and SubGraphs, instead of using hardcoded strings, you now
pass labels, which can be derived with structs and enums.
```rs
// old
#[derive(Default)]
struct MyRenderNode;
impl MyRenderNode {
pub const NAME: &'static str = "my_render_node";
}
render_app
.add_render_graph_node::<ViewNodeRunner<MyRenderNode>>(
core_3d::graph::NAME,
MyRenderNode::NAME,
)
.add_render_graph_edges(
core_3d::graph::NAME,
&[
core_3d::graph::node::TONEMAPPING,
MyRenderNode::NAME,
core_3d::graph::node::END_MAIN_PASS_POST_PROCESSING,
],
);
// new
use bevy::core_pipeline::core_3d::graph::{Labels3d, SubGraph3d};
use bevy::render::render_graph::RenderLabel;
#[derive(Debug, Hash, PartialEq, Eq, Clone, RenderLabel)]
pub struct MyRenderLabel;
#[derive(Default)]
struct MyRenderNode;
render_app
.add_render_graph_node::<ViewNodeRunner<MyRenderNode>>(
SubGraph3d,
MyRenderLabel,
)
.add_render_graph_edges(
SubGraph3d,
(
Labels3d::Tonemapping,
MyRenderLabel,
Labels3d::EndMainPassPostProcessing,
),
);
```
### SubGraphs
#### in `bevy_core_pipeline::core_2d::graph`
| old string-based path | new label |
|-----------------------|-----------|
| `NAME` | `SubGraph2d` |
#### in `bevy_core_pipeline::core_3d::graph`
| old string-based path | new label |
|-----------------------|-----------|
| `NAME` | `SubGraph3d` |
#### in `bevy_ui::render`
| old string-based path | new label |
|-----------------------|-----------|
| `draw_ui_graph::NAME` | `graph::SubGraphUi` |
### Nodes
#### in `bevy_core_pipeline::core_2d::graph`
| old string-based path | new label |
|-----------------------|-----------|
| `node::MSAA_WRITEBACK` | `Labels2d::MsaaWriteback` |
| `node::MAIN_PASS` | `Labels2d::MainPass` |
| `node::BLOOM` | `Labels2d::Bloom` |
| `node::TONEMAPPING` | `Labels2d::Tonemapping` |
| `node::FXAA` | `Labels2d::Fxaa` |
| `node::UPSCALING` | `Labels2d::Upscaling` |
| `node::CONTRAST_ADAPTIVE_SHARPENING` | `Labels2d::ConstrastAdaptiveSharpening` |
| `node::END_MAIN_PASS_POST_PROCESSING` | `Labels2d::EndMainPassPostProcessing` |
#### in `bevy_core_pipeline::core_3d::graph`
| old string-based path | new label |
|-----------------------|-----------|
| `node::MSAA_WRITEBACK` | `Labels3d::MsaaWriteback` |
| `node::PREPASS` | `Labels3d::Prepass` |
| `node::DEFERRED_PREPASS` | `Labels3d::DeferredPrepass` |
| `node::COPY_DEFERRED_LIGHTING_ID` | `Labels3d::CopyDeferredLightingId` |
| `node::END_PREPASSES` | `Labels3d::EndPrepasses` |
| `node::START_MAIN_PASS` | `Labels3d::StartMainPass` |
| `node::MAIN_OPAQUE_PASS` | `Labels3d::MainOpaquePass` |
| `node::MAIN_TRANSMISSIVE_PASS` | `Labels3d::MainTransmissivePass` |
| `node::MAIN_TRANSPARENT_PASS` | `Labels3d::MainTransparentPass` |
| `node::END_MAIN_PASS` | `Labels3d::EndMainPass` |
| `node::BLOOM` | `Labels3d::Bloom` |
| `node::TONEMAPPING` | `Labels3d::Tonemapping` |
| `node::FXAA` | `Labels3d::Fxaa` |
| `node::UPSCALING` | `Labels3d::Upscaling` |
| `node::CONTRAST_ADAPTIVE_SHARPENING` | `Labels3d::ContrastAdaptiveSharpening` |
| `node::END_MAIN_PASS_POST_PROCESSING` | `Labels3d::EndMainPassPostProcessing` |
#### in `bevy_core_pipeline`
| old string-based path | new label |
|-----------------------|-----------|
| `taa::draw_3d_graph::node::TAA` | `Labels3d::Taa` |
#### in `bevy_pbr`
| old string-based path | new label |
|-----------------------|-----------|
| `draw_3d_graph::node::SHADOW_PASS` | `LabelsPbr::ShadowPass` |
| `ssao::draw_3d_graph::node::SCREEN_SPACE_AMBIENT_OCCLUSION` | `LabelsPbr::ScreenSpaceAmbientOcclusion` |
| `deferred::DEFFERED_LIGHTING_PASS` | `LabelsPbr::DeferredLightingPass` |
#### in `bevy_render`
| old string-based path | new label |
|-----------------------|-----------|
| `main_graph::node::CAMERA_DRIVER` | `graph::CameraDriverLabel` |
#### in `bevy_ui::render`
| old string-based path | new label |
|-----------------------|-----------|
| `draw_ui_graph::node::UI_PASS` | `graph::LabelsUi::UiPass` |
---
## Future work
* Make `NodeSlot`s also use types. Ideally, we have an enum with unit
variants where every variant represents one slot. Then, to make sure you
are using the right slot enum and to make rust-analyzer play nicely with
it, we should add an associated type to the `Node` trait. With today's
system, third-party slots can be introduced on a node, and I wasn't sure
if that was used anywhere, so I didn't do this in this PR.
## Unresolved Questions
When looking at the `post_processing` example, we have a struct for the
label and a struct for the node. This seems like boilerplate, and on
Discord, @IceSentry (sorry for the ping)
[asked](https://discord.com/channels/691052431525675048/743663924229963868/1175197016947699742)
whether a node could automatically introduce a label (or I completely
misunderstood that). The problem is that nodes like
`EmptyNode` exist multiple times *inside the same* (sub)graph, so we
need external labels to distinguish between them. Hopefully we can
find a way to reduce boilerplate and still keep everything unique. For
`EmptyNode`, we could maybe make a macro which implements an "empty node"
for a type, but for nodes which contain code and need to be present
multiple times, this could get nasty...
|
||
|
|
83d6600267
|
Implement minimal reflection probes (fixed macOS, iOS, and Android). (#11366)
This pull request re-submits #10057, which was backed out for breaking macOS, iOS, and Android. I've tested this version on macOS and Android and on the iOS simulator. # Objective This pull request implements *reflection probes*, which generalize environment maps to allow for multiple environment maps in the same scene, each of which has an axis-aligned bounding box. This is a standard feature of physically-based renderers and was inspired by [the corresponding feature in Blender's Eevee renderer]. ## Solution This is a minimal implementation of reflection probes that allows artists to define cuboid bounding regions associated with environment maps. For every view, on every frame, a system builds up a list of the nearest 4 reflection probes that are within the view's frustum and supplies that list to the shader. The PBR fragment shader searches through the list, finds the first containing reflection probe, and uses it for indirect lighting, falling back to the view's environment map if none is found. Both forward and deferred renderers are fully supported. A reflection probe is an entity with a pair of components, *LightProbe* and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to position it in the world). The *LightProbe* component (along with the *Transform*) defines the bounding region, while the *EnvironmentMapLight* component specifies the associated diffuse and specular cubemaps. A frequent question is "why two components instead of just one?" The advantages of this setup are: 1. It's readily extensible to other types of light probes, in particular *irradiance volumes* (also known as ambient cubes or voxel global illumination), which use the same approach of bounding cuboids. With a single component that applies to both reflection probes and irradiance volumes, we can share the logic that implements falloff and blending between multiple light probes between both of those features. 2. It reduces duplication between the existing *EnvironmentMapLight* and these new reflection probes. Systems can treat environment maps attached to cameras the same way they treat environment maps applied to reflection probes if they wish. Internally, we gather up all environment maps in the scene and place them in a cubemap array. At present, this means that all environment maps must have the same size, mipmap count, and texture format. A warning is emitted if this restriction is violated. We could potentially relax this in the future as part of the automatic mipmap generation work, which could easily do texture format conversion as part of its preprocessing. An easy way to generate reflection probe cubemaps is to bake them in Blender and use the `export-blender-gi` tool that's part of the [`bevy-baked-gi`] project. This tool takes a `.blend` file containing baked cubemaps as input and exports cubemap images, pre-filtered with an embedded fork of the [glTF IBL Sampler], alongside a corresponding `.scn.ron` file that the scene spawner can use to recreate the reflection probes. Note that this is intentionally a minimal implementation, to aid reviewability. Known issues are: * Reflection probes are basically unsupported on WebGL 2, because WebGL 2 has no cubemap arrays. (Strictly speaking, you can have precisely one reflection probe in the scene if you have no other cubemaps anywhere, but this isn't very useful.) * Reflection probes have no falloff, so reflections will abruptly change when objects move from one bounding region to another. 
* As mentioned before, all cubemaps in the world of a given type (diffuse or specular) must have the same size, format, and mipmap count. Future work includes: * Blending between multiple reflection probes. * A falloff/fade-out region so that reflected objects disappear gradually instead of vanishing all at once. * Irradiance volumes for voxel-based global illumination. This should reuse much of the reflection probe logic, as they're both GI techniques based on cuboid bounding regions. * Support for WebGL 2, by breaking batches when reflection probes are used. These issues notwithstanding, I think it's best to land this with roughly the current set of functionality, because this patch is useful as is and adding everything above would make the pull request significantly larger and harder to review. --- ## Changelog ### Added * A new *LightProbe* component is available that specifies a bounding region that an *EnvironmentMapLight* applies to. The combination of a *LightProbe* and an *EnvironmentMapLight* offers *reflection probe* functionality similar to that available in other engines. [the corresponding feature in Blender's Eevee renderer]: https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html [`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi [glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler |
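A rough sketch of spawning a reflection probe as described above. This is not code from the PR: the asset paths are placeholders, and the exact `EnvironmentMapLight` field set (only `diffuse_map` and `specular_map` are shown; some versions also carry additional fields) and the module paths are assumptions:

```rust
use bevy::pbr::{EnvironmentMapLight, LightProbe};
use bevy::prelude::*;

fn spawn_reflection_probe(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        // The transform (together with LightProbe) defines the cuboid
        // bounding region this environment map applies to.
        SpatialBundle::from_transform(
            Transform::from_xyz(0.0, 1.0, 0.0).with_scale(Vec3::splat(10.0)),
        ),
        LightProbe,
        // Prefiltered diffuse/specular cubemaps baked offline (placeholder paths).
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/probe_diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/probe_specular.ktx2"),
        },
    ));
}
```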
||
|
|
3d996639a0
|
Revert "Implement minimal reflection probes. (#10057)" (#11307)
# Objective - Fix working on macOS, iOS, Android on main - Fixes #11281 - Fixes #11282 - Fixes #11283 - Fixes #11299 ## Solution - Revert #10057 |
||
|
|
a657478675
|
resolve all internal ambiguities (#10411)
- ignore all ambiguities that are not a problem - remove `.before(Assets::<Image>::track_assets),` that points into a different schedule (-> should this be caught?) - add some explicit orderings: - run `poll_receivers` and `update_accessibility_nodes` after `window_closed` in `bevy_winit::accessibility` - run `bevy_ui::accessibility::calc_bounds` after `CameraUpdateSystem` - run ` bevy_text::update_text2d_layout` and `bevy_ui::text_system` after `font_atlas_set::remove_dropped_font_atlas_sets` - add `app.ignore_ambiguity(a, b)` function for cases where you want to ignore an ambiguity between two independent plugins `A` and `B` - add `IgnoreAmbiguitiesPlugin` in `DefaultPlugins` that allows cross-crate ambiguities like `bevy_animation`/`bevy_ui` - Fixes https://github.com/bevyengine/bevy/issues/9511 ## Before **Render**  **PostUpdate**  ## After **Render**  **PostUpdate**  --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com> Co-authored-by: François <mockersf@gmail.com> |
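A loose illustration of the kind of explicit ordering added here, with a hypothetical system standing in for the real ones:

```rust
use bevy::prelude::*;
use bevy::render::camera::CameraUpdateSystem;

// Hypothetical system standing in for e.g. bevy_ui::accessibility::calc_bounds.
fn update_widget_bounds() {}

fn build(app: &mut App) {
    // Make the dependency explicit instead of leaving the order ambiguous:
    // camera updates run first, then the bounds calculation.
    app.add_systems(PostUpdate, update_widget_bounds.after(CameraUpdateSystem));
}
```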
||
|
|
54a943d232
|
Implement minimal reflection probes. (#10057)
# Objective This pull request implements *reflection probes*, which generalize environment maps to allow for multiple environment maps in the same scene, each of which has an axis-aligned bounding box. This is a standard feature of physically-based renderers and was inspired by [the corresponding feature in Blender's Eevee renderer]. ## Solution This is a minimal implementation of reflection probes that allows artists to define cuboid bounding regions associated with environment maps. For every view, on every frame, a system builds up a list of the nearest 4 reflection probes that are within the view's frustum and supplies that list to the shader. The PBR fragment shader searches through the list, finds the first containing reflection probe, and uses it for indirect lighting, falling back to the view's environment map if none is found. Both forward and deferred renderers are fully supported. A reflection probe is an entity with a pair of components, *LightProbe* and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to position it in the world). The *LightProbe* component (along with the *Transform*) defines the bounding region, while the *EnvironmentMapLight* component specifies the associated diffuse and specular cubemaps. A frequent question is "why two components instead of just one?" The advantages of this setup are: 1. It's readily extensible to other types of light probes, in particular *irradiance volumes* (also known as ambient cubes or voxel global illumination), which use the same approach of bounding cuboids. With a single component that applies to both reflection probes and irradiance volumes, we can share the logic that implements falloff and blending between multiple light probes between both of those features. 2. It reduces duplication between the existing *EnvironmentMapLight* and these new reflection probes. Systems can treat environment maps attached to cameras the same way they treat environment maps applied to reflection probes if they wish. Internally, we gather up all environment maps in the scene and place them in a cubemap array. At present, this means that all environment maps must have the same size, mipmap count, and texture format. A warning is emitted if this restriction is violated. We could potentially relax this in the future as part of the automatic mipmap generation work, which could easily do texture format conversion as part of its preprocessing. An easy way to generate reflection probe cubemaps is to bake them in Blender and use the `export-blender-gi` tool that's part of the [`bevy-baked-gi`] project. This tool takes a `.blend` file containing baked cubemaps as input and exports cubemap images, pre-filtered with an embedded fork of the [glTF IBL Sampler], alongside a corresponding `.scn.ron` file that the scene spawner can use to recreate the reflection probes. Note that this is intentionally a minimal implementation, to aid reviewability. Known issues are: * Reflection probes are basically unsupported on WebGL 2, because WebGL 2 has no cubemap arrays. (Strictly speaking, you can have precisely one reflection probe in the scene if you have no other cubemaps anywhere, but this isn't very useful.) * Reflection probes have no falloff, so reflections will abruptly change when objects move from one bounding region to another. * As mentioned before, all cubemaps in the world of a given type (diffuse or specular) must have the same size, format, and mipmap count. Future work includes: * Blending between multiple reflection probes. 
* A falloff/fade-out region so that reflected objects disappear gradually instead of vanishing all at once. * Irradiance volumes for voxel-based global illumination. This should reuse much of the reflection probe logic, as they're both GI techniques based on cuboid bounding regions. * Support for WebGL 2, by breaking batches when reflection probes are used. These issues notwithstanding, I think it's best to land this with roughly the current set of functionality, because this patch is useful as is and adding everything above would make the pull request significantly larger and harder to review. --- ## Changelog ### Added * A new *LightProbe* component is available that specifies a bounding region that an *EnvironmentMapLight* applies to. The combination of a *LightProbe* and an *EnvironmentMapLight* offers *reflection probe* functionality similar to that available in other engines. [the corresponding feature in Blender's Eevee renderer]: https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html [`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi [glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler |
||
|
|
dd14f3a477
|
Implement lightmaps. (#10231)
 # Objective Lightmaps, textures that store baked global illumination, have been a mainstay of real-time graphics for decades. Bevy currently has no support for them, so this pull request implements them. ## Solution The new `Lightmap` component can be attached to any entity that contains a `Handle<Mesh>` and a `StandardMaterial`. When present, it will be applied in the PBR shader. Because multiple lightmaps are frequently packed into atlases, each lightmap may have its own UV boundaries within its texture. An `exposure` field is also provided, to control the brightness of the lightmap. Note that this PR doesn't provide any way to bake the lightmaps. That can be done with [The Lightmapper] or another solution, such as Unity's Bakery. --- ## Changelog ### Added * A new component, `Lightmap`, is available, for baked global illumination. If your mesh has a second UV channel (UV1), and you attach this component to the entity with that mesh, Bevy will apply the texture referenced in the lightmap. [The Lightmapper]: https://github.com/Naxela/The_Lightmapper --------- Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
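A hedged sketch of attaching a lightmap to an existing mesh entity. The asset path is a placeholder, and the `image` field name plus the availability of a `Default` impl for the remaining fields are assumptions that may differ between versions:

```rust
use bevy::pbr::Lightmap;
use bevy::prelude::*;

fn attach_lightmap(commands: &mut Commands, asset_server: &AssetServer, mesh_entity: Entity) {
    // `mesh_entity` is assumed to already carry a Handle<Mesh> (with a UV1
    // channel matching the bake) and a StandardMaterial.
    commands.entity(mesh_entity).insert(Lightmap {
        image: asset_server.load("lightmaps/scene_lightmap.ktx2"),
        // Leave the remaining fields (UV boundaries etc.) at their defaults.
        ..default()
    });
}
```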
||
|
|
6b84ba97a3
|
Auto insert sync points (#9822)
# Objective

- Users are often confused when their command effects are not visible in the next system. This PR auto inserts sync points if there are deferred buffers on a system and there are dependents on that system (systems with after relationships).
- Manual sync points can lead to users adding more than needed, and it's hard for the user to have a global understanding of their system graph to know which sync points can be merged. However, we can easily calculate which sync points can be merged automatically.

## Solution

1. Add new edge types to allow opting out of the new behavior.
2. Insert a sync point for each edge whose initial node has deferred system params.
3. Reuse nodes if they're the same number of sync points away.

* add opt outs for specific edges with `after_ignore_deferred`, `before_ignore_deferred` and `chain_ignore_deferred`. The `auto_insert_apply_deferred` boolean on `ScheduleBuildSettings` can be set to false to opt out for the whole schedule.

## Perf

This has a small negative effect on schedule build times.

```text
group                                        auto-sync                      main-for-auto-sync
-----                                        ---------                      ------------------
build_schedule/1000_schedule                 1.06     2.8±0.15s ? ?/sec     1.00     2.7±0.06s ? ?/sec
build_schedule/1000_schedule_noconstraints   1.01    26.2±0.88ms ? ?/sec    1.00    25.8±0.36ms ? ?/sec
build_schedule/100_schedule                  1.02    13.1±0.33ms ? ?/sec    1.00    12.9±0.28ms ? ?/sec
build_schedule/100_schedule_noconstraints    1.08   505.3±29.30µs ? ?/sec   1.00   469.4±12.48µs ? ?/sec
build_schedule/500_schedule                  1.00   485.5±6.29ms ? ?/sec    1.00   485.5±9.80ms ? ?/sec
build_schedule/500_schedule_noconstraints    1.00     6.8±0.10ms ? ?/sec    1.02     6.9±0.16ms ? ?/sec
```

---

## Changelog

- Auto insert sync points and added `after_ignore_deferred`, `before_ignore_deferred`, `chain_ignore_deferred` and `auto_insert_apply_deferred` APIs to opt out of this behavior

## Migration Guide

- `apply_deferred` points are added automatically when there is an ordering relationship with a system that has deferred parameters like `Commands`. If you want to opt out of this you can switch from `after`, `before`, and `chain` to the corresponding `ignore_deferred` API, `after_ignore_deferred`, `before_ignore_deferred` or `chain_ignore_deferred` for your system/set ordering.
- You can also set `ScheduleBuildSettings::auto_insert_apply_deferred` to `false` if you want to do it for the whole schedule. Note that in this mode you can still add `apply_deferred` points manually.
- For most manual insertions of `apply_deferred` you should remove them, as they cannot be merged with the automatically inserted points and might reduce parallelizability of the system graph.

## TODO

- [x] remove any apply_deferred used in the engine
- [x] ~~decide if we should deprecate manually using apply_deferred.~~ We'll still allow inserting manual sync points for now for whatever edge cases users might have.
- [x] Update migration guide
- [x] rerun schedule build benchmarks

---------

Co-authored-by: Joseph <21144246+JoJoJet@users.noreply.github.com> |
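A brief sketch of the opt-out API from the migration guide, using hypothetical systems:

```rust
use bevy::prelude::*;

fn spawn_enemies(mut commands: Commands) {
    commands.spawn(Name::new("enemy"));
}

fn count_enemies(query: Query<&Name>) {
    info!("enemies so far: {}", query.iter().count());
}

fn build(app: &mut App) {
    // With auto-inserted sync points, the commands from `spawn_enemies` are
    // applied before `count_enemies` runs, because of the `after` edge.
    app.add_systems(Update, (spawn_enemies, count_enemies.after(spawn_enemies)));
}

fn build_without_sync(app: &mut App) {
    // Opting out for this edge keeps the old behavior: no sync point is
    // inserted, so newly spawned entities may not be visible yet.
    app.add_systems(
        Update,
        (spawn_enemies, count_enemies.after_ignore_deferred(spawn_enemies)),
    );
}
```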
||
|
|
fd308571c4
|
Remove unnecessary path prefixes (#10749)
# Objective - Shorten paths by removing unnecessary prefixes ## Solution - Remove the prefixes from many paths which do not need them. Finding the paths was done automatically using built-in refactoring tools in Jetbrains RustRover. |
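A trivial, generic illustration of the kind of cleanup this performs (hypothetical code, not an actual hunk from the diff):

```rust
// Before: the full path repeated at the use site.
// fn make_rotation() -> bevy::math::Quat {
//     bevy::math::Quat::from_rotation_z(0.5)
// }

// After: import once and drop the redundant prefix.
use bevy::math::Quat;

fn make_rotation() -> Quat {
    Quat::from_rotation_z(0.5)
}
```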
||
|
|
0e9f6e92ea
|
Add clippy::manual_let_else at warn level to lints (#10684)
# Objective Related to #10612. Enable the [`clippy::manual_let_else`](https://rust-lang.github.io/rust-clippy/master/#manual_let_else) lint as a warning. The `let else` form seems more idiomatic to me than a `match`/`if else` that either match a pattern or diverge, and from the clippy doc, the lint doesn't seem to have any possible false positive. ## Solution Add the lint as warning in `Cargo.toml`, refactor places where the lint triggers. |