# Objective
Another Clippy lint fix: the goal is for `ci lints` (when run on
nightly) to actually display the problems that a contributor caused,
and not a bunch of pre-existing issues in the repo.
## Solution
This fixes all but the `clippy::needless_lifetimes` lint, which would
result in substantially more changes and will be handled in separate PR(s). I also
explicitly allow `non_local_definitions` since it is [not working
correctly, but will be
fixed](https://github.com/rust-lang/rust/issues/131643).
A few things were fixed manually: for example, some places defined and
used their own `div_ceil` function, which is no longer needed now that
this function is stable on unsigned integers. Also, empty lines in doc
comments were handled individually.
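For illustration, here is a minimal example of the standard-library
replacement (`div_ceil` is stable on unsigned integers since Rust 1.73):

```rust
fn main() {
    // Previously, some crates defined their own helper, e.g.:
    // fn div_ceil(a: u32, b: u32) -> u32 { (a + b - 1) / b }
    // The standard library now provides this on unsigned integers:
    assert_eq!(10u32.div_ceil(3), 4);
    assert_eq!(9u32.div_ceil(3), 3);
}
```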
## Testing
I ran `cargo clippy --workspace --all-targets --all-features --fix
--allow-staged` with the `clippy::needless_lifetimes` lint marked as
`allow` in `Cargo.toml` to avoid fixing that too. It now passes with all
but the listed lint.
The previous fixes were breaking pretty much everything on main, because
naga-oil complained about the OIT shader not being loaded, since
apparently `webgl` is a default feature. This fix is a bit messier, but it
properly warns the user and is probably what we should have gone for in
the first place.
# Objective
- bevy_render is gargantuan
## Solution
- Split out bevy_mesh
## Testing
- Ran some examples, everything looks fine
## Migration Guide
`bevy_render::mesh::morph::inherit_weights` is now
`bevy_render::mesh::inherit_weights`.
If you were using `Mesh::compute_aabb`, you will need to `use
bevy_render::mesh::MeshAabb;` now.
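As a minimal sketch of the new import, assuming the trait is exported
from `bevy_render::mesh`:

```rust
use bevy_render::mesh::{Mesh, MeshAabb};

fn print_bounds(mesh: &Mesh) {
    // `compute_aabb` is now a method of the `MeshAabb` trait,
    // so the trait must be in scope.
    if let Some(aabb) = mesh.compute_aabb() {
        println!("{aabb:?}");
    }
}
```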
---------
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
Fixes #15541
A bunch of lifetimes were added during the Assets V2 rework, but after
moving to async traits in #12550 they can be elided. That PR mentions
that this might be the case, but apparently it wasn't followed up on at
the time.
~~I ended up grepping for `<'a` and finding a similar case in
`bevy_reflect` which I also fixed.~~ (edit: that one was needed
apparently)
Note that elided lifetimes are unstable in `impl Trait`. If that gets
stabilized then we can elide even more.
## Solution
Remove the extra lifetimes.
## Testing
Everything still compiles. If I have messed something up there is a
small risk that some user code stops compiling, but all the examples
still work at least.
---
## Migration Guide
The `AssetLoader`, `AssetSaver`, and `Process` traits from
`bevy_asset` now use elided lifetimes. If you implement these, remove
the named lifetime.
# Objective
* Remove all uses of render_resource_wrapper.
* Make it easier to share a `wgpu::Device` between Bevy and application
code.
## Solution
Removed the `render_resource_wrapper` macro.
To improve the `RenderCreation::Manual` API, `ErasedRenderDevice` was
replaced by `Arc`. Unfortunately I had to introduce one more usage of
`WgpuWrapper` which seems like an unwanted constraint on the caller.
## Testing
- Did you test these changes? If so, how?
  - Ran `cargo test`.
  - Ran a few examples.
  - Used `RenderCreation::Manual` in my own project.
  - Exercised `RenderCreation::Automatic` through examples.
- Are there any parts that need more testing?
  - No.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
  - Run examples.
  - Use `RenderCreation::Manual` in their own project.
# Objective
- Fixes #6370
- Closes #6581
## Solution
- Added the following lints to the workspace:
- `std_instead_of_core`
- `std_instead_of_alloc`
- `alloc_instead_of_core`
- Used `cargo +nightly fmt` with [item level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Item%5C%3A)
to split all `use` statements into single items.
- Used `cargo clippy --workspace --all-targets --all-features --fix
--allow-dirty` to _attempt_ to resolve the new linting issues, and
intervened where the lint was unable to resolve the issue automatically
(usually due to needing an `extern crate alloc;` statement in a crate
root).
- Manually removed certain uses of `std` where negative feature gating
prevented `--all-features` from finding the offending uses.
- Used `cargo +nightly fmt` with [crate level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Crate%5C%3A)
to re-merge all `use` statements matching Bevy's previous styling.
- Manually fixed cases where the `fmt` tool could not re-merge `use`
statements due to conditional compilation attributes.
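For reference, a minimal sketch of the crate-root pattern these lints
(and the `extern crate alloc;` step above) push toward:

```rust
// Crate root: make the `alloc` crate available explicitly.
extern crate alloc;

// Import from `core`/`alloc` rather than `std` where possible.
use alloc::{format, string::String, vec::Vec};
use core::time::Duration;

fn format_timings(ms: &[u64]) -> Vec<String> {
    ms.iter()
        .map(|&m| format!("{:?}", Duration::from_millis(m)))
        .collect()
}
```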
## Testing
- Ran CI locally
## Migration Guide
The MSRV is now 1.81. Please update to this version or higher.
## Notes
- This is a _massive_ change to try and push through, which is why I've
outlined the semi-automatic steps I used to create this PR, in case this
fails and someone else tries again in the future.
- Making this change has no impact on user code, but does mean Bevy
contributors will be warned to use `core` and `alloc` instead of `std`
where possible.
- This lint is a critical first step towards investigating `no_std`
options for Bevy.
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
Adds a new `Handle<Storage>` asset type that can be used as a render
asset, particularly for use with `AsBindGroup`.
Closes: #13658
# Objective
Allow users to create storage buffers in the main world without having
to access the `RenderDevice`. While this resource is technically
available, it's bad form to use in the main world and requires mixing
rendering details with main world code. Additionally, this makes storage
buffers easier to use with `AsBindGroup`, particularly in the following
scenarios:
- Sharing the same buffers between a compute stage and material shader.
We already have examples of this for storage textures (see game of life
example) and these changes allow a similar pattern to be used with
storage buffers.
- Preventing repeated GPU uploads (see the previous, easier-to-use `Vec`
`AsBindGroup` option).
- Allowing custom materials to be initialized using `Default`. Previously,
the lack of a `Default` implementation for the raw `wgpu::Buffer` type made
implementing an `AsBindGroup + Default` bound difficult in the presence
of buffers.
## Solution
Adds a new `Handle<Storage>` asset type that is prepared into a
`GpuStorageBuffer` render asset. This asset can either be initialized
with a `Vec<u8>` of properly aligned data or with a size hint. Users can
modify the underlying `wgpu::BufferDescriptor` to provide additional
usage flags.
## Migration Guide
The `AsBindGroup` `storage` attribute has been modified to reference the
new `Handle<Storage>` asset instead. Usages of `Vec` should be converted
into assets.
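As a rough sketch of the intended usage, with the names taken from this
PR's description (the exact attribute syntax is illustrative):

```rust
use bevy::asset::{Asset, Handle};
use bevy::reflect::TypePath;
use bevy::render::render_resource::AsBindGroup;

// Hypothetical material; `Storage` is the asset type named in this PR.
#[derive(Asset, TypePath, AsBindGroup, Clone, Default)]
struct MyMaterial {
    // Previously this attribute pointed at in-memory data such as a `Vec`;
    // it now references a storage buffer asset handle, so the same buffer
    // can be shared (e.g. between a compute stage and a material shader).
    #[storage(0)]
    buffer: Handle<Storage>,
}
```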
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
- Fixes #14974
## Solution
- Replace all* instances of `NonZero*` with `NonZero<*>`
## Testing
- CI passed locally.
---
## Notes
Within the `bevy_reflect` implementations for `std` types,
`impl_reflect_value!()` will continue to use the type aliases instead,
as it inappropriately parses the concrete type parameter as a generic
argument. If the `ZeroablePrimitive` trait was stable, or the macro
could be modified to accept a finite list of types, then we could fully
migrate.
# Objective
- Fixes #14841
## Solution
- Compute the `BufferSlice` size manually and use it for comparison in
`TrackedRenderPass`
## Testing
- The gizmo example does not crash with #14721 (without system ordering),
and `slice` computes the correct size there
---
## Migration Guide
- The `TrackedRenderPass::set_vertex_buffer` function has been modified to
update the vertex buffer when the same buffer with the same offset is
provided but its size has changed. Some existing code may rely on the
previous behavior, which did not update the vertex buffer in this
scenario.
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs
## Solution
- Replace the 32-bit visbuffer texture with a 64-bit visbuffer buffer,
where the high 32 bits encode depth and the low 32 bits encode the
existing cluster + triangle IDs. We can't use 64-bit textures, as
wgpu/naga doesn't support atomic ops on textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (per
triangle) to raster, the culling pass now writes out a buffer of just
cluster IDs (per cluster, so less memory allocated, cheaper to write
out).
- Clusters for software raster are allocated from the left side
- Clusters for hardware raster are allocated in the same buffer, from
the right side
- The buffer size is fixed at MeshletPlugin build time, and should be
set to a reasonable value for your scene (no warning on overflow, and no
good way to determine what value you need outside of renderdoc - I plan
to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is a compute workgroup per-cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: Scanline for clusters with any larger
triangles (still smaller than hardware is good at), and brute-force for
very very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the results, copying how Nanite
works
- On devices with a low max workgroups per dispatch limit, an extra
compute pass is inserted before software raster to convert from a 1d to
2d dispatch (I don't think 3d would ever be necessary).
- I haven't implemented the top-left rule or subpixel precision yet; I'm
leaving that for a future PR, since I get usable results without them for
now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures
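A CPU-flavored sketch of the packed visbuffer write described in the
bullets above (the real code is a shader):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Depth lives in the high 32 bits, so a single atomic max keeps the
// winning fragment's cluster + triangle IDs in the low 32 bits.
fn pack(depth_bits: u32, cluster_and_triangle_id: u32) -> u64 {
    ((depth_bits as u64) << 32) | cluster_and_triangle_id as u64
}

fn write_pixel(visbuffer_texel: &AtomicU64, depth_bits: u32, ids: u32) {
    visbuffer_texel.fetch_max(pack(depth_bits, ids), Ordering::Relaxed);
}
```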
### Misc changes
- Fixed cluster culling importing, but never actually using, the previous
view uniforms when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Split up the gpu_scene module into meshlet_mesh_manager, instance_manager,
and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate material
shade passes, along with some other misc renaming (in the future, these
will even be compute shaders, not actual draw calls)
---
## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)
---------
Co-authored-by: vero <email@atlasdostal.com>
# Objective
When using instancing, 2 `VertexBufferLayout`s are needed, one for
per-vertex and one for per-instance data. Shader locations of all
attributes must not overlap, so one of the layouts needs to start its
locations at an offset. However,
`VertexBufferLayout::from_vertex_formats` will always start locations at
0, requiring manual adjustment, which is currently pretty verbose.
## Solution
Add `VertexBufferLayout::offset_locations`, which adds an offset to all
attribute locations.
Code using this method looks like this:
```rust
VertexState {
shader: BACKBUFFER_SHADER_HANDLE.typed(),
shader_defs: Vec::new(),
entry_point: "vertex".into(),
buffers: vec![
VertexBufferLayout::from_vertex_formats(
VertexStepMode::Vertex,
[VertexFormat::Float32x2],
),
VertexBufferLayout::from_vertex_formats(
VertexStepMode::Instance,
[VertexFormat::Float32x2, VertexFormat::Float32x3],
)
.offset_locations(1),
],
}
```
Alternative solutions include:
- Pass the starting location to `from_vertex_formats` – this is a bit
simpler than my solution here, but most calls don't need an offset, so
they'd always pass 0 there.
- Do nothing and make the user hand-write this.
---
## Changelog
- Add `VertexBufferLayout::offset_locations` to simplify buffer layout
construction when using instancing.
---------
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Adding more features to the `AsBindGroup` proc macro means making the trait
arguments uglier. Downstream implementors of the trait without the proc
macro might want to do different things than our default arguments.
## Solution
Make `AsBindGroup` take an associated `Param` type.
## Migration Guide
`AsBindGroup` now allows the user to specify a `SystemParam` to be used
for creating bind groups.
# Objective
Fixes #14782
## Solution
Enable the lint and fix all resulting warnings (`--fix`). Also tried to
figure out the false positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
can already be merged, resulting in fewer files touched / less
potential for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove its import and adapt
those cases? At the current stage this is just a plain adoption of the
suggestions, in order to have a base to discuss.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
# Objective
Currently, if we use an image with the wrong sampler type in a material,
wgpu panics with an invalid texture format. Turn this into a warning and
fail more gracefully.
## Solution
The expected sampler type is specified in the `AsBindGroup` derive, so we
can just check that the image sampler is what it should be.
I am not totally sure about the mapping of image sampler type to
`#[sampler(type)]`; I assumed:
```
"filtering" => [ TextureSampleType::Float { filterable: true } ],
"non_filtering" => [
TextureSampleType::Float { filterable: false },
TextureSampleType::Sint,
TextureSampleType::Uint,
],
"comparison" => [ TextureSampleType::Depth ],
```
Upgrading to wgpu 22.
This needs `naga_oil` to upgrade first. I've got a fork that compiles but
fails tests, so until that's fixed and the crate is officially
updated/released, this will be blocked.
---------
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Currently `TextureFormat::Astc` can't be programmatically constructed
without importing wgpu in addition to bevy.
# Objective
Allow programmatic construction of `TextureFormat::Astc` with no
additional imports required.
## Solution
Exported the two component enums `AstcBlock` and `AstcChannel` used in
`TextureFormat::Astc` construction.
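A sketch of the now-possible construction, assuming the enums are
re-exported alongside `TextureFormat`:

```rust
use bevy::render::render_resource::{AstcBlock, AstcChannel, TextureFormat};

fn astc_format() -> TextureFormat {
    // No direct wgpu import needed anymore.
    TextureFormat::Astc {
        block: AstcBlock::B4x4,
        channel: AstcChannel::UnormSrgb,
    }
}
```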
## Testing
I did not test this; the change seemed pretty safe. :)
# Objective
- Bevy currently has a lot of invalid intra-doc links, let's fix them!
- Also make CI test them, to avoid future regressions.
- Helps with #1983 (but doesn't fix it, as there could still be explicit
links to docs.rs that are broken)
## Solution
- Make `cargo r -p ci -- doc-check` check fail on warnings (could also
be changed to just some specific lints)
- Manually fix all the warnings (note that in some cases it was unclear
to me what the fix should have been, I'll try to highlight them in a
self-review)
# Objective
The `AssetReader` trait allows customizing the behavior of fetching
bytes for an `AssetPath`, and expects implementors to return `dyn
AsyncRead + AsyncSeek`. This gives implementors of `AssetLoader` great
flexibility to tightly integrate their asset loading behavior with the
asynchronous task system.
However, almost all implementors of `AssetLoader` don't use the async
functionality at all, and just call `AsyncReadExt::read_to_end(&mut
Vec<u8>)`. This is incredibly inefficient, as this method repeatedly
calls `poll_read` on the trait object, filling the vector 32 bytes at a
time. At my work, we have assets that are hundreds of megabytes, which
makes this a meaningful overhead.
## Solution
Turn the `Reader` type alias into an actual trait, with a provided
method `read_to_end`. This provided method should be more efficient than
the existing extension method, as the compiler will know the underlying
type of `Reader` when generating this function, which removes the
repeated dynamic dispatches and allows the compiler to make further
optimizations after inlining. Individual implementors are able to
override the provided implementation -- for simple asset readers that
just copy bytes from one buffer to another, this allows removing a large
amount of overhead from the provided implementation.
Now that `Reader` is an actual trait, I also improved the ergonomics for
implementing `AssetReader`. Currently, implementors are expected to box
their reader and return it as a trait object, which adds unnecessary
boilerplate to implementations. This PR changes that trait method to
return a pseudo trait alias, which allows implementors to return `impl
Reader` instead of `Box<dyn Reader>`. Now, the boilerplate for boxing
occurs in `ErasedAssetReader`.
## Testing
I made identical changes to my company's fork of bevy. Our app, which
makes heavy use of `read_to_end` for asset loading, still worked
properly after this. I am not aware if we have a more systematic way of
testing asset loading for correctness.
---
## Migration Guide
The trait methods `bevy_asset::io::AssetReader::read` (and `read_meta`)
now return an opaque type instead of a boxed trait object. Implementors
of these methods should change the type signatures appropriately:
```rust
impl AssetReader for MyReader {
    // Before
    async fn read<'a>(&'a self, path: &'a Path) -> Result<Box<Reader<'a>>, AssetReaderError> {
        let reader = /* construct a reader */;
        Ok(Box::new(reader) as Box<Reader<'a>>)
    }

    // After
    async fn read<'a>(&'a self, path: &'a Path) -> Result<impl Reader + 'a, AssetReaderError> {
        // construct and return a reader
    }
}
```
`bevy::asset::io::Reader` is now a trait, rather than a type alias for a
trait object. Implementors of `AssetLoader::load` will need to adjust
the method signature accordingly
```rust
impl AssetLoader for MyLoader {
    async fn load<'a>(
        &'a self,
        // Before:
        reader: &'a mut bevy::asset::io::Reader,
        // After:
        reader: &'a mut dyn bevy::asset::io::Reader,
        _: &'a Self::Settings,
        load_context: &'a mut LoadContext<'_>,
    ) -> Result<Self::Asset, Self::Error> {
    }
}
```
Additionally, implementors of `AssetReader` that return a type
implementing `futures_io::AsyncRead` and `AsyncSeek` might need to
explicitly implement `bevy::asset::io::Reader` for that type.
```rust
impl bevy::asset::io::Reader for MyAsyncReadAndSeek {}
```
Currently blocked on https://github.com/gfx-rs/wgpu/issues/5774
# Objective
Update to wgpu 0.20
## Solution
Update to wgpu 0.20 and naga_oil 0.14.
## Testing
Tested a few different examples on linux (vulkan, webgl2, webgpu) and
windows (dx12 + vulkan) and they worked.
---
## Changelog
- Updated to wgpu 0.20. Note that we don't currently support wgpu's new
pipeline overridable constants, as they don't work on web currently and
need some more changes to naga_oil (and are somewhat redundant with
naga_oil's shader defs). See wgpu's changelog for more
https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v0200-2024-04-28
## Migration Guide
TODO
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
The error printed out due to a missing shader file was confusing; this
PR changes the error message.
Fixes #13644
## Solution
I replaced the confusing wording (`... shader is not loaded yet`) with a
clearer explanation (`... shader could not be loaded`).
## Testing
> Did you test these changes? If so, how?
Removing `assets/shaders/game_of_life.wgsl` and running its associated
example now produces the following error:
```
thread '<unnamed>' panicked at examples/shader/compute_shader_game_of_life.rs:233:25:
Initializing assets/shaders/game_of_life.wgsl:
Pipeline could not be compiled because the following shader could not be loaded: AssetId<bevy_render::render_resource::shader::Shader>{ index: 0, generation: 0}
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Encountered a panic in system `bevy_render::renderer::render_system`!
```
I don't think there are any tests expecting the previous error message,
so this change should not break anything.
> Are there any parts that need more testing?
If there was an intent behind the original message, this might need more
attention.
> How can other people (reviewers) test your changes? Is there anything
specific they need to know?
One should be able to preview the changes by running any example after
deleting/renaming their associated shader(s).
> If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
N/A
# Objective
- Other render resources have a convenient `.binding()` helper function
to get the binding to the resource
## Solution
- Add the same thing to `BufferVec`, `RawBufferVec`, and
`UninitBufferVec`
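For example (a sketch mirroring the existing helpers on the other buffer
types):

```rust
use bevy::render::render_resource::{BindingResource, BufferVec};

fn vertex_data_binding(buffer: &BufferVec<u32>) -> Option<BindingResource> {
    // Same shape as the `.binding()` helper on `UniformBuffer` and friends.
    buffer.binding()
}
```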
# Objective
Fixes #12966
## Solution
Renamed the `multi-threaded` feature to `multi_threaded` to match snake_case.
## Migration Guide
The Bevy feature `multi-threaded` is now named `multi_threaded`.
# Objective
- `DynamicUniformBuffer` tries to create a buffer as soon as the changed
flag is set to true. This doesn't work correctly when the buffer wasn't
already created: it currently causes a crash, because it tries to
create a buffer of size 0 when the flag is set but there's no buffer yet.
## Solution
- Don't create a changed buffer until there's data that needs to be
written to a buffer.
## Testing
- Run `cargo run --example scene_viewer` and see that it doesn't crash
anymore.
Fixes #13235
# Objective
- Add auto exposure/eye adaptation to the bevy render pipeline.
- Support features that users might expect from other engines:
- Metering masks
- Compensation curves
- Smooth exposure transitions
This PR is based on an implementation I already built for a personal
project before https://github.com/bevyengine/bevy/pull/8809 was
submitted, so I wasn't able to adopt that PR in the proper way. I've
still drawn inspiration from it, so @fintelia should be credited as
well.
## Solution
An auto exposure compute shader builds a 64-bin histogram of the scene's
luminance, and then adjusts the exposure based on that histogram. Using
a histogram allows the system to ignore outliers like shadows and
specular highlights, and it allows giving more weight to certain areas
based on a mask.
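A CPU-flavored sketch of the histogram idea (the real version is a
compute shader; the bin count, range, and percentile cutoffs here are
illustrative):

```rust
// Build a 64-bin histogram of log2 luminance, then average only the bins
// between two percentile cutoffs, so outliers (deep shadows, specular
// highlights) don't drag the exposure around.
fn metered_average_log_luminance(luminances: &[f32], low: f32, high: f32) -> f32 {
    const BINS: usize = 64;
    const MIN_LOG2: f32 = -8.0;
    const MAX_LOG2: f32 = 8.0;

    let mut bins = [0u32; BINS];
    for &lum in luminances {
        let t = (lum.max(1e-4).log2() - MIN_LOG2) / (MAX_LOG2 - MIN_LOG2);
        bins[(t.clamp(0.0, 1.0) * (BINS - 1) as f32) as usize] += 1;
    }

    let total: u32 = bins.iter().sum();
    let lo_cut = (total as f32 * low) as u32;
    let hi_cut = (total as f32 * high) as u32;
    let (mut seen, mut sum, mut count) = (0u32, 0.0f32, 0u32);
    for (i, &n) in bins.iter().enumerate() {
        let start = seen;
        seen += n;
        // Portion of this bin that falls inside the percentile window.
        let kept = seen.min(hi_cut).saturating_sub(start.max(lo_cut));
        let bin_log2 = MIN_LOG2 + (i as f32 / (BINS - 1) as f32) * (MAX_LOG2 - MIN_LOG2);
        sum += kept as f32 * bin_log2;
        count += kept;
    }
    if count == 0 { 0.0 } else { sum / count as f32 }
}
```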
---
## Changelog
- Added: `AutoExposure` plugin that allows adjusting a camera's exposure
based on its scene's luminance.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This is an adoption of #12670 plus some documentation fixes. See that PR
for more details.
---
## Changelog
* Renamed `BufferVec` to `RawBufferVec` and added a new `BufferVec`
type.
## Migration Guide
`BufferVec` has been renamed to `RawBufferVec` and a new similar type
has taken the `BufferVec` name.
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
- I've been using the `texture_binding_array` example as a base to use
multiple textures in meshes in my program
- I only realised once I was deep in render code that these helpers
existed to create layouts
- I wish I had known they existed earlier, because the alternative (filling
in every struct field) is so much more verbose
## Solution
- Use `BindGroupLayoutEntries::with_indices` to teach users that the
helper exists
- Also fix a typo: it should be `texture_2d`.
## Alternatives considered
- Just leave it as is to teach users about every single struct field
- However, leaving it as is means users write roughly 29 lines versus
roughly 2 lines for 2 entries, and I'd prefer the 2-line approach
## Testing
Ran the example locally and compared before and after.
Before:
<img width="1280" alt="image"
src="https://github.com/bevyengine/bevy/assets/135186256/f5897210-2560-4110-b92b-85497be9023c">
After:
<img width="1279" alt="image"
src="https://github.com/bevyengine/bevy/assets/135186256/8d13a939-b1ce-4a49-a9da-0b1779c8cb6a">
Co-authored-by: mgi388 <>
# Objective
- Update glam version requirement to latest version.
## Solution
- Updated `glam` version requirement from 0.25 to 0.27.
- Updated `encase` and `encase_derive_impl` version requirement from 0.7
to 0.8.
- Updated `hexasphere` version requirement from 10.0 to 12.0.
- Breaking changes from glam changelog:
- [0.26.0] Minimum Supported Rust Version bumped to 1.68.2 for `impl
From<bool> for {f32, f64}` support.
- [0.27.0] Changed the implementation of the vector `fract` method to match
the Rust implementation instead of the GLSL implementation, that is
`self - self.trunc()` instead of `self - self.floor()`.
---
## Migration Guide
- When using `glam` exports, keep in mind that the vector `fract()` method
now matches the Rust implementation (that is, `self - self.trunc()` instead
of `self - self.floor()`). If you want to use the GLSL implementation,
you should now use `fract_gl()`.
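A quick worked example of the difference:

```rust
use bevy::math::Vec2; // re-exported from glam

fn main() {
    let v = Vec2::new(-1.25, 1.25);
    // Rust-style fract: self - self.trunc()
    assert_eq!(v.fract(), Vec2::new(-0.25, 0.25));
    // GLSL-style fract: self - self.floor()
    assert_eq!(v.fract_gl(), Vec2::new(0.75, 0.25));
}
```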
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This commit implements opt-in GPU frustum culling, built on top of the
infrastructure in https://github.com/bevyengine/bevy/pull/12773. To
enable it on a camera, add the `GpuCulling` component to it. To
additionally disable CPU frustum culling, add the `NoCpuCulling`
component. Note that adding `GpuCulling` without `NoCpuCulling`
*currently* does nothing useful. The reason why `GpuCulling` doesn't
automatically imply `NoCpuCulling` is that I intend to follow this patch
up with GPU two-phase occlusion culling, and CPU frustum culling plus
GPU occlusion culling seems like a very commonly-desired mode.
Adding the `GpuCulling` component to a view puts that view into
*indirect mode*. This mode makes all drawcalls indirect, relying on the
mesh preprocessing shader to allocate instances dynamically. In indirect
mode, the `PreprocessWorkItem` `output_index` points not to a
`MeshUniform` instance slot but instead to a set of `wgpu`
`IndirectParameters`, from which it allocates an instance slot
dynamically if frustum culling succeeds. Batch building has been updated
to allocate and track indirect parameter slots, and the AABBs are now
supplied to the GPU as `MeshCullingData`.
A small amount of code relating to the frustum culling has been borrowed
from meshlets and moved into `maths.wgsl`. Note that standard Bevy
frustum culling uses AABBs, while meshlets use bounding spheres; this
means that not as much code can be shared as one might think.
This patch doesn't provide any way to perform GPU culling on shadow
maps, to avoid making this patch bigger than it already is. That can be
a followup.
## Changelog
### Added
* Frustum culling can now optionally be done on the GPU. To enable it,
add the `GpuCulling` component to a camera.
* To disable CPU frustum culling, add `NoCpuCulling` to a camera. Note
that `GpuCulling` doesn't automatically imply `NoCpuCulling`.
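A sketch of opting in, assuming the components are exported from
`bevy::render::view`:

```rust
use bevy::prelude::*;
use bevy::render::view::{GpuCulling, NoCpuCulling};

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Enable GPU frustum culling for this view...
        GpuCulling,
        // ...and skip the CPU frustum culling pass as well.
        NoCpuCulling,
    ));
}
```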
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0
# Objective
- Adds per-object motion blur to the core 3d pipeline. This is a common
effect used in games and other simulations.
- Partially resolves #4710
## Solution
- This is a post-process effect that uses the depth and motion vector
buffers to estimate per-object motion blur. The implementation is
combined from knowledge from multiple papers and articles. The approach
itself, and the shader are quite simple. Most of the effort was in
wiring up the bevy rendering plumbing, and properly specializing for HDR
and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is
required. I've extracted this code from #9000. This is because the
prepass buffers are multisampled, and require accessing with
`textureLoad` as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.
## Future Improvements
- While this approach does have limitations, it's one of the most
commonly used, and is much better than camera motion blur, which does
not consider object velocity. For example, this implementation allows a
dolly to track an object, and that object will remain unblurred while
the background is blurred. The biggest issue with this implementation is
that blur is constrained to the boundaries of objects which results in
hard edges. There are solutions to this by either dilating the object or
the motion vector buffer, or by taking a different approach such as
https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be
replaced with a blue noise texture lookup or similar, however after
playing with the parameters, it gives quite nice results with 4 samples,
and is significantly better than the artifacts generated when not
jittering.
---
## Changelog
- Added: per-object motion blur. This can be enabled and configured by
adding the `MotionBlurBundle` to a camera entity.
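A sketch of the setup (the exact field names are illustrative):

```rust
use bevy::core_pipeline::motion_blur::{MotionBlur, MotionBlurBundle};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        MotionBlurBundle {
            motion_blur: MotionBlur {
                shutter_angle: 1.0, // fraction of a frame the "shutter" is open
                samples: 4,
                ..default()
            },
            ..default()
        },
    ));
}
```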
---------
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
Currently, `MeshUniform`s are rather large: 160 bytes. They're also
somewhat expensive to compute, because they involve taking the inverse
of a 3x4 matrix. Finally, if a mesh is present in multiple views, that
mesh will have a separate `MeshUniform` for each and every view, which
is wasteful.
This commit fixes these issues by introducing the concept of a *mesh
input uniform* and adding a *mesh uniform building* compute shader pass.
The `MeshInputUniform` is simply the minimum amount of data needed for
the GPU to compute the full `MeshUniform`. Most of this data is just the
transform and is therefore only 64 bytes. `MeshInputUniform`s are
computed during the *extraction* phase, much like skins are today, in
order to avoid needlessly copying transforms around on CPU. (In fact,
the render app has been changed to only store the translation of each
mesh; it no longer cares about any other part of the transform, which is
stored only on the GPU and the main world.) Before rendering, the
`build_mesh_uniforms` pass runs to expand the `MeshInputUniform`s to the
full `MeshUniform`.
The mesh uniform building pass does the following, all on GPU:
1. Copy the appropriate fields of the `MeshInputUniform` to the
`MeshUniform` slot. If a single mesh is present in multiple views, this
effectively duplicates it into each view.
2. Compute the inverse transpose of the model transform, used for
transforming normals.
3. If applicable, copy the mesh's transform from the previous frame for
TAA. To support this, we double-buffer the `MeshInputUniform`s over two
frames and swap the buffers each frame. The `MeshInputUniform`s for the
current frame contain the index of that mesh's `MeshInputUniform` for
the previous frame.
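A toy sketch of the double-buffering scheme (field layout illustrative):

```rust
// Minimum data needed to rebuild a full MeshUniform on the GPU.
struct MeshInputUniform {
    transform: [f32; 12],      // 3x4 affine transform, ~64 bytes
    previous_input_index: u32, // this mesh's slot in last frame's buffer
}

struct DoubleBuffered {
    frames: [Vec<MeshInputUniform>; 2],
    current: usize,
}

impl DoubleBuffered {
    // Called once per frame: last frame's "current" becomes "previous".
    fn swap(&mut self) {
        self.current = 1 - self.current;
    }

    fn current_mut(&mut self) -> &mut Vec<MeshInputUniform> {
        &mut self.frames[self.current]
    }

    // Read side used when copying the previous frame's transform for TAA.
    fn previous(&self) -> &[MeshInputUniform] {
        &self.frames[1 - self.current]
    }
}
```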
This commit produces wins in virtually every CPU part of the pipeline:
`extract_meshes`, `queue_material_meshes`,
`batch_and_prepare_render_phase`, and especially
`write_batched_instance_buffer` are all faster. Shrinking the amount of
CPU data that has to be shuffled around speeds up the entire rendering
process.
| Benchmark | This branch | `main` | Speedup |
|------------------------|-------------|---------|---------|
| `many_cubes -nfc` | 17.259 | 24.529 | 42.12% |
| `many_cubes -nfc -vpi` | 302.116 | 312.123 | 3.31% |
| `many_foxes` | 3.227 | 3.515 | 8.92% |
Because mesh uniform building requires a compute shader, and WebGL 2 has
no compute shaders, the existing CPU mesh uniform building code has been
left as-is. Many types now have both CPU mesh uniform building and GPU
mesh uniform building modes. Developers can opt into the old CPU mesh
uniform building by setting the `use_gpu_uniform_builder` option on
`PbrPlugin` to `false`.
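A sketch of opting back into the CPU path, using the flag named above:

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(bevy::pbr::PbrPlugin {
            // Fall back to CPU mesh uniform building (e.g. for WebGL 2).
            use_gpu_uniform_builder: false,
            ..default()
        }))
        .run();
}
```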
Below are graphs of the CPU portions of `many-cubes
--no-frustum-culling`. Yellow is this branch, red is `main`.
`extract_meshes`:
It's notable that we get a small win even though we're now writing to a
GPU buffer.
`queue_material_meshes`:
There's a bit of a regression here; not sure what's causing it. In any
case it's very outweighed by the other gains.
`batch_and_prepare_render_phase`:
There's a huge win here, enough to make batching basically drop off the
profile.
`write_batched_instance_buffer`:
There's a massive improvement here, as expected. Note that a lot of it
simply comes from the fact that `MeshInputUniform` is `Pod`. (This isn't
a maintainability problem in my view because `MeshInputUniform` is so
simple: just 16 tightly-packed words.)
## Changelog
### Added
* Per-mesh instance data is now generated on GPU with a compute shader
instead of CPU, resulting in rendering performance improvements on
platforms where compute shaders are supported.
## Migration guide
* Custom render phases now need multiple systems beyond just
`batch_and_prepare_render_phase`. Code that was previously creating
custom render phases should now add a `BinnedRenderPhasePlugin` or
`SortedRenderPhasePlugin` as appropriate instead of directly adding
`batch_and_prepare_render_phase`.
# Objective
- Replace `RenderMaterials` / `RenderMaterials2d` / `RenderUiMaterials`
with `RenderAssets` to enable implementing changes to one thing,
`RenderAssets`, that applies to all use cases rather than duplicating
changes everywhere for multiple things that should be one thing.
- Adopts #8149
## Solution
- Make RenderAsset generic over the destination type rather than the
source type as in #8149
- Use `RenderAssets<PreparedMaterial<M>>` etc for render materials
---
## Changelog
- Changed:
- The `RenderAsset` trait is now implemented on the destination type.
Its `SourceAsset` associated type refers to the type of the source
asset.
- `RenderMaterials`, `RenderMaterials2d`, and `RenderUiMaterials` have
been replaced by `RenderAssets<PreparedMaterial<M>>` and similar.
## Migration Guide
- `RenderAsset` is now implemented for the destination type rather than
the source asset type. The source asset type is now the `RenderAsset`
trait's `SourceAsset` associated type.
# Objective
- Add a way to easily get the IDs of currently waiting pipelines.
## Solution
- Added a method to get the `CachedPipelineId`s of waiting pipelines.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
# Objective
Since `BufferVec` was first introduced, `bytemuck` has added additional
traits with fewer restrictions than `Pod`. Within `BufferVec`, we only
rely on the constraints of `bytemuck::cast_slice` to a `u8` slice, which
now only requires `T: NoUninit`, a strict superset of `Pod`
types.
## Solution
Change out the `Pod` generic type constraint with `NoUninit`. Also
taking the opportunity to substitute `cast_slice` with
`must_cast_slice`, which avoids a runtime panic in place of a compile
time failure if `T` cannot be used.
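A sketch of the difference, assuming `bytemuck` with the `derive` and
`must_cast` features enabled:

```rust
use bytemuck::{must_cast_slice, NoUninit};

// `NoUninit` only promises the type is safe to *read* as raw bytes; unlike
// `Pod`, it doesn't require every bit pattern to be a valid value.
#[derive(Clone, Copy, NoUninit)]
#[repr(C)]
struct Instance {
    position: [f32; 3],
    scale: f32, // 16 bytes total, no padding
}

fn as_bytes(instances: &[Instance]) -> &[u8] {
    // Size/alignment requirements are checked at compile time, so an
    // unsuitable `T` (e.g. a ZST) fails to build instead of panicking.
    must_cast_slice(instances)
}
```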
---
## Changelog
Changed: `BufferVec` now supports working with types containing
`NoUninit` but not `Pod` members.
Changed: `BufferVec` will now fail to compile if used with a type that
cannot be safely read from. Most notably, this includes ZSTs, which
would previously always panic at runtime.
This commit makes the following optimizations:
## `MeshPipelineKey`/`BaseMeshPipelineKey` split
`MeshPipelineKey` has been split into `BaseMeshPipelineKey`, which lives
in `bevy_render` and `MeshPipelineKey`, which lives in `bevy_pbr`.
Conceptually, `BaseMeshPipelineKey` is a superclass of
`MeshPipelineKey`. For `BaseMeshPipelineKey`, the bits start at the
highest (most significant) bit and grow downward toward the lowest bit;
for `MeshPipelineKey`, the bits start at the lowest bit and grow upward
toward the highest bit. This prevents them from colliding.
The goal of this is to avoid having to reassemble bits of the pipeline
key for every mesh every frame. Instead, we can just use a bitwise or
operation to combine the pieces that make up a `MeshPipelineKey`.
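An illustration of the split (the bit assignments here are made up):

```rust
// BaseMeshPipelineKey bits grow down from the most significant bit...
const BASE_HDR: u32 = 1 << 31;
const BASE_MSAA: u32 = 1 << 30;
// ...while MeshPipelineKey bits grow up from the least significant bit.
const MESH_SKINNED: u32 = 1 << 0;
const MESH_MORPHED: u32 = 1 << 1;

// Because the two halves occupy disjoint bit ranges, combining them is a
// single OR per mesh, instead of re-packing every bit every frame.
fn combined_key(base_bits: u32, mesh_bits: u32) -> u32 {
    base_bits | mesh_bits
}
```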
## `specialize_slow`
Previously, all of `specialize()` was marked as `#[inline]`. This
bloated `queue_material_meshes` unnecessarily, as a large chunk of it
ended up being a slow path that was rarely hit. This commit refactors
the function to move the slow path to `specialize_slow()`.
Together, these two changes shave about 5% off `queue_material_meshes`.
## Migration Guide
- The `primitive_topology` field on `GpuMesh` is now an accessor method:
`GpuMesh::primitive_topology()`.
- For performance reasons, `MeshPipelineKey` has been split into
`BaseMeshPipelineKey`, which lives in `bevy_render`, and
`MeshPipelineKey`, which lives in `bevy_pbr`. These two should be
combined with bitwise-or to produce the final `MeshPipelineKey`.
Today, we sort all entities added to all phases, even the phases that
don't strictly need sorting, such as the opaque and shadow phases. This
results in a performance loss because our `PhaseItem`s are rather large
in memory, so sorting is slow. Additionally, determining the boundaries
of batches is an O(n) process.
This commit instead makes Bevy place phase items into *bins*
keyed by *bin keys*, which have the invariant that everything in the
same bin is potentially batchable. This makes determining batch
boundaries O(1), because everything in the same bin can be batched.
Instead of sorting each entity, we now sort only the bin keys. This
drops the sorting time to near-zero on workloads with few bins like
`many_cubes --no-frustum-culling`. Memory usage is improved too, with
batch boundaries and dynamic indices now implicit instead of explicit.
The improved memory usage results in a significant win even on
unbatchable workloads like `many_cubes --no-frustum-culling
--vary-material-data-per-instance`, presumably due to cache effects.
Not all phases can be binned; some, such as transparent and transmissive
phases, must still be sorted. To handle this, this commit splits
`PhaseItem` into `BinnedPhaseItem` and `SortedPhaseItem`. Most of the
logic that today deals with `PhaseItem`s has been moved to
`SortedPhaseItem`. `BinnedPhaseItem` has the new logic.
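A toy sketch of the binning idea (the key fields are illustrative):

```rust
use std::collections::BTreeMap;

// Everything sharing a key is potentially batchable, so batch boundaries
// fall out of the bin boundaries for free.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct BinKey {
    pipeline: u32,
    material: u32,
}

#[derive(Default)]
struct BinnedPhase {
    // A BTreeMap keeps the keys ordered, so only keys get "sorted";
    // the per-entity items never do.
    bins: BTreeMap<BinKey, Vec<u32 /* entity */>>,
}

impl BinnedPhase {
    fn add(&mut self, key: BinKey, entity: u32) {
        self.bins.entry(key).or_default().push(entity);
    }

    // Each bin is one batch: finding boundaries is O(1), not an O(n) scan.
    fn batches(&self) -> impl Iterator<Item = (&BinKey, &[u32])> {
        self.bins.iter().map(|(key, items)| (key, items.as_slice()))
    }
}
```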
Frame time results (in ms/frame) are as follows:
| Benchmark | `binning` | `main` | Speedup |
| ------------------------ | --------- | ------- | ------- |
| `many_cubes -nfc -vpi` | 232.179 | 312.123 | 34.43% |
| `many_cubes -nfc` | 25.874 | 30.117 | 16.40% |
| `many_foxes` | 3.276 | 3.515 | 7.30% |
(`-nfc` is short for `--no-frustum-culling`; `-vpi` is short for
`--vary-per-instance`.)
---
## Changelog
### Changed
* Render phases have been split into binned and sorted phases. Binned
phases, such as the common opaque phase, achieve improved CPU
performance by avoiding the sorting step.
## Migration Guide
- `PhaseItem` has been split into `BinnedPhaseItem` and
`SortedPhaseItem`. If your code has custom `PhaseItem`s, you will need
to migrate them to one of these two types. `SortedPhaseItem` requires
the fewest code changes, but you may want to pick `BinnedPhaseItem` if
your phase doesn't require sorting, as that enables higher performance.
## Tracy graphs
`many-cubes --no-frustum-culling`, `main` branch:
<img width="1064" alt="Screenshot 2024-03-12 180037"
src="https://github.com/bevyengine/bevy/assets/157897/e1180ce8-8e89-46d2-85e3-f59f72109a55">
`many-cubes --no-frustum-culling`, this branch:
<img width="1064" alt="Screenshot 2024-03-12 180011"
src="https://github.com/bevyengine/bevy/assets/157897/0899f036-6075-44c5-a972-44d95895f46c">
You can see that `batch_and_prepare_binned_render_phase` is a much
smaller fraction of the time. Zooming in on that function, with yellow
being this branch and red being `main`, we see:
<img width="1064" alt="Screenshot 2024-03-12 175832"
src="https://github.com/bevyengine/bevy/assets/157897/0dfc8d3f-49f4-496e-8825-a66e64d356d0">
The binning happens in `queue_material_meshes`. Again with yellow being
this branch and red being `main`:
<img width="1064" alt="Screenshot 2024-03-12 175755"
src="https://github.com/bevyengine/bevy/assets/157897/b9b20dc1-11c8-400c-a6cc-1c2e09c1bb96">
We can see that there is a small regression in `queue_material_meshes`
performance, but it's not nearly enough to outweigh the large gains in
`batch_and_prepare_binned_render_phase`.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
# Objective
Fixes #12727. All parts that `PersistentGpuBuffer` interacts with should
be 100% safe both on the CPU and the GPU: `Queue::write_buffer_with`
zeroes out the slice being written to when uploading to the GPU, and
all slice writes are bounds-checked on the CPU side.
## Solution
Make `PersistentGpuBufferable` a safe trait. Enforce its correct
implementation via assertions. Re-enable `forbid(unsafe_code)` on
`bevy_pbr`.
# Objective
This gets Bevy building on Wasm when the `atomics` flag is enabled. This
does not yet multithread Bevy itself, but it allows Bevy users to use a
crate like `wasm_thread` to spawn their own threads and manually
parallelize work. This is a first step towards resolving #4078. Also
fixes #9304.
This provides a foothold so that Bevy contributors can begin to think
about multithreaded Wasm's constraints and Bevy can work towards changes
to get the engine itself multithreaded.
Some flags need to be set on the Rust compiler when compiling for Wasm
multithreading. Here's what my build script looks like, with the correct
flags set, to test out Bevy examples on web:
```bash
set -e
RUSTFLAGS='-C target-feature=+atomics,+bulk-memory,+mutable-globals' \
cargo build --example breakout --target wasm32-unknown-unknown -Z build-std=std,panic_abort --release
wasm-bindgen --out-name wasm_example \
--out-dir examples/wasm/target \
--target web target/wasm32-unknown-unknown/release/examples/breakout.wasm
devserver --header Cross-Origin-Opener-Policy='same-origin' --header Cross-Origin-Embedder-Policy='require-corp' --path examples/wasm
```
A few notes:
1. `cpal` crashes immediately when the `atomics` flag is set. That is
patched in https://github.com/RustAudio/cpal/pull/837, but not yet in
the latest crates.io release.
That can be temporarily worked around by patching Cpal like so:
```toml
[patch.crates-io]
cpal = { git = "https://github.com/RustAudio/cpal" }
```
2. When testing out `wasm_thread` you need to enable the `es_modules`
feature.
## Solution
The largest obstacle to compiling Bevy with `atomics` on web is that
`wgpu` types are _not_ Send and Sync. Longer term Bevy will need an
approach to handle that, but in the near term Bevy is already configured
to be single-threaded on web.
Therefore, it is enough to wrap `wgpu` types in a
`send_wrapper::SendWrapper` that _is_ Send / Sync, but panics if
accessed off the `wgpu` thread.
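A sketch of the wrapper approach (Bevy's actual type and cfg conditions
may differ):

```rust
// On Wasm with atomics, wrap non-Send wgpu types so they satisfy
// Send + Sync, panicking if accessed off the wgpu thread.
#[cfg(all(target_arch = "wasm32", target_feature = "atomics"))]
pub type WgpuWrapper<T> = send_wrapper::SendWrapper<T>;

// Everywhere else the wrapper is a no-op alias.
#[cfg(not(all(target_arch = "wasm32", target_feature = "atomics")))]
pub type WgpuWrapper<T> = T;
```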
---
## Changelog
- `wgpu` types that are not `Send` are wrapped in
`send_wrapper::SendWrapper` on Wasm + 'atomics'
- CommandBuffers are not generated in parallel on Wasm + 'atomics'
## Questions
- Bevy should probably add CI checks to make sure this doesn't regress.
Should that go in this PR or a separate PR? **Edit:** Added checks to
build Wasm with atomics
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: daxpedda <daxpedda@gmail.com>
Co-authored-by: François <francois.mockers@vleue.com>
# Objective
Simplify implementing some asset traits without `Box::pin(async move {})`
shenanigans.
Fixes (in part) https://github.com/bevyengine/bevy/issues/11308
## Solution
Use async-fn in traits where possible. Traits with return-position
`impl Trait` are not object safe, however, and as `AssetReader` and
`AssetWriter` are both used with dynamic dispatch, you need a boxed
version of these futures anyway.
In the future, Rust is [adding proc macros to generate these traits
automatically](https://blog.rust-lang.org/2023/12/21/async-fn-rpit-in-traits.html),
and at some point dyn traits should 'just work'. Until then... this
seemed like the right approach given that more ErasedXXX types already
exist, but no clue if there are plans here! Especially since these are
public now, it's a bit of an unfortunate API, and means this is a
breaking change.
In theory this saves some performance when these traits are used with
static dispatch, but, seems like most code paths go through dynamic
dispatch, which boxes anyway.
I also suspect a bunch of the lifetime annotations on these function
could be simplified now as the BoxedFuture was often the only thing
returned which needed a lifetime annotation, but I'm not touching that
for now as traits + lifetimes can be so tricky.
This is a revival of
[pull/11362](https://github.com/bevyengine/bevy/pull/11362) after a
spectacular merge f*ckup, with updates to the latest Bevy. Just to recap
some discussion:
- Overall this seems like a win for code quality, especially when
implementing these traits, but a loss for having to deal with ErasedXXX
variants.
- `ConditionalSend` was the preferred name for the trait that might be
Send, to deal with wasm platforms.
- When reviewing be sure to disable whitespace difference, as that's 95%
of the PR.
## Changelog
- AssetReader, AssetWriter, AssetLoader, AssetSaver and Process now use
async-fn in traits rather than boxed futures.
## Migration Guide
- Custom implementations of `AssetReader`, `AssetWriter`, `AssetLoader`,
`AssetSaver` and `Process` should switch to async fn rather than returning
a `bevy_utils::BoxedFuture`.
- Simultaneously, to use dynamic dispatch on these traits, you should
instead use `dyn ErasedXXX`.
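A self-contained illustration of the shape of the change (not Bevy's
actual traits):

```rust
use std::future::Future;
use std::pin::Pin;

type BoxedFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

// Before: every implementor had to box and pin its future.
trait OldSaver {
    fn save<'a>(&'a self, bytes: &'a [u8]) -> BoxedFuture<'a, usize>;
}

// After: a plain async fn (stable since Rust 1.75).
trait NewSaver {
    async fn save(&self, bytes: &[u8]) -> usize;
}

struct Saver;

impl NewSaver for Saver {
    async fn save(&self, bytes: &[u8]) -> usize {
        bytes.len()
    }
}
```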
# Objective
Make bevy_utils less of a compilation bottleneck. Tackle #11478.
## Solution
* Take all of the directly reexported dependencies and move them to
where they're actually used.
* Remove the UUID utilities that have gone unused since `TypePath` took
over for `TypeUuid`.
* There was also an extraneous bytemuck dependency on `bevy_core` that
has not been used for a long time (since `encase` became the primary way
to prepare GPU buffers).
* Remove the `all_tuples` macro reexport from bevy_ecs since it's
accessible from `bevy_utils`.
---
## Changelog
Removed: Many of the reexports from bevy_utils (petgraph, uuid, nonmax,
smallvec, and thiserror).
Removed: bevy_core's reexports of bytemuck.
## Migration Guide
bevy_utils' reexports of petgraph, uuid, nonmax, smallvec, and thiserror
have been removed.
bevy_core's reexports of bytemuck's types have been removed.
Add them as dependencies in your own crate instead.
# Objective
Fixes #11298. Make the use of bevy_log vs bevy_utils::tracing more
consistent.
## Solution
Replace all uses of bevy_log's logging macros with the reexport from
bevy_utils. Remove bevy_log as a dependency where it's no longer needed.
Ideally we should just be using tracing directly, but given that all of
these crates are already using bevy_utils, this likely isn't that great
of a loss right now.
# Objective
While mucking around with batch_and_prepare systems, it became apparent
that `GpuArrayBufferIndex::index` doesn't need to be a NonMaxU32.
## Solution
Replace it with a normal u32.
This likely has some potential perf benefit by avoiding panics and the
NOT operations, but I haven't been able to find any substantial gains,
so this is primarily for code quality.
---
## Changelog
Changed: `GpuArrayBufferIndex::index` is now a u32.
## Migration Guide
`GpuArrayBufferIndex::index` is now a `u32` instead of a `NonMaxU32`.
Remove any calls to `NonMaxU32::get` on the member.
Although we cached hashes of `MeshVertexBufferLayout`, we were paying
the cost of `PartialEq` on `InnerMeshVertexBufferLayout` for every
entity, every frame. This patch changes that logic to place
`MeshVertexBufferLayout`s in `Arc`s so that they can be compared and
hashed by pointer. This results in a 28% speedup in the
`queue_material_meshes` phase of `many_cubes`, with frustum culling
disabled.
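A sketch of the pointer-identity trick (types simplified):

```rust
use std::hash::{Hash, Hasher};
use std::sync::Arc;

struct MeshVertexBufferLayout {/* attributes, strides, ... */}

// Compare and hash by Arc pointer instead of by the (expensive) contents.
#[derive(Clone)]
struct MeshVertexBufferLayoutRef(Arc<MeshVertexBufferLayout>);

impl PartialEq for MeshVertexBufferLayoutRef {
    fn eq(&self, other: &Self) -> bool {
        Arc::ptr_eq(&self.0, &other.0)
    }
}

impl Eq for MeshVertexBufferLayoutRef {}

impl Hash for MeshVertexBufferLayoutRef {
    fn hash<H: Hasher>(&self, state: &mut H) {
        (Arc::as_ptr(&self.0) as usize).hash(state);
    }
}
```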
Additionally, this patch contains two minor changes:
1. This commit flattens the specialized mesh pipeline cache to one level
of hash tables instead of two. This saves a hash lookup.
2. The example `many_cubes` has been given a `--no-frustum-culling`
flag, to aid in benchmarking.
See the Tracy profile:
<img width="1064" alt="Screenshot 2024-02-29 144406"
src="https://github.com/bevyengine/bevy/assets/157897/18632f1d-1fdd-4ac7-90ed-2d10306b2a1e">
## Migration guide
* Duplicate `MeshVertexBufferLayout`s are now combined into a single
object, `MeshVertexBufferLayoutRef`, which contains an
atomically-reference-counted pointer to the layout. Code that was using
`MeshVertexBufferLayout` may need to be updated to use
`MeshVertexBufferLayoutRef` instead.
# Objective
- As part of the migration process we need to a) see the end effect of
the migration on user ergonomics b) check for serious perf regressions
c) actually migrate the code
- To accomplish this, I'm going to attempt to migrate all of the
remaining user-facing usages of `LegacyColor` in one PR, being careful
to keep a clean commit history.
- Fixes#12056.
## Solution
I've chosen to use the polymorphic `Color` type as our standard
user-facing API.
- [x] Migrate `bevy_gizmos`.
- [x] Take `impl Into<Color>` in all `bevy_gizmos` APIs
- [x] Migrate sprites
- [x] Migrate UI
- [x] Migrate `ColorMaterial`
- [x] Migrate `MaterialMesh2D`
- [x] Migrate fog
- [x] Migrate lights
- [x] Migrate StandardMaterial
- [x] Migrate wireframes
- [x] Migrate clear color
- [x] Migrate text
- [x] Migrate gltf loader
- [x] Register color types for reflection
- [x] Remove `LegacyColor`
- [x] Make sure CI passes
Incidental improvements to ease migration:
- added `Color::srgba_u8`, `Color::srgba_from_array` and friends
- added `set_alpha`, `is_fully_transparent` and `is_fully_opaque` to the
`Alpha` trait
- add and immediately deprecate (lol) `Color::rgb` and friends in favor
of more explicit and consistent `Color::srgb`
- standardized on white and black for most example text colors
- added vector field traits to `LinearRgba`: ~~`Add`, `Sub`,
`AddAssign`, `SubAssign`,~~ `Mul<f32>` and `Div<f32>`. Multiplications
and divisions do not scale alpha. `Add` and `Sub` have been cut from
this PR.
- added `LinearRgba` and `Srgba` `RED/GREEN/BLUE`
- added `LinearRgba::to_f32_array` and `LinearRgba::to_u32`
## Migration Guide
Bevy's color types have changed! Wherever you used a
`bevy::render::Color`, a `bevy::color::Color` is used instead.
These are quite similar! Both are enums storing a color in a specific
color space (or to be more precise, using a specific color model).
However, each of the different color models now has its own type.
TODO...
- `Color::rgba`, `Color::rgb`, `Color::rgba_u8`, `Color::rgb_u8`,
`Color::rgb_from_array` are now `Color::srgba`, `Color::srgb`,
`Color::srgba_u8`, `Color::srgb_u8` and `Color::srgb_from_array`.
- `Color::set_a` and `Color::a` are now `Color::set_alpha` and
`Color::alpha`. These are part of the `Alpha` trait in `bevy_color`.
- `Color::is_fully_transparent` is now part of the `Alpha` trait in
`bevy_color`
- `Color::r`, `Color::set_r`, `Color::with_r` and the equivalents for
`g`, `b` `h`, `s` and `l` have been removed due to causing silent
relatively expensive conversions. Convert your `Color` into the desired
color space, perform your operations there, and then convert it back
into a polymorphic `Color` enum.
- `Color::hex` is now `Srgba::hex`. Call `.into` or construct a
`Color::Srgba` variant manually to convert it.
- `WireframeMaterial`, `ExtractedUiNode`, `ExtractedDirectionalLight`,
`ExtractedPointLight`, `ExtractedSpotLight` and `ExtractedSprite` now
store a `LinearRgba`, rather than a polymorphic `Color`
- `Color::rgb_linear` and `Color::rgba_linear` are now
`Color::linear_rgb` and `Color::linear_rgba`
- The various CSS color constants are no longer stored directly on
`Color`. Instead, they're defined in the `Srgba` color space, and
accessed via `bevy::color::palettes::css`. Call `.into()` on them to
convert them into a `Color` for quick debugging use, and consider using
the much prettier `tailwind` palette for prototyping.
- The `LIME_GREEN` color has been renamed to `LIMEGREEN` to comply with
the standard naming.
- Vector field arithmetic operations on `Color` (add, subtract, multiply
and divide by a f32) have been removed. Instead, convert your colors
into `LinearRgba` space, and perform your operations explicitly there.
This is particularly relevant when working with emissive or HDR colors,
whose color channel values are routinely outside of the ordinary 0 to 1
range.
- `Color::as_linear_rgba_f32` has been removed. Call
`LinearRgba::to_f32_array` instead, converting if needed.
- `Color::as_linear_rgba_u32` has been removed. Call
`LinearRgba::to_u32` instead, converting if needed.
- Several other color conversion methods to transform LCH or HSL colors
into float arrays or `Vec` types have been removed. Please reimplement
these externally or open a PR to re-add them if you found them
particularly useful.
- Various methods on `Color` such as `rgb` or `hsl` to convert the color
into a specific color space have been removed. Convert into
`LinearRgba`, then to the color space of your choice.
- Various implicitly-converting color value methods on `Color` such as
`r`, `g`, `b` or `h` have been removed. Please convert it into the color
space of your choice, then check these properties.
- `Color` no longer implements `AsBindGroup`. Store a `LinearRgba`
internally instead to avoid conversion costs.
---------
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Afonso Lage <lage.afonso@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>