A few versions ago, wgpu made it possible to set the shader entry point to
`None`, which selects the correct entry point when the shader module defines
only a single entry point. This makes it possible to implement
`Default` for pipeline descriptors. This PR does so and attempts to use
`..default()` everywhere possible.
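As a rough sketch of what this enables at the wgpu level (assuming wgpu ≥ 24, where `entry_point` is an `Option`; Bevy's own descriptor types build on the same idea):
```rust
// Hedged sketch: with `entry_point` now an `Option`, `None` lets wgpu pick the
// entry point automatically when the module defines exactly one, which is what
// makes a `Default`-friendly descriptor practical.
fn fragment_state(module: &wgpu::ShaderModule) -> wgpu::FragmentState<'_> {
    wgpu::FragmentState {
        module,
        entry_point: None, // resolved automatically for single-entry-point modules
        compilation_options: wgpu::PipelineCompilationOptions::default(),
        targets: &[],
    }
}
```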
# Objective
Upgrade to `wgpu` version `25.0`.
Depends on https://github.com/bevyengine/naga_oil/pull/121
## Solution
### Problem
The biggest issue we face upgrading is the following requirement:
> To facilitate this change, there was an additional validation rule put
in place: if there is a binding array in a bind group, you may not use
dynamic offset buffers or uniform buffers in that bind group. This
requirement comes from vulkan rules on UpdateAfterBind descriptors.
This is a major difficulty for us, as there are a number of binding
arrays that are used in the view bind group. Note that this requirement
does not affect merely uniform buffers that use dynamic offsets, but the
use of *any* uniform buffer in a bind group that also has a binding array.
### Attempted fixes
The easiest fix would be to change uniforms to be storage buffers
whenever binding arrays are in use:
```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
@group(0) @binding(0) var<storage> view: array<View>;
@group(0) @binding(1) var<storage> lights: array<types::Lights>;
#else
@group(0) @binding(0) var<uniform> view: View;
@group(0) @binding(1) var<uniform> lights: types::Lights;
#endif
```
This requires passing the view index to the shader so that we know where
to index into the buffer:
```wgsl
struct PushConstants {
    view_index: u32,
}
var<push_constant> push_constants: PushConstants;
```
Using push constants is no problem because binding arrays are only
usable on native anyway.
However, this greatly complicates the ability to access `view` in
shaders. For example:
```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
mesh_view_bindings::view[mesh_view_bindings::view_index].view_from_world[0].z
#else
mesh_view_bindings::view.view_from_world[0].z
#endif
```
Using this approach would work but would have the effect of polluting
our shaders with ifdef spam basically *everywhere*.
Why not use a function? Unfortunately, the following is not valid wgsl
as it returns a binding directly from a function in the uniform path.
```wgsl
fn get_view() -> View {
#ifdef BINDING_ARRAYS_ARE_USED
    let view_index = push_constants.view_index;
    let view = views[view_index];
#endif
    return view;
}
```
This also poses problems for things like lights where we want to return
a ptr to the light data. Returning ptrs from wgsl functions isn't
allowed even if both bindings were buffers.
The next attempt was to simply use indexed buffers everywhere, in both
the binding array and non-binding-array paths. This would be viable if
push constants were available everywhere to pass the view index, but
unfortunately they are not available on webgpu. This means either
passing the view index in a storage buffer (not ideal for such a small
amount of state) or using push constants on native and uniform buffers
only on webgpu. However, this kind of conditional layout infects
absolutely everything.
Even if we were to accept just using storage buffer for the view index,
there's also the additional problem that some dynamic offsets aren't
actually per-view but per-use of a setting on a camera, which would
require passing that uniform data on *every* camera regardless of
whether that rendering feature is being used, which is also gross.
As such, although it's gross, the simplest solution is just to bump binding
arrays into `@group(1)` and move all other bindings up one bind group. This
should still keep us under the device limit of 4 bind groups for most users.
### Next steps / looking towards the future
I'd like to avoid needing to split our view bind group into multiple parts.
In the future, if `wgpu` were to add `@builtin(draw_index)`, we could
build a list of draw state in gpu processing and avoid the need for any
kind of state change at all (see
https://github.com/gfx-rs/wgpu/issues/6823). This would also provide
significantly more flexibility to handle things like offsets into other
arrays that may not be per-view.
### Testing
Tested a number of examples, there are probably more that are still
broken.
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
# Objective
- Running `cargo run --package ci` on macOS does not currently work on
`main`.
- It shows an `error: this lint expectation is unfulfilled`.
- Fixes #19583
## Solution
- Remove an unnecessary `#[expect(clippy::large_enum_variant)]` on a
function.
## Testing
- `cargo run --package ci`: 👍
# Bevy Solari
<img
src="https://github.com/user-attachments/assets/94061fc8-01cf-4208-b72a-8eecad610d76"
width="100" />
## Preface
- See release notes.
- Please talk to me in #rendering-dev on discord or open a github
discussion if you have questions about the long term plan, and keep
discussion in this PR limited to the contents of the PR :)
## Connections
- Works towards #639, #16408.
- Spawned https://github.com/bevyengine/bevy/issues/18993.
- Need to fix RT stuff in naga_oil first
https://github.com/bevyengine/naga_oil/pull/116.
## This PR
After nearly two years, I've revived the raytraced lighting effort I
first started in https://github.com/bevyengine/bevy/pull/10000.
Unlike that PR, which has realtime techniques, I've limited this PR to:
* `RaytracingScenePlugin` - BLAS and TLAS building, geometry and texture
binding, sampling functions.
* `PathtracingPlugin` - A non-realtime path tracer intended to serve as
a testbed and reference.
## What's implemented?

* BLAS building on mesh load
* Emissive lights
* Directional lights with soft shadows
* Diffuse (lambert, not Bevy's diffuse BRDF) and emissive materials
* A reference path tracer with:
  * Antialiasing
  * Direct light sampling (next event estimation) with 0/1 MIS weights
  * Importance-sampled BRDF bounces
  * Russian roulette
## What's _not_ implemented?
* Anything realtime, including a real-time denoiser
* Integration with Bevy's rasterized gbuffer
* Specular materials
* Non-opaque geometry
* Any sort of CPU or GPU optimizations
* BLAS compaction, proper bindless, and further RT APIs are things that
we need wgpu to add
* PointLights, SpotLights, or skyboxes / environment lighting
* Support for materials other than StandardMaterial (and only a subset
of properties are supported)
* Skinned/morphed or otherwise animating/deformed meshes
* Mipmaps
* Adaptive self-intersection ray bias
* A good way for developers to detect whether the user's GPU supports RT
or not, and fallback to baked lighting.
* Documentation and actual finalized APIs (literally everything is
subject to change)
## End-user Usage
* Have a GPU that supports RT with inline ray queries
* Add `SolariPlugin` to your app
* Ensure any `Mesh` asset you want to use for raytracing has
`enable_raytracing: true` (defaults to true), and that it uses the
standard uncompressed position/normal/uv_0/tangent vertex attribute set,
triangle list topology, and 32-bit indices.
* If you don't want to build a BLAS and use the mesh for RT, set
`enable_raytracing` to false.
* Add the `RaytracingMesh3d` component to your entity (separate from
`Mesh3d` or `MeshletMesh3d`); a rough sketch follows this list.
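A hedged sketch of the steps above (the `bevy::solari` module path, the tuple-struct construction, and the asset path are assumptions for illustration; see the `solari.rs` example for the real setup):
```rust
use bevy::prelude::*;
use bevy::solari::{RaytracingMesh3d, SolariPlugin};

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, SolariPlugin))
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    // The mesh must keep `enable_raytracing: true` (the default) and use the
    // standard position/normal/uv_0/tangent attributes with 32-bit indices.
    let mesh: Handle<Mesh> = asset_server.load("models/example.glb#Mesh0/Primitive0");

    // Raytracing uses its own component, separate from `Mesh3d`/`MeshletMesh3d`.
    commands.spawn(RaytracingMesh3d(mesh));
}
```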
## Testing
- Did you test these changes? If so, how?
- Ran the solari example.
- Are there any parts that need more testing?
- Other test scenes probably. Normal mapping would be good to test.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- See the solari.rs example for how to setup raytracing.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- Windows 11, NVIDIA RTX 3080.
---------
Co-authored-by: atlv <email@atlasdostal.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- A step towards splitting out bevy_camera from bevy_render
## Solution
- Move a shim type into bevy_utils to avoid a dependency cycle
- Manually expand Deref/DerefMut to avoid having a bevy_derive
dependency so early in the dep tree
## Testing
- It compiles
# Objective
- transitive shader imports sometimes silently fail to load and return
Ok
- Fixes #19226
## Solution
- Don't return Ok, return the appropriate error code which will retry
the load later when the dependencies load
## Testing
- `bevy run --example=3d_scene web --open`
Note: this was theoretically a problem before the hot reloading PR,
but probably extremely unlikely to occur.
# Objective
- Get in-engine shader hot reloading working
## Solution
- Adopt #12009
- Cut back on everything possible to land an MVP: we only hot-reload PBR
in deferred shading mode. This is to minimize the diff and avoid merge
hell. The rest shall come in followups.
## Testing
- `cargo run --example pbr --features="embedded_watcher"` and edit some
pbr shader code
# Objective
Fixes a part of #14274.
Bevy has an incredibly inconsistent naming convention for its system
sets, both internally and across the ecosystem.
<img alt="System sets in Bevy"
src="https://github.com/user-attachments/assets/d16e2027-793f-4ba4-9cc9-e780b14a5a1b"
width="450" />
*Names of public system set types in Bevy*
Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are
also a few `FooSystems` and `FooSet` types. In ecosystem crates on the
other hand, `FooSet` is perhaps the most commonly used name in general.
Conventions being so wildly inconsistent can make it harder for users to
pick names for their own types, to search for system sets on docs.rs, or
to even discern which types *are* system sets.
To rein in the inconsistency a bit and help unify the ecosystem, it
would be good to establish a common recommended naming convention for
system sets in Bevy itself, similar to how plugins are commonly suffixed
with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming
convention in first-party Bevy, we can softly nudge ecosystem crates to
follow suit (for types where it makes sense to do so).
Choosing a naming convention is also relevant now, as the [`bevy_cli`
recently adopted
lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce
naming for plugins and system sets, and the recommended naming used for
system sets is still a bit open.
## Which Name To Use?
Now the contentious part: what naming convention should we actually
adopt?
This was discussed on the Bevy Discord at the end of last year, starting
[here](<https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789>).
`FooSet` and `FooSystems` were the clear favorites, with `FooSet` very
narrowly winning an unofficial poll. However, it seems to me like the
consensus was broadly moving towards `FooSystems` at the end and after
the poll, with Cart
([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708))
and later Alice
([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533))
and also me being in favor of it.
Let's do a quick pros and cons list! Of course these are just what I
thought of, so take it with a grain of salt.
`FooSet`:
- Pro: Nice and short!
- Pro: Used by many ecosystem crates.
- Pro: The `Set` suffix comes directly from the trait name `SystemSet`.
- Pro: Pairs nicely with existing APIs like `in_set` and
`configure_sets`.
- Con: `Set` by itself doesn't actually indicate that it's related to
systems *at all*, apart from the implemented trait. A set of what?
- Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex:
`ContactSet`, `MeshSet`, `EnemySet`...
`FooSystems`:
- Pro: Very clearly indicates that the type represents a collection of
systems. The actual core concept, system(s), is in the name.
- Pro: Parallels nicely with `FooPlugins` for plugin groups.
- Pro: Low risk of conflicts with other names or misunderstandings about
what the type is.
- Pro: In most cases, reads *very* nicely and clearly. Ex:
`PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and
`AnimationSet`.
- Pro: Easy to search for on docs.rs.
- Con: Usually results in longer names.
- Con: Not yet as widely used.
Really the big problem with `FooSet` is that it doesn't actually
describe what it is. It describes what *kind of thing* it is (a set of
something), but not *what it is a set of*, unless you know the type or
check its docs or implemented traits. `FooSystems` on the other hand is
much more self-descriptive in this regard, at the cost of being a bit
longer to type.
Ultimately, in some ways it comes down to preference and how you think
of system sets. Personally, I was originally in favor of `FooSet`, but
have been increasingly on the side of `FooSystems`, especially after
seeing what the new names would actually look like in Avian and now
Bevy. I prefer it because it usually reads better, is much more clearly
related to groups of systems than `FooSet`, and overall *feels* more
correct and natural to me in the long term.
For these reasons, and because Alice and Cart also seemed to share a
preference for it when it was previously being discussed, I propose that
we adopt a `FooSystems` naming convention where applicable.
## Solution
Rename Bevy's system set types to use a consistent `FooSystems` naming where
applicable.
- `AccessibilitySystem` → `AccessibilitySystems`
- `GizmoRenderSystem` → `GizmoRenderSystems`
- `PickSet` → `PickingSystems`
- `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems`
- `TransformSystem` → `TransformSystems`
- `RemoteSet` → `RemoteSystems`
- `RenderSet` → `RenderSystems`
- `SpriteSystem` → `SpriteSystems`
- `StateTransitionSteps` → `StateTransitionSystems`
- `RenderUiSystem` → `RenderUiSystems`
- `UiSystem` → `UiSystems`
- `Animation` → `AnimationSystems`
- `AssetEvents` → `AssetEventSystems`
- `TrackAssets` → `AssetTrackingSystems`
- `UpdateGizmoMeshes` → `GizmoMeshSystems`
- `InputSystem` → `InputSystems`
- `InputFocusSet` → `InputFocusSystems`
- `ExtractMaterialsSet` → `MaterialExtractionSystems`
- `ExtractMeshesSet` → `MeshExtractionSystems`
- `RumbleSystem` → `RumbleSystems`
- `CameraUpdateSystem` → `CameraUpdateSystems`
- `ExtractAssetsSet` → `AssetExtractionSystems`
- `Update2dText` → `Text2dUpdateSystems`
- `TimeSystem` → `TimeSystems`
- `AudioPlaySet` → `AudioPlaybackSystems`
- `SendEvents` → `EventSenderSystems`
- `EventUpdates` → `EventUpdateSystems`
A lot of the names got slightly longer, but they are also a lot more
consistent, and in my opinion the majority of them read much better. For
a few of the names I took the liberty of rewording things a bit;
definitely open to any further naming improvements.
There are still also cases where the `FooSystems` naming doesn't really
make sense, and those I left alone. This primarily includes system sets
like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`,
or `TransitionSchedules<S>`, where the type has some special purpose and
semantics.
## Todo
- [x] Should I keep all the old names as deprecated type aliases? I can
do this, but to avoid wasting work I'd prefer to first reach consensus
on whether these renames are even desired.
- [x] Migration guide
- [x] Release notes
# Objective
The goal of `bevy_platform_support` is to provide a set of platform
agnostic APIs, alongside platform-specific functionality. This is a high
traffic crate (providing things like HashMap and Instant). Especially in
light of https://github.com/bevyengine/bevy/discussions/18799, it
deserves a friendlier / shorter name.
Given that it hasn't had a full release yet, getting this change in
before Bevy 0.16 makes sense.
## Solution
- Rename `bevy_platform_support` to `bevy_platform`.
This fixes a panic that occurs if one calls
`PipelineCache::get_render_pipeline_state(id)` or
`PipelineCache::get_compute_pipeline_state(id)` with a queued pipeline
id that has not yet been processed by `PipelineCache::process_queue()`.
```
thread 'Compute Task Pool (0)' panicked at [...]/bevy/crates/bevy_render/src/render_resource/pipeline_cache.rs:611:24:
index out of bounds: the len is 0 but the index is 20
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
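For illustration, a hedged sketch of the call pattern involved (import paths and state handling here are assumptions, not code from this fix):
```rust
use bevy::render::render_resource::{
    CachedPipelineState, CachedRenderPipelineId, PipelineCache,
};

// Before this fix, calling this with an id that was queued but not yet
// processed by `process_queue()` indexed out of bounds and panicked;
// afterwards the lookup no longer panics for such ids.
fn pipeline_ready(cache: &PipelineCache, id: CachedRenderPipelineId) -> bool {
    matches!(
        cache.get_render_pipeline_state(id),
        CachedPipelineState::Ok(_)
    )
}
```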
# Objective
- Allow bevy and wgpu developers to test newer versions of wgpu without
having to update naga_oil.
## Solution
- Currently bevy feeds wgsl through naga_oil to get a naga::Module that
it passes to wgpu.
- Added a way to pass wgsl through naga_oil, and then serialize the
naga::Module back into a wgsl string to feed to wgpu, allowing wgpu to
parse it using its internal version of naga (and not the version of
naga bevy_render/naga_oil is using).
## Testing
1. Run 3d_scene (it works)
2. Run 3d_scene with `--features bevy_render/decoupled_naga` (it still
works)
3. Add the following patch to bevy/Cargo.toml, run cargo update, and
compile again (it will fail)
```toml
[patch.crates-io]
wgpu = { git = "https://github.com/gfx-rs/wgpu", rev = "2764e7a39920e23928d300e8856a672f1952da63" }
wgpu-core = { git = "https://github.com/gfx-rs/wgpu", rev = "2764e7a39920e23928d300e8856a672f1952da63" }
wgpu-hal = { git = "https://github.com/gfx-rs/wgpu", rev = "2764e7a39920e23928d300e8856a672f1952da63" }
wgpu-types = { git = "https://github.com/gfx-rs/wgpu", rev = "2764e7a39920e23928d300e8856a672f1952da63" }
```
4. Fix errors and compile again (it will work, and you didn't have to
touch naga_oil)
# Objective
WESL's pre-MVP `0.1.0` has been
[released](https://docs.rs/wesl/latest/wesl/)!
Add support for WESL shader source so that we can begin playing and
testing WESL, as well as aiding in their development.
## Solution
Adds a `ShaderSource::WESL` that can be used to load `.wesl` shaders.
Right now, we don't support mixing with `naga-oil`. Additionally, WESL
shaders currently need to pass through the naga frontend, which the WESL
team is aware isn't great for performance (they're working on compiling
into naga modules). Also, since our shaders are managed using the asset
system, we don't currently support using file-based imports like `super`
or package-scoped imports. Further work will be needed to assess how we
want to support this.
---
## Showcase
See the `shader_material_wesl` example. Be sure to press space to
activate party mode (trigger conditional compilation)!
https://github.com/user-attachments/assets/ec6ad19f-b6e4-4e9d-a00f-6f09336b08a4
# Objective
- Fixes #17960
## Solution
- Followed the [edition upgrade
guide](https://doc.rust-lang.org/edition-guide/editions/transitioning-an-existing-project-to-a-new-edition.html)
## Testing
- CI
---
## Summary of Changes
### Documentation Indentation
When using lists in documentation, proper indentation is now linted for.
This means subsequent lines within the same list item must start at the
same indentation level as the item.
```rust
/* Valid */
/// - Item 1
///   Run-on sentence.
/// - Item 2
struct Foo;

/* Invalid */
/// - Item 1
/// Run-on sentence.
/// - Item 2
struct Foo;
```
### Implicit `!` to `()` Conversion
`!` (the never return type, returned by `panic!`, etc.) no longer
implicitly converts to `()`. This is particularly painful for systems
with `todo!` or `panic!` statements, as they will no longer be functions
returning `()` (or `Result<()>`), making them invalid systems for
functions like `add_systems`. The ideal fix would be to accept functions
returning `!` (or rather, _not_ returning), but this is blocked on the
[stabilisation of the `!` type
itself](https://doc.rust-lang.org/std/primitive.never.html), which is
not done.
The "simple" fix would be to add an explicit `-> ()` to system
signatures (e.g., `|| { todo!() }` becomes `|| -> () { todo!() }`).
However, this is _also_ banned, as there is an existing lint which (IMO,
incorrectly) marks this as an unnecessary annotation.
So, the "fix" (read: workaround) is to put these kinds of `|| -> ! { ...
}` closuers into variables and give the variable an explicit type (e.g.,
`fn()`).
```rust
// Valid
let system: fn() = || todo!("Not implemented yet!");
app.add_systems(..., system);
// Invalid
app.add_systems(..., || todo!("Not implemented yet!"));
```
### Temporary Variable Lifetimes
The order in which temporary variables are dropped has changed. The
simple fix here is _usually_ to just assign temporaries to a named
variable before use.
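A hedged sketch of that pattern (a generic `Mutex` example, not code from this PR):
```rust
use std::sync::Mutex;

fn main() {
    let data = Mutex::new(vec![1, 2, 3]);

    // If the drop timing of a temporary (here, the lock guard) matters, bind it
    // to a named variable so its lifetime no longer depends on edition-specific
    // temporary-drop rules.
    let guard = data.lock().unwrap();
    let len = guard.len();
    drop(guard);

    println!("{len}");
}
```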
### `gen` is a keyword
We can no longer use the name `gen` as it is reserved for a future
generator syntax. This involved replacing uses of the name `gen` with
`r#gen` (the raw-identifier syntax).
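A minimal illustration of the rename (hypothetical variable for demonstration):
```rust
fn main() {
    // 2021 edition: `let gen = 4;` compiled fine.
    // 2024 edition: `gen` is reserved, so the raw-identifier form is required.
    let r#gen = 4;
    println!("{}", r#gen);
}
```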
### Formatting has changed
Use statements have had the order of imports changed, causing a
substantial +/-3,000 diff when applied. For now, I have opted-out of
this change by amending `rustfmt.toml`
```toml
style_edition = "2021"
```
This preserves the original formatting for now, reducing the size of
this PR. It would be a simple followup to update this to 2024 and run
`cargo fmt`.
### New `use<>` Opt-Out Syntax
Lifetimes are now implicitly included in RPIT types. There were a handful
of instances where the `use<>` opt-out needed to be added to satisfy the
borrow checker, but there may be more cases where it _should_ be added to
avoid breakages in user code.
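A minimal sketch of what the opt-out can look like (hypothetical function, not code from this PR):
```rust
// Under the 2024 capture rules the returned `impl Trait` would implicitly
// capture the input lifetime; `+ use<>` opts out of capturing any generics,
// so the iterator is not tied to the borrow of `ids`.
fn copied_ids(ids: &[u32]) -> impl Iterator<Item = u32> + use<> {
    ids.to_vec().into_iter()
}

fn main() {
    let ids = vec![1, 2, 3];
    let iter = copied_ids(&ids);
    drop(ids); // fine: the iterator does not borrow from `ids`
    assert_eq!(iter.sum::<u32>(), 6);
}
```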
### `MyUnitStruct { .. }` is an invalid pattern
Previously, you could match against unit structs (and unit enum
variants) with a `{ .. }` destructuring. This is no longer valid.
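A small illustration of the change (hypothetical `Marker` type):
```rust
struct Marker;

fn describe(value: Marker) -> &'static str {
    match value {
        // Previously `Marker { .. }` was also accepted here; per the edition
        // change described above, the unit struct must now be matched directly.
        Marker => "unit struct",
    }
}

fn main() {
    println!("{}", describe(Marker));
}
```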
### Pretty much every use of `ref` and `mut` are gone
Pattern binding has changed to the point where these terms are largely
unused now. They still serve a purpose, but it is far more niche now.
### `iter::repeat(...).take(...)` is bad
New lint recommends using the more explicit `iter::repeat_n(..., ...)`
instead.
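For example, a small sketch of the suggested change:
```rust
fn main() {
    // Flagged by the new lint:
    let old: Vec<u32> = std::iter::repeat(7).take(3).collect();
    // Recommended replacement:
    let new: Vec<u32> = std::iter::repeat_n(7, 3).collect();
    assert_eq!(old, new);
}
```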
## Migration Guide
The lifetimes of functions using return-position impl-trait (RPIT) are
likely _more_ conservative than they had been previously. If you
encounter lifetime issues with such a function, please create an issue
to investigate the addition of `+ use<...>`.
## Notes
- Check the individual commits for a clearer breakdown for what
_actually_ changed.
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Make checked vs unchecked shaders configurable
Fixes #17786
## Solution
Added `ValidateShaders` enum to `Shader` and added
`create_and_validate_shader_module` to `RenderDevice`
## Testing
I tested the shader examples locally and they all worked. I'd like to
write a few tests to verify but am unsure how to start.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
## Objective
Get rid of a redundant Cargo feature flag.
## Solution
Use the built-in `target_abi = "sim"` cfg, which is set for the iOS (and
visionOS and tvOS) simulator, instead of a custom Cargo feature flag. This
has been stable since Rust 1.78.
In the future, some of this may become redundant if wgpu implements
proper support for the iOS Simulator:
https://github.com/gfx-rs/wgpu/issues/7057
CC @mockersf who implemented [the original
fix](https://github.com/bevyengine/bevy/pull/10178).
## Testing
- Open mobile example in Xcode.
- Launch the simulator.
- See that no errors are emitted.
- Remove the code cfg-guarded behind `target_abi = "sim"`.
- See that an error now happens.
(I haven't actually performed these steps on the latest `main`, because
I'm hitting an unrelated error (EDIT: It was
https://github.com/bevyengine/bevy/pull/17637). But tested it on
0.15.0).
---
## Migration Guide
> If you're using a project that builds upon the mobile example, remove
the `ios_simulator` feature from your `Cargo.toml` (Bevy now handles
this internally).
Didn't remove WgpuWrapper. Not sure if it's still needed or not.
## Testing
- Did you test these changes? If so, how? Example runner
- Are there any parts that need more testing? Web (portable atomics
thingy?), DXC.
## Migration Guide
- Bevy has upgraded to [wgpu
v24](https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v2400-2025-01-15).
- When using the DirectX 12 rendering backend, the new priority system
for choosing a shader compiler is as follows:
- If the `WGPU_DX12_COMPILER` environment variable is set at runtime, it
is used
- Else if the new `statically-linked-dxc` feature is enabled, a custom
version of DXC will be statically linked into your app at compile time.
- Else Bevy will look in the app's working directory for
`dxcompiler.dll` and `dxil.dll` at runtime.
- Else if they are missing, Bevy will fall back to FXC (not recommended)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Contributes to #16877
## Solution
- Moved `hashbrown`, `foldhash`, and related types out of `bevy_utils`
and into `bevy_platform_support`
- Refactored the above to match the layout of these types in `std`.
- Updated crates as required.
## Testing
- CI
---
## Migration Guide
- The following items were moved out of `bevy_utils` and into
`bevy_platform_support::hash`:
- `FixedState`
- `DefaultHasher`
- `RandomState`
- `FixedHasher`
- `Hashed`
- `PassHash`
- `PassHasher`
- `NoOpHash`
- The following items were moved out of `bevy_utils` and into
`bevy_platform_support::collections`:
- `HashMap`
- `HashSet`
- `bevy_utils::hashbrown` has been removed. Instead, import from
`bevy_platform_support::collections` _or_ take a dependency on
`hashbrown` directly.
- `bevy_utils::Entry` has been removed. Instead, import from
`bevy_platform_support::collections::hash_map` or
`bevy_platform_support::collections::hash_set` as appropriate.
- All of the above equally apply to `bevy::utils` and
`bevy::platform_support`.
## Notes
- I left `PreHashMap`, `PreHashMapExt`, and `TypeIdMap` in `bevy_utils`
as they might be candidates for micro-crating. They can always be moved
into `bevy_platform_support` at a later date if desired.
# Objective
`bevy_ecs`'s `system` module is something of a grab bag, and *very*
large. This is particularly true for the `system_param` module, which is
more than 2k lines long!
While it could be defensible to put `Res` and `ResMut` there (lol no
they're in change_detection.rs, obviously), it doesn't make any sense to
put the `Resource` trait there. This is confusing to navigate (and
painful to work on and review).
## Solution
- Create a root level `bevy_ecs/resource.rs` module to mirror
`bevy_ecs/component.rs`
- move the `Resource` trait to that module
- move the `Resource` derive macro to that module as well (Rust really
likes when you pun on the names of the derive macro and trait and put
them in the same path)
- fix all of the imports
## Notes to reviewers
- We could probably move more stuff into here, but I wanted to keep this
PR as small as possible given the absurd level of import changes.
- This PR is ground work for my upcoming attempts to store resource data
on components (resources-as-entities). Splitting this code out will make
the work and review a bit easier, and is the sort of overdue refactor
that's good to do as part of more meaningful work.
## Testing
cargo build works!
## Migration Guide
`bevy_ecs::system::Resource` has been moved to
`bevy_ecs::resource::Resource`.
# Objective
Stumbled upon a `from <-> form` transposition while reviewing a PR,
thought it was interesting, and went down a bit of a rabbit hole.
## Solution
Fix em
# Objective
- https://github.com/bevyengine/bevy/issues/17111
## Solution
Set the `clippy::allow_attributes` and
`clippy::allow_attributes_without_reason` lints to `deny`, and bring
`bevy_render` in line with the new restrictions.
## Testing
`cargo clippy` and `cargo test --package bevy_render` were run, and no
errors were encountered.
# Objective
- Contributes to #11478
## Solution
- Made `bevy_utils::tracing` `doc(hidden)`
- Re-exported `tracing` from `bevy_log` for end-users
- Added `tracing` directly to crates that need it.
## Testing
- CI
---
## Migration Guide
If you were importing `tracing` via `bevy::utils::tracing`, instead use
`bevy::log::tracing`. Note that many items within `tracing` are also
directly re-exported from `bevy::log` as well, so you may only need
`bevy::log` for the most common items (e.g., `warn!`, `trace!`, etc.).
This also applies to the `log_once!` family of macros.
## Notes
- While this doesn't reduce the line-count in `bevy_utils`, it further
decouples the internal crates from `bevy_utils`, making its eventual
removal more feasible in the future.
- I have just imported `tracing` as we do for all dependencies. However,
a workspace dependency may be more appropriate for version management.
# Objective
- Related to https://github.com/bevyengine/bevy/issues/11478
## Solution
- Moved `futures.rs`, `ConditionalSend` `ConditionalSendFuture` and
`BoxedFuture` from `bevy_utils` to `bevy_tasks`.
## Testing
- CI checks
## Migration Guide
- Several modules were moved from `bevy_utils` into `bevy_tasks`:
- Replace `bevy_utils::futures` imports with `bevy_tasks::futures`.
- Replace `bevy_utils::ConditionalSend` with
`bevy_tasks::ConditionalSend`.
- Replace `bevy_utils::ConditionalSendFuture` with
`bevy_tasks::ConditionalSendFuture`.
- Replace `bevy_utils::BoxedFuture` with `bevy_tasks::BoxedFuture`.
# Objective
Fixes #16104
## Solution
I removed all instances of `:?` and put them back one by one where it
caused an error.
I removed some bevy_utils helper functions that were only used in 2
places and don't add value. See: #11478
## Testing
CI should catch the mistakes
## Migration Guide
`bevy::utils::{dbg,info,warn,error}` were removed. Use
`bevy::utils::tracing::{debug,info,warn,error}` instead.
---------
Co-authored-by: SpecificProtagonist <vincentjunge@posteo.net>
# Objective
- Remove `derive_more`'s error derivation and replace it with
`thiserror`
## Solution
- Added `derive_more`'s `error` feature to `deny.toml` to prevent it
sneaking back in.
- Reverted to `thiserror` error derivation
## Notes
Merge conflicts were too numerous to revert the individual changes, so
this reversion was done manually. Please scrutinise carefully during
review.
# Objective
- wgpu 0.20 made workgroup vars stop being zero-init by default. This
broke some applications (cough foresight cough) and now we work around
it. wgpu exposes a compilation option that zero-initializes workgroup
memory by default, but bevy does not expose it.
## Solution
- expose the compilation option wgpu gives us
## Testing
- ran examples: 3d_scene, compute_shader_game_of_life, gpu_readback,
lines, specialized_mesh_pipeline. they all work
- confirmed fix for our own problems
---
## Migration Guide
- Add `zero_initialize_workgroup_memory: false,` to
`ComputePipelineDescriptor` or `RenderPipelineDescriptor` structs to
preserve the 0.14 behavior, or add `zero_initialize_workgroup_memory:
true,` to restore the Bevy 0.13 behavior.
# Objective
* Remove all uses of render_resource_wrapper.
* Make it easier to share a `wgpu::Device` between Bevy and application
code.
## Solution
Removed the `render_resource_wrapper` macro.
To improve the `RenderCreation::Manual` API, `ErasedRenderDevice` was
replaced by `Arc`. Unfortunately I had to introduce one more usage of
`WgpuWrapper`, which seems like an unwanted constraint on the caller.
## Testing
- Did you test these changes? If so, how?
- Ran `cargo test`.
- Ran a few examples.
- Used `RenderCreation::Manual` in my own project
- Exercised `RenderCreation::Automatic` through examples
- Are there any parts that need more testing?
- No
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run examples
- Use `RenderCreation::Manual` in their own project
# Objective
- Fixes #6370
- Closes #6581
## Solution
- Added the following lints to the workspace (a small sketch follows this list):
- `std_instead_of_core`
- `std_instead_of_alloc`
- `alloc_instead_of_core`
- Used `cargo +nightly fmt` with [item level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Item%5C%3A)
to split all `use` statements into single items.
- Used `cargo clippy --workspace --all-targets --all-features --fix
--allow-dirty` to _attempt_ to resolve the new linting issues, and
intervened where the lint was unable to resolve the issue automatically
(usually due to needing an `extern crate alloc;` statement in a crate
root).
- Manually removed certain uses of `std` where negative feature gating
prevented `--all-features` from finding the offending uses.
- Used `cargo +nightly fmt` with [crate level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Crate%5C%3A)
to re-merge all `use` statements matching Bevy's previous styling.
- Manually fixed cases where the `fmt` tool could not re-merge `use`
statements due to conditional compilation attributes.
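A tiny sketch of the style the new lints push contributors toward (illustrative only, not code from this PR):
```rust
// `std_instead_of_core` / `std_instead_of_alloc` flag paths like
// `std::time::Duration` or `std::vec::Vec` when `core`/`alloc` provide them.
extern crate alloc;

use alloc::vec::Vec;      // rather than `std::vec::Vec`
use core::time::Duration; // rather than `std::time::Duration`

fn main() {
    let frames: Vec<Duration> = vec![Duration::from_millis(16); 3];
    assert_eq!(frames.len(), 3);
}
```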
## Testing
- Ran CI locally
## Migration Guide
The MSRV is now 1.81. Please update to this version or higher.
## Notes
- This is a _massive_ change to try and push through, which is why I've
outlined the semi-automatic steps I used to create this PR, in case this
fails and someone else tries again in the future.
- Making this change has no impact on user code, but does mean Bevy
contributors will be warned to use `core` and `alloc` instead of `std`
where possible.
- This lint is a critical first step towards investigating `no_std`
options for Bevy.
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Fixes #14782
## Solution
Enable the lint and fix all upcoming hints (`--fix`). Also tried to
figure out the false positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
can already be merged, resulting in fewer files touched and less
potential for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import of it and adapt
those cases? At the current stage it's just a plain adoption of the
suggestions in order to have a base to discuss.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
Upgrading to WGPU 22.
Needs `naga_oil` to upgrade first, I've got a fork that compiles but
fails tests, so until that's fixed and the crate is officially
updated/released this will be blocked.
---------
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Currently blocked on https://github.com/gfx-rs/wgpu/issues/5774
# Objective
Update to wgpu 0.20
## Solution
Update to wgpu 0.20 and naga_oil 0.14.
## Testing
Tested a few different examples on linux (vulkan, webgl2, webgpu) and
windows (dx12 + vulkan) and they worked.
---
## Changelog
- Updated to wgpu 0.20. Note that we don't currently support wgpu's new
pipeline overridable constants, as they don't work on web currently and
need some more changes to naga_oil (and are somewhat redundant with
naga_oil's shader defs). See wgpu's changelog for more
https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v0200-2024-04-28
## Migration Guide
TODO
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
The error printed out due to a missing shader file was confusing; this
PR changes the error message.
Fixes #13644
## Solution
I replaced the confusing wording (`... shader is not loaded yet`) with a
clear explanation (`... shader could not be loaded`)
## Testing
> Did you test these changes? If so, how?
removing `assets/shaders/game_of_life.wgsl` & running its associated
example now produces the following error:
```
thread '<unnamed>' panicked at examples/shader/compute_shader_game_of_life.rs:233:25:
Initializing assets/shaders/game_of_life.wgsl:
Pipeline could not be compiled because the following shader could not be loaded: AssetId<bevy_render::render_resource::shader::Shader>{ index: 0, generation: 0}
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Encountered a panic in system `bevy_render::renderer::render_system`!
```
I don't think there are any tests expecting the previous error message,
so this change should not break anything.
> Are there any parts that need more testing?
If there was an intent behind the original message, this might need more
attention.
> How can other people (reviewers) test your changes? Is there anything
specific they need to know?
One should be able to preview the changes by running any example after
deleting/renaming their associated shader(s).
> If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
N/A
# Objective
Fixes #12966
## Solution
Renamed the `multi-threaded` feature to `multi_threaded` to match snake case
## Migration Guide
The Bevy feature `multi-threaded` should be referred to as `multi_threaded`
from now on.
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0
# Objective
- Adds per-object motion blur to the core 3d pipeline. This is a common
effect used in games and other simulations.
- Partially resolves #4710
## Solution
- This is a post-process effect that uses the depth and motion vector
buffers to estimate per-object motion blur. The implementation is
combined from knowledge from multiple papers and articles. The approach
itself, and the shader are quite simple. Most of the effort was in
wiring up the bevy rendering plumbing, and properly specializing for HDR
and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is
required. I've extracted this code from #9000. This is because the
prepass buffers are multisampled, and require accessing with
`textureLoad` as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.
## Future Improvements
- While this approach does have limitations, it's one of the most
commonly used, and is much better than camera motion blur, which does
not consider object velocity. For example, this implementation allows a
dolly to track an object, and that object will remain unblurred while
the background is blurred. The biggest issue with this implementation is
that blur is constrained to the boundaries of objects which results in
hard edges. There are solutions to this by either dilating the object or
the motion vector buffer, or by taking a different approach such as
https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be
replaced with a blue noise texture lookup or similar, however after
playing with the parameters, it gives quite nice results with 4 samples,
and is significantly better than the artifacts generated when not
jittering.
---
## Changelog
- Added: per-object motion blur. This can be enabled and configured by
adding the `MotionBlurBundle` to a camera entity.
---------
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
- Add a way to easily get currently waiting pipelines IDs.
## Solution
- Added a method to get waiting pipelines `CachedPipelineId`.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
Today, we sort all entities added to all phases, even the phases that
don't strictly need sorting, such as the opaque and shadow phases. This
results in a performance loss because our `PhaseItem`s are rather large
in memory, so sorting is slow. Additionally, determining the boundaries
of batches is an O(n) process.
This commit makes Bevy instead place phase items into *bins* keyed by
*bin keys*, which have the invariant that everything in the same bin is
potentially batchable. This makes determining batch boundaries O(1),
because everything in the same bin can be batched.
Instead of sorting each entity, we now sort only the bin keys. This
drops the sorting time to near-zero on workloads with few bins like
`many_cubes --no-frustum-culling`. Memory usage is improved too, with
batch boundaries and dynamic indices now implicit instead of explicit.
The improved memory usage results in a significant win even on
unbatchable workloads like `many_cubes --no-frustum-culling
--vary-material-data-per-instance`, presumably due to cache effects.
Not all phases can be binned; some, such as transparent and transmissive
phases, must still be sorted. To handle this, this commit splits
`PhaseItem` into `BinnedPhaseItem` and `SortedPhaseItem`. Most of the
logic that today deals with `PhaseItem`s has been moved to
`SortedPhaseItem`. `BinnedPhaseItem` has the new logic.
Frame time results (in ms/frame) are as follows:
| Benchmark | `binning` | `main` | Speedup |
| ------------------------ | --------- | ------- | ------- |
| `many_cubes -nfc -vpi` | 232.179 | 312.123 | 34.43% |
| `many_cubes -nfc` | 25.874 | 30.117 | 16.40% |
| `many_foxes` | 3.276 | 3.515 | 7.30% |
(`-nfc` is short for `--no-frustum-culling`; `-vpi` is short for
`--vary-per-instance`.)
---
## Changelog
### Changed
* Render phases have been split into binned and sorted phases. Binned
phases, such as the common opaque phase, achieve improved CPU
performance by avoiding the sorting step.
## Migration Guide
- `PhaseItem` has been split into `BinnedPhaseItem` and
`SortedPhaseItem`. If your code has custom `PhaseItem`s, you will need
to migrate them to one of these two types. `SortedPhaseItem` requires
the fewest code changes, but you may want to pick `BinnedPhaseItem` if
your phase doesn't require sorting, as that enables higher performance.
## Tracy graphs
`many-cubes --no-frustum-culling`, `main` branch:
<img width="1064" alt="Screenshot 2024-03-12 180037"
src="https://github.com/bevyengine/bevy/assets/157897/e1180ce8-8e89-46d2-85e3-f59f72109a55">
`many-cubes --no-frustum-culling`, this branch:
<img width="1064" alt="Screenshot 2024-03-12 180011"
src="https://github.com/bevyengine/bevy/assets/157897/0899f036-6075-44c5-a972-44d95895f46c">
You can see that `batch_and_prepare_binned_render_phase` is a much
smaller fraction of the time. Zooming in on that function, with yellow
being this branch and red being `main`, we see:
<img width="1064" alt="Screenshot 2024-03-12 175832"
src="https://github.com/bevyengine/bevy/assets/157897/0dfc8d3f-49f4-496e-8825-a66e64d356d0">
The binning happens in `queue_material_meshes`. Again with yellow being
this branch and red being `main`:
<img width="1064" alt="Screenshot 2024-03-12 175755"
src="https://github.com/bevyengine/bevy/assets/157897/b9b20dc1-11c8-400c-a6cc-1c2e09c1bb96">
We can see that there is a small regression in `queue_material_meshes`
performance, but it's not nearly enough to outweigh the large gains in
`batch_and_prepare_binned_render_phase`.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
# Objective
Memory usage optimisation
## Solution
`HashMap` and `HashSet`'s keys are immutable. So using mutable types
like `String`, `Vec<T>`, or `PathBuf` as a key is a waste of memory:
they have an extra `usize` for their capacity and may have spare
capacity.
This PR replaces these types by their immutable equivalents `Box<str>`,
`Box<[T]>`, and `Box<Path>`.
For more context, I recommend watching the [Use Arc Instead of
Vec](https://www.youtube.com/watch?v=A4cKi7PTJSs) video.
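An illustrative sketch of the idea (generic Rust, not Bevy code): keys never need to grow after insertion, so boxed slices avoid carrying a capacity field or spare allocation.
```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

fn main() {
    // `Box<str>` instead of `String`: same lookup ergonomics, smaller key.
    let mut by_name: HashMap<Box<str>, u32> = HashMap::new();
    by_name.insert(String::from("player").into_boxed_str(), 1);

    // `Box<Path>` instead of `PathBuf`.
    let mut by_path: HashMap<Box<Path>, u32> = HashMap::new();
    by_path.insert(PathBuf::from("assets/shader.wgsl").into_boxed_path(), 2);

    assert_eq!(by_name.get("player"), Some(&1));
    assert_eq!(by_path.get(Path::new("assets/shader.wgsl")), Some(&2));
}
```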
---------
Co-authored-by: James Liu <contact@jamessliu.com>
# Objective
- Add the new `-Zcheck-cfg` checks to catch more warnings
- Fixes #12091
## Solution
- Create a new `cfg-check` to the CI that runs `cargo check -Zcheck-cfg
--workspace` using cargo nightly (and fails if there are warnings)
- Fix all warnings generated by the new check
---
## Changelog
- Remove all redundant imports
- Fix cfg wasm32 targets
- Add 3 dead code exceptions (should StandardColor be unused?)
- Convert ios_simulator to a feature (I'm not sure if this is the right
way to do it, but the check complained before)
## Migration Guide
No breaking changes
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Fixes #11977 - user defined shaders don't work in wasm
- After investigation, it won't work if the shader is not yet available
when compiling the pipeline on all platforms, for example if you load
many assets
## Solution
- Set the pipeline state to queued when it errors while waiting for the
shader, so that it's retried
This PR closes #11978
# Objective
Fix rendering on iOS Simulators.
The iOS Simulator doesn't support the CUBE_ARRAY_TEXTURES capability. Since
0.13, this made the iOS Simulator not render anything, with the following
message being output:
```
2024-02-19T14:59:34.896266Z ERROR bevy_render::render_resource::pipeline_cache: failed to create shader module: Validation Error
Caused by:
In Device::create_shader_module
Shader validation error:
Type [40] '' is invalid
Capability Capabilities(CUBE_ARRAY_TEXTURES) is required
```
## Solution
- Split up NO_ARRAY_TEXTURES_SUPPORT into both NO_ARRAY_TEXTURES_SUPPORT
and NO_CUBE_ARRAY_TEXTURES_SUPPORT and correctly apply
NO_CUBE_ARRAY_TEXTURES_SUPPORT for the iOS Simulator using the cfg flag
introduced in #10178.
---
## Changelog
### Fixed
- Rendering on iOS Simulator due to missing CUBE_ARRAY_TEXTURES support.
---------
Co-authored-by: Sam Pettersson <sam.pettersson@geoguessr.com>