f09c3a1a56
7 Commits
### 7b5e4e3be0: Allow images to be resized on the GPU without losing data (#19462)
# Objective

#19410 added support for resizing images "in place", meaning that their data was copied into the new texture allocation on the CPU. However, there are some scenarios where an image may be created and populated entirely on the GPU. Using this method would cause that data to disappear, as it wouldn't be copied into the new texture.

## Solution

When an image is resized in place and has no data in its asset, we opt into a new flag, `copy_on_resize`, which issues a `copy_texture_to_texture` command from the old allocation. To support this, the old asset is now passed to all `RenderAsset` implementations. This will be generally useful in the future for reducing things like buffer re-allocations.

## Testing

Tested using the example in the issue.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
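As a rough illustration of the `copy_texture_to_texture` step described in the solution above, here is a minimal sketch at the `wgpu` level (the helper name and the clamping of the copy region are assumptions; this is not the actual Bevy code):

```rust
use wgpu::{CommandEncoder, Extent3d, Texture};

// Hypothetical helper: when `copy_on_resize` applies and the image asset has
// no CPU-side data, preserve the existing GPU contents by copying the old
// allocation into the newly resized one.
fn copy_old_contents(encoder: &mut CommandEncoder, old_texture: &Texture, new_texture: &Texture) {
    // Copy only the region that fits in both allocations.
    let copy_size = Extent3d {
        width: old_texture.width().min(new_texture.width()),
        height: old_texture.height().min(new_texture.height()),
        depth_or_array_layers: old_texture
            .depth_or_array_layers()
            .min(new_texture.depth_or_array_layers()),
    };
    encoder.copy_texture_to_texture(
        old_texture.as_image_copy(),
        new_texture.as_image_copy(),
        copy_size,
    );
}
```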
### 9b32e09551: bevy_reflect: Add clone registrations project-wide (#18307)
# Objective

Now that #13432 has been merged, it's important we update our reflected types to properly opt into this feature. If we do not, then this could cause issues for users downstream who want to make use of reflection-based cloning.

## Solution

This PR is broken into 4 commits:

1. Add `#[reflect(Clone)]` on all types marked `#[reflect(opaque)]` that are also `Clone`. This is mandatory as these types would otherwise cause the cloning operation to fail for any type that contains it at any depth.
2. Update the reflection example to suggest adding `#[reflect(Clone)]` on opaque types.
3. Add `#[reflect(clone)]` attributes on all fields marked `#[reflect(ignore)]` that are also `Clone`. This prevents the ignored field from causing the cloning operation to fail. Note that some of the types that contain these fields are also `Clone`, and thus can be marked `#[reflect(Clone)]`. This makes the `#[reflect(clone)]` attribute redundant. However, I think it's safer to keep it marked in the case that the `Clone` impl/derive is ever removed. I'm open to removing them, though, if people disagree.
4. Finally, I added `#[reflect(Clone)]` on all types that are also `Clone`. While not strictly necessary, it enables us to reduce the generated output since we can just call `Clone::clone` directly instead of calling `PartialReflect::reflect_clone` on each variant/field. It also means we benefit from any optimizations or customizations made in the `Clone` impl, including directly dereferencing `Copy` values and increasing reference counters. Along with that change I also took the liberty of adding any missing registrations that I saw could be applied to the type as well, such as `Default`, `PartialEq`, and `Hash`. There were hundreds of these to edit, though, so it's possible I missed quite a few.

That last commit is **_massive_**. There were nearly 700 types to update. So it's recommended to review the first three before moving onto that last one. Additionally, I can break the last commit off into its own PR or into smaller PRs, but I figured this would be the easiest way of doing it (and in a timely manner since I unfortunately don't have as much time as I used to for code contributions).

## Testing

You can test locally with a `cargo check`:

```
cargo check --workspace --all-features
```
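For a concrete sense of what these registrations look like, here is a minimal sketch on hypothetical types (the type names and the exact attribute grouping are illustrative, not taken from the PR):

```rust
use bevy_reflect::Reflect;

// Opaque types that are `Clone` also register `Clone`; otherwise any type
// containing them would fail reflection-based cloning (commit 1 above).
#[derive(Reflect, Clone, Default)]
#[reflect(opaque)]
#[reflect(Clone, Default)]
struct CollisionMask(u64);

// Ignored fields that are `Clone` opt in with `#[reflect(clone)]` so they no
// longer cause `reflect_clone` to fail (commit 3 above); the container also
// registers `Clone` directly (commit 4 above).
#[derive(Reflect, Clone)]
#[reflect(Clone)]
struct Collider {
    radius: f32,
    #[reflect(ignore, clone)]
    cached_mask: CollisionMask,
}
```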
### 5adf831b42: Add a bindless mode to `AsBindGroup` (#16368)
This patch adds the infrastructure necessary for Bevy to support *bindless resources*, by adding a new `#[bindless]` attribute to `AsBindGroup`.

Classically, only a single texture (or sampler, or buffer) can be attached to each shader binding. This means that switching materials requires breaking a batch and issuing a new drawcall, even if the mesh is otherwise identical. This adds significant overhead not only in the driver but also in `wgpu`, as switching bind groups increases the amount of validation work that `wgpu` must do.

*Bindless resources* are the typical solution to this problem. Instead of switching bindings between each texture, the renderer instead supplies a large *array* of all textures in the scene up front, and the material contains an index into that array. This pattern is repeated for buffers and samplers as well. The renderer now no longer needs to switch binding descriptor sets while drawing the scene.

Unfortunately, as things currently stand, this approach won't quite work for Bevy. Two aspects of `wgpu` conspire to make this ideal approach unacceptably slow:

1. In the DX12 backend, all binding arrays (bindless resources) must have a constant size declared in the shader, and all textures in an array must be bound to actual textures. Changing the size requires a recompile.
2. Changing even one texture incurs revalidation of all textures, a process that takes time that's linear in the total size of the binding array.

This means that declaring a large array of textures big enough to encompass the entire scene is presently unacceptably slow. For example, if you declare 4096 textures, then `wgpu` will have to revalidate all 4096 textures if even a single one changes. This process can take multiple frames.

To work around this problem, this PR groups bindless resources into small *slabs* and maintains a free list for each. The size of each slab for the bindless arrays associated with a material is specified via the `#[bindless(N)]` attribute. For instance, consider the following declaration:

```rust
#[derive(AsBindGroup)]
#[bindless(16)]
struct MyMaterial {
    #[buffer(0)]
    color: Vec4,
    #[texture(1)]
    #[sampler(2)]
    diffuse: Handle<Image>,
}
```

The `#[bindless(N)]` attribute specifies that, if bindless arrays are supported on the current platform, each resource becomes a binding array of N instances of that resource. So, for `MyMaterial` above, the `color` attribute is exposed to the shader as `binding_array<vec4<f32>, 16>`, the `diffuse` texture is exposed to the shader as `binding_array<texture_2d<f32>, 16>`, and the `diffuse` sampler is exposed to the shader as `binding_array<sampler, 16>`.

Inside the material's vertex and fragment shaders, the applicable index is available via the `material_bind_group_slot` field of the `Mesh` structure. So, for instance, you can access the current color like so:

```wgsl
// `uniform` binding arrays are a non-sequitur, so `uniform` is automatically promoted
// to `storage` in bindless mode.
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
    ...
}
```

Note that portable shader code can't guarantee that the current platform supports bindless textures. Indeed, bindless mode is only available in Vulkan and DX12. The `BINDLESS` shader definition is available for your use to determine whether you're on a bindless platform or not. Thus a portable version of the shader above would look like:

```wgsl
#ifdef BINDLESS
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
#else   // BINDLESS
@group(2) @binding(0) var<uniform> material_color: Color;
#endif  // BINDLESS
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
#ifdef BINDLESS
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
#else   // BINDLESS
    let color = material_color;
#endif  // BINDLESS
    ...
}
```

Importantly, this PR *doesn't* update `StandardMaterial` to be bindless. So, for example, `scene_viewer` will currently not run any faster. I intend to update `StandardMaterial` to use bindless mode in a follow-up patch.

A new example, `shaders/shader_material_bindless`, has been added to demonstrate how to use this new feature.

Here's a Tracy profile of `submit_graph_commands` of this patch and an additional patch (not submitted yet) that makes `StandardMaterial` use bindless. Red is those patches; yellow is `main`. The scene was Bistro Exterior with a hack that forces all textures to opaque. You can see a 1.47x mean speedup.

## Migration Guide

* `RenderAssets::prepare_asset` now takes an `AssetId` parameter.
* Bin keys now have Bevy-specific material bind group indices instead of `wgpu` material bind group IDs, as part of the bindless change. Use the new `MaterialBindGroupAllocator` to map from bind group index to bind group ID.
### efda7f3f9c: Simpler lint fixes: makes ci lints work but disables a lint for now (#15376)
Takes the first two commits from #15375 and adds suggestions from this comment: https://github.com/bevyengine/bevy/pull/15375#issuecomment-2366968300

See #15375 for more reasoning/motivation.

## Rebasing (rerunning)

```sh
git switch simpler-lint-fixes
git reset --hard main
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "rustfmt"
cargo clippy --workspace --all-targets --all-features --fix
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "clippy"
git cherry-pick e6c0b94f6795222310fb812fa5c4512661fc7887
```
### 83356b12c9: bevy_reflect: Replace "value" terminology with "opaque" (#15240)
# Objective

Currently, the term "value" in the context of reflection is a bit overloaded.

For one, it can be used synonymously with "data" or "variable". An example sentence would be "this function takes a reflected value".

However, it is also used to refer to reflected types which are `ReflectKind::Value`. These types are usually either primitives, opaque types, or types that don't fall into any other `ReflectKind` (or perhaps could, but don't due to some limitation/difficulty). An example sentence would be "this function takes a reflected value type".

This makes it difficult to write good documentation or other learning material without causing some amount of confusion to readers. Ideally, we'd be able to move away from the `ReflectKind::Value` usage and come up with a better term.

## Solution

This PR replaces the terminology of "value" with "opaque" across `bevy_reflect`. This includes in documentation, type names, variant names, and macros.

The term "opaque" was chosen because that's essentially how the type is treated within the reflection API. In other words, its internal structure is hidden. All we can do is work with the type itself.

### Primitives

While primitives are not technically opaque types, I think it's still clearer to refer to them as "opaque" rather than keep the confusing "value" terminology.

We could consider adding another concept for primitives (e.g. `ReflectKind::Primitive`), but I'm not sure that provides a lot of benefit right now. In most circumstances, they'll be treated just like an opaque type. They would also likely use the same macro (or two copies of the same macro but with different names).

## Testing

You can test locally by running:

```
cargo test --package bevy_reflect --all-features
```

---

## Migration Guide

The reflection concept of "value type" has been replaced with a clearer "opaque type". The following renames have been made to account for this:

- `ReflectKind::Value` → `ReflectKind::Opaque`
- `ReflectRef::Value` → `ReflectRef::Opaque`
- `ReflectMut::Value` → `ReflectMut::Opaque`
- `ReflectOwned::Value` → `ReflectOwned::Opaque`
- `TypeInfo::Value` → `TypeInfo::Opaque`
- `ValueInfo` → `OpaqueInfo`
- `impl_reflect_value!` → `impl_reflect_opaque!`
- `impl_from_reflect_value!` → `impl_from_reflect_opaque!`

Additionally, declaring your own opaque types no longer uses `#[reflect_value]`. This attribute has been replaced by `#[reflect(opaque)]`:

```rust
// BEFORE
#[derive(Reflect)]
#[reflect_value(Default)]
struct MyOpaqueType(u32);

// AFTER
#[derive(Reflect)]
#[reflect(opaque)]
#[reflect(Default)]
struct MyOpaqueType(u32);
```

Note that the order in which `#[reflect(opaque)]` appears does not matter.
### 5eca832cee: Add convenience methods for constructing and setting storage buffer data (#15044)
Adds some methods to assist in building `ShaderStorageBuffer` without using `bytemuck`. We keep the `&[u8]` constructors, since this is still modeled as a thin wrapper around the buffer descriptor, but this should make it easier to interact with, at the cost of an extra allocation in the `ShaderType` path for the buffer writer.

Follow-up to #14663.
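A rough sketch of the two construction paths described above (the constructor and conversion names here are assumptions based on this description, not verified against the released API):

```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::storage::ShaderStorageBuffer;

fn setup(mut buffers: ResMut<Assets<ShaderStorageBuffer>>) {
    // Raw-bytes path: the caller handles layout/alignment themselves
    // (e.g. with `bytemuck`); the asset stays a thin descriptor wrapper.
    let _raw = buffers.add(ShaderStorageBuffer::new(
        &[0u8; 16],
        RenderAssetUsages::default(),
    ));

    // Assumed `ShaderType` path: the data goes through a buffer writer, so no
    // manual byte packing is needed, at the cost of one extra allocation.
    let colors: Vec<Vec4> = vec![Vec4::ONE; 4];
    let _typed = buffers.add(ShaderStorageBuffer::from(colors));
}
```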
### a4640046fc: Adds ShaderStorageBuffer asset (#14663)
Adds a new `Handle<Storage>` asset type that can be used as a render asset, particularly for use with `AsBindGroup`.

Closes: #13658

# Objective

Allow users to create storage buffers in the main world without having to access the `RenderDevice`. While this resource is technically available, it's bad form to use in the main world and requires mixing rendering details with main world code. Additionally, this makes storage buffers easier to use with `AsBindGroup`, particularly in the following scenarios:

- Sharing the same buffers between a compute stage and a material shader. We already have examples of this for storage textures (see the game of life example), and these changes allow a similar pattern to be used with storage buffers.
- Preventing repeated GPU uploads (see the previous, easier-to-use `Vec` `AsBindGroup` option).
- Allowing custom materials to be initialized using `Default`. Previously, the lack of a `Default` implementation for the raw `wgpu::Buffer` type made implementing an `AsBindGroup + Default` bound difficult in the presence of buffers.

## Solution

Adds a new `Handle<Storage>` asset type that is prepared into a `GpuStorageBuffer` render asset. This asset can either be initialized with a `Vec<u8>` of properly aligned data or with a size hint. Users can modify the underlying `wgpu::BufferDescriptor` to provide additional usage flags.

## Migration Guide

The `AsBindGroup` `storage` attribute has been modified to reference the new `Handle<Storage>` asset instead. Usages of `Vec` should be converted into assets instead.

---------

Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
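To illustrate the `AsBindGroup` usage described above, here is a minimal sketch of a material that binds a storage-buffer asset handle (the material name, shader path, and exact attribute arguments are assumptions for illustration):

```rust
use bevy::prelude::*;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};
use bevy::render::storage::ShaderStorageBuffer;

// Hypothetical material: the `storage` attribute points at a storage-buffer
// asset handle, so the same buffer can also be written by a compute pass or
// shared across materials without re-uploading the data. Deriving `Default`
// works because handles have a default value, unlike a raw `wgpu::Buffer`.
#[derive(Asset, TypePath, AsBindGroup, Clone, Default)]
struct MyStorageMaterial {
    #[storage(0, read_only)]
    values: Handle<ShaderStorageBuffer>,
}

impl Material for MyStorageMaterial {
    fn fragment_shader() -> ShaderRef {
        // Assumed shader path for this sketch.
        "shaders/my_storage_material.wgsl".into()
    }
}
```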