# Objective

Upgrade to `wgpu` version `25.0`.

Depends on https://github.com/bevyengine/naga_oil/pull/121

## Solution

### Problem

The biggest issue we face upgrading is the following requirement:

> To facilitate this change, there was an additional validation rule put in place: if there is a binding array in a bind group, you may not use dynamic offset buffers or uniform buffers in that bind group. This requirement comes from vulkan rules on UpdateAfterBind descriptors.

This is a major difficulty for us, as there are a number of binding arrays that are used in the view bind group. Note that this requirement does not merely affect uniform buffers that use dynamic offsets; it applies to the use of *any* uniform buffer in a bind group that also contains a binding array.

### Attempted fixes

The easiest fix would be to change uniforms to be storage buffers whenever binding arrays are in use:

```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
@group(0) @binding(0) var<storage> view: array<View>;
@group(0) @binding(1) var<storage> lights: array<types::Lights>;
#else
@group(0) @binding(0) var<uniform> view: View;
@group(0) @binding(1) var<uniform> lights: types::Lights;
#endif
```

This requires passing the view index to the shader so that we know where to index into the buffer:

```wgsl
struct PushConstants {
    view_index: u32,
}
var<push_constant> push_constants: PushConstants;
```

Using push constants is no problem because binding arrays are only usable on native anyway. However, this greatly complicates the ability to access `view` in shaders. For example:

```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
mesh_view_bindings::view[mesh_view_bindings::view_index].view_from_world[0].z
#else
mesh_view_bindings::view.view_from_world[0].z
#endif
```

This approach would work, but it would have the effect of polluting our shaders with ifdef spam basically *everywhere*.

Why not use a function? Unfortunately, the following is not valid wgsl, as it returns a binding directly from a function in the uniform path.

```wgsl
fn get_view() -> View {
#if BINDING_ARRAYS_ARE_USED
    let view_index = push_constants.view_index;
    let view = views[view_index];
#endif
    return view;
}
```

This also poses problems for things like lights, where we want to return a pointer to the light data. Returning pointers from wgsl functions isn't allowed, even if both bindings were buffers.

The next attempt was to simply use indexed buffers everywhere, in both the binding array and non-binding-array paths. This would be viable if push constants were available everywhere to pass the view index, but unfortunately they are not available on webgpu. This means either passing the view index in a storage buffer (not ideal for such a small amount of state) or using push constants sometimes and uniform buffers only on webgpu. However, this kind of conditional layout infects absolutely everything.

Even if we were to accept just using a storage buffer for the view index, there's also the additional problem that some dynamic offsets aren't actually per-view but per-use of a setting on a camera, which would require passing that uniform data on *every* camera regardless of whether that rendering feature is being used, which is also gross.

As such, although it's gross, the simplest solution is just to bump binding arrays into `@group(1)` and move all other bindings up one bind group. This should still keep us under the device limit of 4 bind groups for most users.
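To illustrate, the resulting split looks roughly like this (a minimal sketch only; the binding names and group contents here are placeholders, not Bevy's actual mesh view layout):

```wgsl
// Group 0: uniform / dynamic-offset bindings only, so they never share a
// bind group with a binding array and the new validation rule is satisfied.
@group(0) @binding(0) var<uniform> view: View;
@group(0) @binding(1) var<uniform> lights: types::Lights;

// Group 1: binding arrays (and other non-uniform resources) live here.
// `textures` and `nearest_sampler` are hypothetical names for this sketch.
@group(1) @binding(0) var textures: binding_array<texture_2d<f32>>;
@group(1) @binding(1) var nearest_sampler: sampler;
```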

### Next steps / looking towards the future

I'd like to avoid needing to split our view bind group into multiple parts. In the future, if `wgpu` were to add `@builtin(draw_index)`, we could build a list of draw state in gpu processing and avoid the need for any kind of state change at all (see https://github.com/gfx-rs/wgpu/issues/6823). This would also provide significantly more flexibility to handle things like offsets into other arrays that may not be per-view.

### Testing

Tested a number of examples; there are probably more that are still broken.

---------

Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>

# Bevy Reflect
This crate enables you to dynamically interact with Rust types:

- Derive the `Reflect` traits
- Interact with fields using their names (for named structs) or indices (for tuple structs)
- "Patch" your types with new values
- Look up nested fields using "path strings"
- Iterate over struct fields
- Automatically serialize and deserialize via Serde (without explicit serde impls)
- Trait "reflection"
## Features

### Derive the `Reflect` traits
```rust
// this will automatically implement the `Reflect` trait and the `Struct` trait (because the type is a struct)
#[derive(Reflect)]
struct Foo {
    a: u32,
    b: Bar,
    c: Vec<i32>,
    d: Vec<Baz>,
}

// this will automatically implement the `Reflect` trait and the `TupleStruct` trait (because the type is a tuple struct)
#[derive(Reflect)]
struct Bar(String);

#[derive(Reflect)]
struct Baz {
    value: f32,
}

// We will use this value to illustrate `bevy_reflect` features
let mut foo = Foo {
    a: 1,
    b: Bar("hello".to_string()),
    c: vec![1, 2],
    d: vec![Baz { value: 3.14 }],
};
```
### Interact with fields using their names
```rust
assert_eq!(*foo.get_field::<u32>("a").unwrap(), 1);
*foo.get_field_mut::<u32>("a").unwrap() = 2;
assert_eq!(foo.a, 2);
```
"Patch" your types with new values
```rust
let mut dynamic_struct = DynamicStruct::default();
dynamic_struct.insert("a", 42u32);
dynamic_struct.insert("c", vec![3, 4, 5]);

foo.apply(&dynamic_struct);

assert_eq!(foo.a, 42);
assert_eq!(foo.c, vec![3, 4, 5]);
```
### Look up nested fields using "path strings"
```rust
let value = *foo.get_path::<f32>("d[0].value").unwrap();
assert_eq!(value, 3.14);
```
### Iterate over struct fields
```rust
for (i, value) in foo.iter_fields().enumerate() {
    let field_name = foo.name_at(i).unwrap();
    if let Some(value) = value.downcast_ref::<u32>() {
        println!("{} is a u32 with the value: {}", field_name, *value);
    }
}
```
### Automatically serialize and deserialize via Serde (without explicit serde impls)
```rust
let mut registry = TypeRegistry::default();
registry.register::<u32>();
registry.register::<i32>();
registry.register::<f32>();
registry.register::<String>();
registry.register::<Bar>();
registry.register::<Baz>();

let serializer = ReflectSerializer::new(&foo, &registry);
let serialized = ron::ser::to_string_pretty(&serializer, ron::ser::PrettyConfig::default()).unwrap();

let mut deserializer = ron::de::Deserializer::from_str(&serialized).unwrap();
let reflect_deserializer = ReflectDeserializer::new(&registry);
let value = reflect_deserializer.deserialize(&mut deserializer).unwrap();
let dynamic_struct = value.take::<DynamicStruct>().unwrap();

assert!(foo.reflect_partial_eq(&dynamic_struct).unwrap());
```
Trait "reflection"
Call a trait on a given &dyn Reflect
reference without knowing the underlying type!
```rust
#[derive(Reflect)]
#[reflect(DoThing)]
struct MyType {
    value: String,
}

impl DoThing for MyType {
    fn do_thing(&self) -> String {
        format!("{} World!", self.value)
    }
}

#[reflect_trait]
pub trait DoThing {
    fn do_thing(&self) -> String;
}

// First, let's box our type as a Box<dyn Reflect>
let reflect_value: Box<dyn Reflect> = Box::new(MyType {
    value: "Hello".to_string(),
});

// This means we no longer have direct access to MyType or its methods. We can only call Reflect methods on reflect_value.
// What if we want to call `do_thing` on our type? We could downcast using reflect_value.downcast_ref::<MyType>(), but what if we
// don't know the type at compile time?

// Normally in rust we would be out of luck at this point. Let's use our new reflection powers to do something cool!
let mut type_registry = TypeRegistry::default();
type_registry.register::<MyType>();

// The #[reflect] attribute we put on our DoThing trait generated a new `ReflectDoThing` struct, which implements TypeData.
// This was added to MyType's TypeRegistration.
let reflect_do_thing = type_registry
    .get_type_data::<ReflectDoThing>(reflect_value.type_id())
    .unwrap();

// We can use this generated type to convert our `&dyn Reflect` reference to a `&dyn DoThing` reference
let my_trait: &dyn DoThing = reflect_do_thing.get(&*reflect_value).unwrap();

// Which means we can now call do_thing(). Magic!
println!("{}", my_trait.do_thing());

// This works because the #[reflect(MyTrait)] we put on MyType informed the Reflect derive to insert a new instance
// of ReflectDoThing into MyType's registration. The instance knows how to cast &dyn Reflect to &dyn DoThing, because it
// knows that &dyn Reflect should first be downcast to &MyType, which can then be safely cast to &dyn DoThing.
```
## Why make this?
The whole point of Rust is static safety! Why build something that makes it easy to throw it all away?

- Some problems are inherently dynamic (scripting, some types of serialization / deserialization)
- Sometimes the dynamic way is easier
- Sometimes the dynamic way puts less burden on your users to derive a bunch of traits (this was a big motivator for the Bevy project)