# Objective

Fixes a part of #14274.

Bevy has an incredibly inconsistent naming convention for its system sets, both internally and across the ecosystem.

<img alt="System sets in Bevy" src="https://github.com/user-attachments/assets/d16e2027-793f-4ba4-9cc9-e780b14a5a1b" width="450" />

*Names of public system set types in Bevy*

Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are also a few `FooSystems` and `FooSet` types. In ecosystem crates, on the other hand, `FooSet` is perhaps the most commonly used name overall. Such wildly inconsistent conventions can make it harder for users to pick names for their own types, to search for system sets on docs.rs, or even to discern which types *are* system sets.

To rein in the inconsistency a bit and help unify the ecosystem, it would be good to establish a common recommended naming convention for system sets in Bevy itself, similar to how plugins are commonly suffixed with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming convention in first-party Bevy, we can softly nudge ecosystem crates to follow suit (for types where it makes sense to do so).

Choosing a naming convention is also timely, as the [`bevy_cli` recently adopted lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce naming for plugins and system sets, and the recommended naming for system sets is still somewhat open.

## Which Name To Use?

Now the contentious part: which naming convention should we actually adopt?

This was discussed on the Bevy Discord at the end of last year, starting [here](<https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789>). `FooSet` and `FooSystems` were the clear favorites, with `FooSet` very narrowly winning an unofficial poll. However, it seems to me like the consensus was broadly moving towards `FooSystems` at the end and after the poll, with Cart ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708)) and later Alice ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533)) and also me being in favor of it.

Let's do a quick pros and cons list! Of course these are just what I thought of, so take them with a grain of salt.

`FooSet`:

- Pro: Nice and short!
- Pro: Used by many ecosystem crates.
- Pro: The `Set` suffix comes directly from the trait name `SystemSet`.
- Pro: Pairs nicely with existing APIs like `in_set` and `configure_sets`.
- Con: `Set` by itself doesn't actually indicate that it's related to systems *at all*, apart from the implemented trait. A set of what?
- Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex: `ContactSet`, `MeshSet`, `EnemySet`...

`FooSystems`:

- Pro: Very clearly indicates that the type represents a collection of systems. The actual core concept, system(s), is in the name.
- Pro: Parallels nicely with `FooPlugins` for plugin groups.
- Pro: Low risk of conflicts with other names or misunderstandings about what the type is.
- Pro: In most cases, reads *very* nicely and clearly. Ex: `PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and `AnimationSet`.
- Pro: Easy to search for on docs.rs.
- Con: Usually results in longer names.
- Con: Not yet as widely used.

Really, the big problem with `FooSet` is that it doesn't actually describe what it is. It describes what *kind of thing* it is (a set of something), but not *what it is a set of*, unless you know the type or check its docs or implemented traits.
`FooSystems`, on the other hand, is much more self-descriptive in this regard, at the cost of being a bit longer to type.

Ultimately, in some ways it comes down to preference and how you think of system sets. Personally, I was originally in favor of `FooSet`, but have been increasingly on the side of `FooSystems`, especially after seeing what the new names would actually look like in Avian and now Bevy. I prefer it because it usually reads better, is much more clearly related to groups of systems than `FooSet`, and overall *feels* more correct and natural to me in the long term.

For these reasons, and because Alice and Cart also seemed to share a preference for it when it was previously discussed, I propose that we adopt a `FooSystems` naming convention where applicable.

## Solution

Rename Bevy's system set types to use a consistent `FooSystems` naming where applicable.

- `AccessibilitySystem` → `AccessibilitySystems`
- `GizmoRenderSystem` → `GizmoRenderSystems`
- `PickSet` → `PickingSystems`
- `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems`
- `TransformSystem` → `TransformSystems`
- `RemoteSet` → `RemoteSystems`
- `RenderSet` → `RenderSystems`
- `SpriteSystem` → `SpriteSystems`
- `StateTransitionSteps` → `StateTransitionSystems`
- `RenderUiSystem` → `RenderUiSystems`
- `UiSystem` → `UiSystems`
- `Animation` → `AnimationSystems`
- `AssetEvents` → `AssetEventSystems`
- `TrackAssets` → `AssetTrackingSystems`
- `UpdateGizmoMeshes` → `GizmoMeshSystems`
- `InputSystem` → `InputSystems`
- `InputFocusSet` → `InputFocusSystems`
- `ExtractMaterialsSet` → `MaterialExtractionSystems`
- `ExtractMeshesSet` → `MeshExtractionSystems`
- `RumbleSystem` → `RumbleSystems`
- `CameraUpdateSystem` → `CameraUpdateSystems`
- `ExtractAssetsSet` → `AssetExtractionSystems`
- `Update2dText` → `Text2dUpdateSystems`
- `TimeSystem` → `TimeSystems`
- `AudioPlaySet` → `AudioPlaybackSystems`
- `SendEvents` → `EventSenderSystems`
- `EventUpdates` → `EventUpdateSystems`

A lot of the names got slightly longer, but they are also a lot more consistent, and in my opinion the majority of them read much better. For a few of the names I took the liberty of rewording things a bit; definitely open to any further naming improvements. For user code, only the set's name changes; scheduling APIs like `in_set` are untouched (see the short sketch after the Todo list below).

There are still cases where the `FooSystems` naming doesn't really make sense, and those I left alone. This primarily includes system sets like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`, or `TransitionSchedules<S>`, where the type has some special purpose and semantics.

## Todo

- [x] Should I keep all the old names as deprecated type aliases? I can do this, but to avoid wasting work I'd prefer to first reach consensus on whether these renames are even desired.
- [x] Migration guide
- [x] Release notes
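As a rough illustration (not taken from the PR diff), here's what the rename looks like at a typical call site, using `RenderSet` → `RenderSystems` as the example. The `prepare_my_resources` system and the deprecated alias at the end are hypothetical, sketching what keeping the old names around *could* look like, not necessarily what this PR ships:

```rust
use bevy_app::App;
use bevy_ecs::schedule::IntoScheduleConfigs;
use bevy_render::{Render, RenderApp, RenderSystems};

/// Hypothetical user system that prepares some render resources.
fn prepare_my_resources() {}

fn configure(app: &mut App) {
    let render_app = app.sub_app_mut(RenderApp);
    // Before: `prepare_my_resources.in_set(RenderSet::PrepareResources)`
    // After: only the set type's name changes; `in_set`, `configure_sets`,
    // ordering constraints, etc. all stay exactly the same.
    render_app.add_systems(
        Render,
        prepare_my_resources.in_set(RenderSystems::PrepareResources),
    );
}

// One possible shape for keeping an old name as a deprecated alias (a sketch only):
#[deprecated(note = "renamed to `RenderSystems`")]
pub type RenderSet = RenderSystems;
```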
---

*`bevy_render`'s GPU readback module (Rust, 386 lines, 13 KiB), shown with the renamed `RenderSystems` set in use:*
```rust
use crate::{
    extract_component::ExtractComponentPlugin,
    render_asset::RenderAssets,
    render_resource::{
        Buffer, BufferUsages, CommandEncoder, Extent3d, TexelCopyBufferLayout, Texture,
        TextureFormat,
    },
    renderer::{render_system, RenderDevice},
    storage::{GpuShaderStorageBuffer, ShaderStorageBuffer},
    sync_world::MainEntity,
    texture::GpuImage,
    ExtractSchedule, MainWorld, Render, RenderApp, RenderSystems,
};
use async_channel::{Receiver, Sender};
use bevy_app::{App, Plugin};
use bevy_asset::Handle;
use bevy_derive::{Deref, DerefMut};
use bevy_ecs::schedule::IntoScheduleConfigs;
use bevy_ecs::{
    change_detection::ResMut,
    entity::Entity,
    event::Event,
    prelude::{Component, Resource, World},
    system::{Query, Res},
};
use bevy_image::{Image, TextureFormatPixelInfo};
use bevy_platform::collections::HashMap;
use bevy_reflect::Reflect;
use bevy_render_macros::ExtractComponent;
use encase::internal::ReadFrom;
use encase::private::Reader;
use encase::ShaderType;
use tracing::warn;

/// A plugin that enables reading back gpu buffers and textures to the cpu.
pub struct GpuReadbackPlugin {
    /// Describes the number of frames a buffer can be unused before it is removed from the pool in
    /// order to avoid unnecessary reallocations.
    max_unused_frames: usize,
}

impl Default for GpuReadbackPlugin {
    fn default() -> Self {
        Self {
            max_unused_frames: 10,
        }
    }
}

impl Plugin for GpuReadbackPlugin {
    fn build(&self, app: &mut App) {
        app.add_plugins(ExtractComponentPlugin::<Readback>::default());

        if let Some(render_app) = app.get_sub_app_mut(RenderApp) {
            render_app
                .init_resource::<GpuReadbackBufferPool>()
                .init_resource::<GpuReadbacks>()
                .insert_resource(GpuReadbackMaxUnusedFrames(self.max_unused_frames))
                .add_systems(ExtractSchedule, sync_readbacks.ambiguous_with_all())
                .add_systems(
                    Render,
                    (
                        prepare_buffers.in_set(RenderSystems::PrepareResources),
                        map_buffers
                            .after(render_system)
                            .in_set(RenderSystems::Render),
                    ),
                );
        }
    }
}

/// A component that registers the wrapped handle for gpu readback, either a texture or a buffer.
///
/// Data is read asynchronously and will be triggered on the entity via the [`ReadbackComplete`] event
/// when complete. If this component is not removed, the readback will be attempted every frame.
#[derive(Component, ExtractComponent, Clone, Debug)]
pub enum Readback {
    Texture(Handle<Image>),
    Buffer(Handle<ShaderStorageBuffer>),
}

impl Readback {
    /// Create a readback component for a texture using the given handle.
    pub fn texture(image: Handle<Image>) -> Self {
        Self::Texture(image)
    }

    /// Create a readback component for a buffer using the given handle.
    pub fn buffer(buffer: Handle<ShaderStorageBuffer>) -> Self {
        Self::Buffer(buffer)
    }
}

/// An event that is triggered when a gpu readback is complete.
///
/// The event contains the data as a `Vec<u8>`, which can be interpreted as the raw bytes of the
/// requested buffer or texture.
#[derive(Event, Deref, DerefMut, Reflect, Debug)]
#[reflect(Debug)]
pub struct ReadbackComplete(pub Vec<u8>);

impl ReadbackComplete {
    /// Convert the raw bytes of the event to a shader type.
    pub fn to_shader_type<T: ShaderType + ReadFrom + Default>(&self) -> T {
        let mut val = T::default();
        let mut reader = Reader::new::<T>(&self.0, 0).expect("Failed to create Reader");
        T::read_from(&mut val, &mut reader);
        val
    }
}

#[derive(Resource)]
struct GpuReadbackMaxUnusedFrames(usize);

struct GpuReadbackBuffer {
    buffer: Buffer,
    taken: bool,
    frames_unused: usize,
}

#[derive(Resource, Default)]
struct GpuReadbackBufferPool {
    // Map of buffer size to list of buffers, with a flag for whether the buffer is taken and how
    // many frames it has been unused for.
    // TODO: We could ideally write all readback data to one big buffer per frame, the assumption
    // here is that very few entities will actually be read back at once, and their size is
    // unlikely to change.
    buffers: HashMap<u64, Vec<GpuReadbackBuffer>>,
}

impl GpuReadbackBufferPool {
    fn get(&mut self, render_device: &RenderDevice, size: u64) -> Buffer {
        let buffers = self.buffers.entry(size).or_default();

        // Find an untaken buffer for this size.
        if let Some(buf) = buffers.iter_mut().find(|x| !x.taken) {
            buf.taken = true;
            buf.frames_unused = 0;
            return buf.buffer.clone();
        }

        let buffer = render_device.create_buffer(&wgpu::BufferDescriptor {
            label: Some("Readback Buffer"),
            size,
            usage: BufferUsages::COPY_DST | BufferUsages::MAP_READ,
            mapped_at_creation: false,
        });
        buffers.push(GpuReadbackBuffer {
            buffer: buffer.clone(),
            taken: true,
            frames_unused: 0,
        });
        buffer
    }

    // Returns the buffer to the pool so it can be used in a future frame.
    fn return_buffer(&mut self, buffer: &Buffer) {
        let size = buffer.size();
        let buffers = self
            .buffers
            .get_mut(&size)
            .expect("Returned buffer of untracked size");
        if let Some(buf) = buffers.iter_mut().find(|x| x.buffer.id() == buffer.id()) {
            buf.taken = false;
        } else {
            warn!("Returned buffer that was not allocated");
        }
    }

    fn update(&mut self, max_unused_frames: usize) {
        for (_, buffers) in &mut self.buffers {
            // Tick all the buffers.
            for buf in &mut *buffers {
                if !buf.taken {
                    buf.frames_unused += 1;
                }
            }

            // Remove buffers that haven't been used for `max_unused_frames`.
            buffers.retain(|x| x.frames_unused < max_unused_frames);
        }

        // Remove empty buffer sizes.
        self.buffers.retain(|_, buffers| !buffers.is_empty());
    }
}

enum ReadbackSource {
    Texture {
        texture: Texture,
        layout: TexelCopyBufferLayout,
        size: Extent3d,
    },
    Buffer {
        src_start: u64,
        dst_start: u64,
        buffer: Buffer,
    },
}

#[derive(Resource, Default)]
struct GpuReadbacks {
    requested: Vec<GpuReadback>,
    mapped: Vec<GpuReadback>,
}

struct GpuReadback {
    pub entity: Entity,
    pub src: ReadbackSource,
    pub buffer: Buffer,
    pub rx: Receiver<(Entity, Buffer, Vec<u8>)>,
    pub tx: Sender<(Entity, Buffer, Vec<u8>)>,
}

/// Receives completed readbacks, triggers [`ReadbackComplete`] on the corresponding main world
/// entities, returns finished buffers to the pool, and ages out unused pooled buffers.
fn sync_readbacks(
    mut main_world: ResMut<MainWorld>,
    mut buffer_pool: ResMut<GpuReadbackBufferPool>,
    mut readbacks: ResMut<GpuReadbacks>,
    max_unused_frames: Res<GpuReadbackMaxUnusedFrames>,
) {
    readbacks.mapped.retain(|readback| {
        if let Ok((entity, buffer, result)) = readback.rx.try_recv() {
            main_world.trigger_targets(ReadbackComplete(result), entity);
            buffer_pool.return_buffer(&buffer);
            false
        } else {
            true
        }
    });

    buffer_pool.update(max_unused_frames.0);
}

/// Queues a readback request for every entity with a [`Readback`] component, allocating (or
/// reusing) a destination buffer from the pool for each one.
fn prepare_buffers(
    render_device: Res<RenderDevice>,
    mut readbacks: ResMut<GpuReadbacks>,
    mut buffer_pool: ResMut<GpuReadbackBufferPool>,
    gpu_images: Res<RenderAssets<GpuImage>>,
    ssbos: Res<RenderAssets<GpuShaderStorageBuffer>>,
    handles: Query<(&MainEntity, &Readback)>,
) {
    for (entity, readback) in handles.iter() {
        match readback {
            Readback::Texture(image) => {
                if let Some(gpu_image) = gpu_images.get(image) {
                    let layout = layout_data(gpu_image.size, gpu_image.texture_format);
                    let buffer = buffer_pool.get(
                        &render_device,
                        get_aligned_size(
                            gpu_image.size,
                            gpu_image.texture_format.pixel_size() as u32,
                        ) as u64,
                    );
                    let (tx, rx) = async_channel::bounded(1);
                    readbacks.requested.push(GpuReadback {
                        entity: entity.id(),
                        src: ReadbackSource::Texture {
                            texture: gpu_image.texture.clone(),
                            layout,
                            size: gpu_image.size,
                        },
                        buffer,
                        rx,
                        tx,
                    });
                }
            }
            Readback::Buffer(buffer) => {
                if let Some(ssbo) = ssbos.get(buffer) {
                    let size = ssbo.buffer.size();
                    let buffer = buffer_pool.get(&render_device, size);
                    let (tx, rx) = async_channel::bounded(1);
                    readbacks.requested.push(GpuReadback {
                        entity: entity.id(),
                        src: ReadbackSource::Buffer {
                            src_start: 0,
                            dst_start: 0,
                            buffer: ssbo.buffer.clone(),
                        },
                        buffer,
                        rx,
                        tx,
                    });
                }
            }
        }
    }
}

/// Encodes the copy commands that move each requested texture or buffer into its pooled
/// readback buffer.
pub(crate) fn submit_readback_commands(world: &World, command_encoder: &mut CommandEncoder) {
    let readbacks = world.resource::<GpuReadbacks>();
    for readback in &readbacks.requested {
        match &readback.src {
            ReadbackSource::Texture {
                texture,
                layout,
                size,
            } => {
                command_encoder.copy_texture_to_buffer(
                    texture.as_image_copy(),
                    wgpu::TexelCopyBufferInfo {
                        buffer: &readback.buffer,
                        layout: *layout,
                    },
                    *size,
                );
            }
            ReadbackSource::Buffer {
                src_start,
                dst_start,
                buffer,
            } => {
                command_encoder.copy_buffer_to_buffer(
                    buffer,
                    *src_start,
                    &readback.buffer,
                    *dst_start,
                    buffer.size(),
                );
            }
        }
    }
}

/// Move requested readbacks to mapped readbacks after commands have been submitted in the render system.
fn map_buffers(mut readbacks: ResMut<GpuReadbacks>) {
    let requested = readbacks.requested.drain(..).collect::<Vec<GpuReadback>>();
    for readback in requested {
        let slice = readback.buffer.slice(..);
        let entity = readback.entity;
        let buffer = readback.buffer.clone();
        let tx = readback.tx.clone();
        slice.map_async(wgpu::MapMode::Read, move |res| {
            res.expect("Failed to map buffer");
            let buffer_slice = buffer.slice(..);
            let data = buffer_slice.get_mapped_range();
            let result = Vec::from(&*data);
            drop(data);
            buffer.unmap();
            if let Err(e) = tx.try_send((entity, buffer, result)) {
                warn!("Failed to send readback result: {}", e);
            }
        });
        readbacks.mapped.push(readback);
    }
}

// Utils

/// Round up a given value to be a multiple of [`wgpu::COPY_BYTES_PER_ROW_ALIGNMENT`].
pub(crate) const fn align_byte_size(value: u32) -> u32 {
    RenderDevice::align_copy_bytes_per_row(value as usize) as u32
}

/// Get the size of an image when the size of each row has been rounded up to [`wgpu::COPY_BYTES_PER_ROW_ALIGNMENT`].
pub(crate) const fn get_aligned_size(extent: Extent3d, pixel_size: u32) -> u32 {
    extent.height * align_byte_size(extent.width * pixel_size) * extent.depth_or_array_layers
}

/// Get a [`TexelCopyBufferLayout`] aligned such that the image can be copied into a buffer.
pub(crate) fn layout_data(extent: Extent3d, format: TextureFormat) -> TexelCopyBufferLayout {
    TexelCopyBufferLayout {
        bytes_per_row: if extent.height > 1 || extent.depth_or_array_layers > 1 {
            // 1 = 1 row
            Some(get_aligned_size(
                Extent3d {
                    width: extent.width,
                    height: 1,
                    depth_or_array_layers: 1,
                },
                format.pixel_size() as u32,
            ))
        } else {
            None
        },
        rows_per_image: if extent.depth_or_array_layers > 1 {
            let (_, block_dimension_y) = format.block_dimensions();
            Some(extent.height / block_dimension_y)
        } else {
            None
        },
        offset: 0,
    }
}
```
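For context on how this module is typically driven from the main world, here is a minimal sketch modeled on Bevy's `gpu_readback` example at roughly this revision. The buffer contents and the `setup_readback` system name are assumptions for illustration:

```rust
use bevy::prelude::*;
use bevy::render::gpu_readback::{Readback, ReadbackComplete};
use bevy::render::render_resource::BufferUsages;
use bevy::render::storage::ShaderStorageBuffer;

fn setup_readback(mut commands: Commands, mut buffers: ResMut<Assets<ShaderStorageBuffer>>) {
    // Create a storage buffer to read back. It needs COPY_SRC usage, since
    // `submit_readback_commands` issues a buffer-to-buffer copy from it.
    let mut buffer = ShaderStorageBuffer::from(vec![0u32; 16]);
    buffer.buffer_description.usage |= BufferUsages::COPY_SRC;
    let handle = buffers.add(buffer);

    // Spawning `Readback` requests a readback every frame until the component is
    // removed; each result arrives as a `ReadbackComplete` event on the entity.
    commands
        .spawn(Readback::buffer(handle))
        .observe(|trigger: Trigger<ReadbackComplete>| {
            // `ReadbackComplete` derefs to the raw bytes; `to_shader_type`
            // reinterprets them as a shader-compatible type.
            let data: Vec<u32> = trigger.event().to_shader_type();
            info!("readback: {data:?}");
        });
}
```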