Commit Graph

315 Commits

Author SHA1 Message Date
atlv
b62b14c293
Add UVec to_extents helper method (#19807)
# Objective

- Simplify a common use case

## Solution

- Helper trait
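
For context, a minimal sketch of what such a helper could look like (trait name, method body, and import paths are assumptions, not taken from the diff):

```rust
use bevy_math::UVec2;
use bevy_render::render_resource::Extent3d;

// Hypothetical helper trait: turn a 2D size into a wgpu `Extent3d` with a
// single texture layer, instead of spelling out the struct at every call site.
pub trait ToExtents {
    fn to_extents(self) -> Extent3d;
}

impl ToExtents for UVec2 {
    fn to_extents(self) -> Extent3d {
        Extent3d {
            width: self.x,
            height: self.y,
            depth_or_array_layers: 1,
        }
    }
}
```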
2025-06-26 20:53:49 +00:00
charlotte 🌸
92e65d5eb1
Upgrade to Rust 1.88 (#19825) 2025-06-26 19:38:19 +00:00
Aevyrie
3cefe82aff
Projection Improvements (#18458)
# Objective

- Remove a component impl footgun
- Make projection code slightly nicer, and remove the need to import the
projection trait when using the methods on `Projection`.

## Solution

- Do the things.
2025-06-24 03:26:38 +00:00
atlv
2915a3b903
rename GlobalTransform::compute_matrix to to_matrix (#19643)
# Objective

- `compute_matrix` doesn't compute anything; it just puts an `Affine3A` into
a `Mat4`. The name is inaccurate.

## Solution

- rename it to conform with to_isometry (which, ironically, does compute
a decomposition which is rather expensive)
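
For context, a rough sketch of the conversion in question (not the exact upstream code):

```rust
use bevy_math::{Affine3A, Mat4};

// The "computation" is just widening the stored affine into a Mat4,
// hence the rename from compute_matrix to to_matrix.
fn to_matrix(affine: Affine3A) -> Mat4 {
    Mat4::from(affine)
}
```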

## Testing

- It's a rename. If it compiles, it's good to go.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2025-06-17 18:37:26 +00:00
Joona Aalto
38c3423693
Event Split: Event, EntityEvent, and BufferedEvent (#19647)
# Objective

Closes #19564.

The current `Event` trait looks like this:

```rust
pub trait Event: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
    
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

The `Event` trait is used by both buffered events
(`EventReader`/`EventWriter`) and observer events. If they are observer
events, they can optionally be targeted at specific `Entity`s or
`ComponentId`s, and can even be propagated to other entities.

However, there has long been a desire to split the trait semantically
for a variety of reasons, see #14843, #14272, and #16031 for discussion.
Some reasons include:

- It's very uncommon to use a single event type as both a buffered event
and targeted observer event. They are used differently and tend to have
distinct semantics.
- A common footgun is using buffered events with observers or event
readers with observer events, as there is no type-level error that
prevents this kind of misuse.
- #19440 made `Trigger::target` return an `Option<Entity>`. This
*seriously* hurts ergonomics for the general case of entity observers,
as you need to `.unwrap()` each time. If we could statically determine
whether the event is expected to have an entity target, this would be
unnecessary.

There are really two main ways that we can categorize events: push vs.
pull (i.e. "observer event" vs. "buffered event") and global vs.
targeted:

|              | Push            | Pull                        |
| ------------ | --------------- | --------------------------- |
| **Global**   | Global observer | `EventReader`/`EventWriter` |
| **Targeted** | Entity observer | -                           |

There are many ways to approach this, each with their tradeoffs.
Ultimately, we kind of want to split events both ways:

- A type-level distinction between observer events and buffered events,
to prevent people from using the wrong kind of event in APIs
- A statically designated entity target for observer events to avoid
accidentally using untargeted events for targeted APIs

This PR achieves these goals by splitting event traits into `Event`,
`EntityEvent`, and `BufferedEvent`, with `Event` being the shared trait
implemented by all events.

## `Event`, `EntityEvent`, and `BufferedEvent`

`Event` is now a very simple trait shared by all events.

```rust
pub trait Event: Send + Sync + 'static {
    // Required for observer APIs
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

You can call `trigger` for *any* event, and use a global observer for
listening to the event.

```rust
#[derive(Event)]
struct Speak {
    message: String,
}

// ...

app.add_observer(|trigger: On<Speak>| {
    println!("{}", trigger.message);
});

// ...

commands.trigger(Speak {
    message: "Y'all like these reworked events?".to_string(),
});
```

To allow an event to be targeted at entities and even propagated
further, you can additionally implement the `EntityEvent` trait:

```rust
pub trait EntityEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This lets you call `trigger_targets` and use targeted observer APIs
like `EntityCommands::observe`:

```rust
#[derive(Event, EntityEvent)]
#[entity_event(traversal = &'static ChildOf, auto_propagate)]
struct Damage {
    amount: f32,
}

// ...

let enemy = commands.spawn((Enemy, Health(100.0))).id();

// Spawn some armor as a child of the enemy entity.
// When the armor takes damage, it will bubble the event up to the enemy.
let armor_piece = commands
    .spawn((ArmorPiece, Health(25.0), ChildOf(enemy)))
    .observe(|trigger: On<Damage>, mut query: Query<&mut Health>| {
        // Note: `On::target` only exists because this is an `EntityEvent`.
        let mut health = query.get(trigger.target()).unwrap();
        health.0 -= trigger.amount();
    });

commands.trigger_targets(Damage { amount: 10.0 }, armor_piece);
```

> [!NOTE]
> You *can* still also trigger an `EntityEvent` without targets using
> `trigger`. We probably *could* make this an either-or thing, but I'm not
> sure that's actually desirable.

To allow an event to be used with the buffered API, you can implement
`BufferedEvent`:

```rust
pub trait BufferedEvent: Event {}
```

The event can then be used with `EventReader`/`EventWriter`:

```rust
#[derive(Event, BufferedEvent)]
struct Message(String);

fn write_hello(mut writer: EventWriter<Message>) {
    writer.write(Message("I hope these examples are alright".to_string()));
}

fn read_messages(mut reader: EventReader<Message>) {
    // Process all buffered events of type `Message`.
    for Message(message) in reader.read() {
        println!("{message}");
    }
}
```

In summary:

- Need a basic event you can trigger and observe? Derive `Event`!
- Need the event to be targeted at an entity? Derive `EntityEvent`!
- Need the event to be buffered and support the
`EventReader`/`EventWriter` API? Derive `BufferedEvent`!
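
Put together, a single type can opt into every API by deriving all three traits (a minimal sketch following the derives shown above; `Collision` is a made-up example):

```rust
use bevy_ecs::prelude::*;

// Observable via `trigger`, targetable via `trigger_targets`, and readable
// via `EventReader`, because all three traits are derived.
#[derive(Event, EntityEvent, BufferedEvent)]
struct Collision {
    impulse: f32,
}
```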

## Alternatives

I'll now cover some of the alternative approaches I have considered and
briefly explored. I made this section collapsible since it ended up
being quite long :P

<details>

<summary>Expand this to see alternatives</summary>

### 1. Unified `Event` Trait

One option is not to have *three* separate traits (`Event`,
`EntityEvent`, `BufferedEvent`), and to instead just use associated
constants on `Event` to determine whether an event supports targeting
and buffering or not:

```rust
pub trait Event: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
    const TARGETED: bool = false;
    const BUFFERED: bool = false;
    
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

Methods can then use bounds like `where E: Event<TARGETED = true>` or
`where E: Event<BUFFERED = true>` to limit APIs to specific kinds of
events.

This would keep everything under one `Event` trait, but I don't think
it's necessarily a good idea. It makes APIs harder to read, and docs
can't easily refer to specific types of events. You can also create
weird invariants: what if you specify `TARGETED = false`, but have
`Traversal` and/or `AUTO_PROPAGATE` enabled?

### 2. `Event` and `Trigger`

Another option is to only split the traits between buffered events and
observer events, since that is the main thing people have been asking
for, and they have the largest API difference.

If we did this, I think we would need to make the terms *clearly*
separate. We can't really use `Event` and `BufferedEvent` as the names,
since it would be strange that `BufferedEvent` doesn't implement
`Event`. Something like `ObserverEvent` and `BufferedEvent` could work,
but it'd be more verbose.

For this approach, I would instead keep `Event` for the current
`EventReader`/`EventWriter` API, and call the observer event a
`Trigger`, since the "trigger" terminology is already used in the
observer context within Bevy (both as a noun and a verb). This is also
what a long [bikeshed on
Discord](https://discord.com/channels/691052431525675048/749335865876021248/1298057661878898791)
seemed to land on at the end of last year.

```rust
// For `EventReader`/`EventWriter`
pub trait Event: Send + Sync + 'static {}

// For observers
pub trait Trigger: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
    const TARGETED: bool = false;
    
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

The problem is that "event" is just a really good term for something
that "happens". Observers are rapidly becoming the more prominent API,
so it'd be weird to give them the `Trigger` name and leave the good
`Event` name for the less common API.

So, even though a split like this seems neat on the surface, I think it
ultimately wouldn't really work. We want to keep the `Event` name for
observer events, and there is no good alternative for the buffered
variant. (`Message` was suggested, but saying stuff like "sends a
collision message" is weird.)

### 3. `GlobalEvent` + `TargetedEvent`

What if instead of focusing on the buffered vs. observed split, we
*only* make a distinction between global and targeted events?

```rust
// A shared event trait to allow global observers to work
pub trait Event: Send + Sync + 'static {
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}

// For buffered events and non-targeted observer events
pub trait GlobalEvent: Event {}

// For targeted observer events
pub trait TargetedEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This is actually the first approach I implemented, and it has the neat
characteristic that you can only use non-targeted APIs like `trigger`
with a `GlobalEvent` and targeted APIs like `trigger_targets` with a
`TargetedEvent`. You have full control over whether the event should or
should not have a target, as the two are fully distinct at the type level.

However, there's a few problems:

- There is no type-level indication of whether a `GlobalEvent` supports
buffered events or just non-targeted observer events
- An `Event` on its own does literally nothing, it's just a shared trait
required to make global observers accept both non-targeted and targeted
events
- If an event is both a `GlobalEvent` and `TargetedEvent`, global
observers again have ambiguity on whether an event has a target or not,
undermining some of the benefits
- The names are not ideal

### 4. `Event` and `EntityEvent`

We can fix some of the problems of Alternative 3 by accepting that
targeted events can also be used in non-targeted contexts, and simply
having the `Event` and `EntityEvent` traits:

```rust
// For buffered events and non-targeted observer events
pub trait Event: Send + Sync + 'static {
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}

// For targeted observer events
pub trait EntityEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This is essentially identical to this PR, just without a dedicated
`BufferedEvent`. The remaining major "problem" is that there is still
zero type-level indication of whether an `Event` event *actually*
supports the buffered API. This leads us to the solution proposed in
this PR, using `Event`, `EntityEvent`, and `BufferedEvent`.

</details>

## Conclusion

The `Event` + `EntityEvent` + `BufferedEvent` split proposed in this PR
aims to solve all the common problems with Bevy's current event model
while keeping the "weirdness" factor minimal. It splits in terms of both
the push vs. pull *and* global vs. targeted aspects, while maintaining a
shared concept for an "event".

### Why I Like This

- The term "event" remains as a single concept for all the different
kinds of events in Bevy.
- Despite all event types being "events", they use fundamentally
different APIs. Instead of assuming that you can use an event type with
any pattern (when only one is typically supported), you explicitly opt
in to each one with dedicated traits.
- Using separate traits for each type of event helps with documentation
and clearer function signatures.
- I can safely make assumptions on expected usage.
- If I see that an event is an `EntityEvent`, I can assume that I can
use `observe` on it and get targeted events.
- If I see that an event is a `BufferedEvent`, I can assume that I can
use `EventReader` to read events.
- If I see both `EntityEvent` and `BufferedEvent`, I can assume that
both APIs are supported.

In summary: This allows for a unified concept for events, while limiting
the different ways to use them with opt-in traits. No more guess-work
involved when using APIs.

### Problems?

- Because `BufferedEvent` implements `Event` (for more consistent
semantics etc.), you can still use all buffered events for non-targeted
observers. I think this is fine/good. The important part is that if you
see that an event implements `BufferedEvent`, you know that the
`EventReader`/`EventWriter` API should be supported. Whether it *also*
supports other APIs is secondary.
- I currently only support `trigger_targets` for an `EntityEvent`.
However, you can technically target components too, without targeting
any entities. I consider that such a niche and advanced use case that
it's not a huge problem to only support it for `EntityEvent`s, but we
could also split `trigger_targets` into `trigger_entities` and
`trigger_components` if we wanted to (or implement components as
entities :P).
- You can still trigger an `EntityEvent` *without* targets. I consider
this correct, since `Event` implements the non-targeted behavior, and
it'd be weird if implementing another trait *removed* behavior. However,
it does mean that global observers for entity events can technically
return `Entity::PLACEHOLDER` again (since I got rid of the
`Option<Entity>` added in #19440 for ergonomics). I think that's enough
of an edge case that it's not a huge problem, but it is worth keeping in
mind.
- ~~Deriving both `EntityEvent` and `BufferedEvent` for the same type
currently duplicates the `Event` implementation, so you instead need to
manually implement one of them.~~ Changed to always requiring `Event` to
be derived.

## Related Work

There are plans to implement multi-event support for observers,
especially for UI contexts. [Cart's
example](https://github.com/bevyengine/bevy/issues/14649#issuecomment-2960402508)
API looked like this:

```rust
// Truncated for brevity
trigger: Trigger<(
    OnAdd<Pressed>,
    OnRemove<Pressed>,
    OnAdd<InteractionDisabled>,
    OnRemove<InteractionDisabled>,
    OnInsert<Hovered>,
)>,
```

I believe this shouldn't be in conflict with this PR. If anything, this
PR might *help* achieve the multi-event pattern for entity observers
with fewer footguns: by statically enforcing that all of these events
are `EntityEvent`s in the context of `EntityCommands::observe`, we can
avoid misuse or weird cases where *some* events inside the trigger are
targeted while others are not.
2025-06-15 16:46:34 +00:00
Joona Aalto
e5dc177b4b
Rename Trigger to On (#19596)
# Objective

Currently, the observer API looks like this:

```rust
app.add_observer(|trigger: Trigger<Explode>| {
    info!("Entity {} exploded!", trigger.target());
});
```

Future plans for observers also include "multi-event observers" with a
trigger that looks like this (see [Cart's
example](https://github.com/bevyengine/bevy/issues/14649#issuecomment-2960402508)):

```rust
trigger: Trigger<(
    OnAdd<Pressed>,
    OnRemove<Pressed>,
    OnAdd<InteractionDisabled>,
    OnRemove<InteractionDisabled>,
    OnInsert<Hovered>,
)>,
```

In scenarios like this, there is a lot of repetition of `On`. These are
expected to be very high-traffic APIs especially in UI contexts, so
ergonomics and readability are critical.

By renaming `Trigger` to `On`, we can make these APIs read more cleanly
and get rid of the repetition:

```rust
app.add_observer(|trigger: On<Explode>| {
    info!("Entity {} exploded!", trigger.target());
});
```

```rust
trigger: On<(
    Add<Pressed>,
    Remove<Pressed>,
    Add<InteractionDisabled>,
    Remove<InteractionDisabled>,
    Insert<Hovered>,
)>,
```

Names like `On<Add<Pressed>>` emphasize the actual event listener nature
more than `Trigger<OnAdd<Pressed>>`, and look cleaner. This *also* frees
up the `Trigger` name if we want to use it for the observer event type,
splitting them out from buffered events (bikeshedding this is out of
scope for this PR though).

For prior art:
[`bevy_eventlistener`](https://github.com/aevyrie/bevy_eventlistener)
used
[`On`](https://docs.rs/bevy_eventlistener/latest/bevy_eventlistener/event_listener/struct.On.html)
for its event listener type. Though in our case, the observer is the
event listener, and `On` is just a type containing information about the
triggered event.

## Solution

Steal from `bevy_eventlistener` by @aevyrie and use `On`.

- Rename `Trigger` to `On`
- Rename `OnAdd` to `Add`
- Rename `OnInsert` to `Insert`
- Rename `OnReplace` to `Replace`
- Rename `OnRemove` to `Remove`
- Rename `OnDespawn` to `Despawn`
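
For the lifecycle events in the rename list, a typical observer signature changes roughly like this (a hedged sketch; `Pressed` stands in for any component):

```rust
// Before
app.add_observer(|trigger: Trigger<OnAdd, Pressed>| {
    // react to `Pressed` being added
});

// After
app.add_observer(|trigger: On<Add, Pressed>| {
    // react to `Pressed` being added
});
```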

## Discussion

### Naming Conflicts??

Using a name like `Add` might initially feel like a very bad idea, since
it risks conflict with `core::ops::Add`. However, I don't expect this to
be a big problem in practice.

- You rarely need to actually implement the `Add` trait, especially in
modules that would use the Bevy ECS.
- In the rare cases where you *do* get a conflict, it is very easy to
fix by just disambiguating, for example using `ops::Add`.
- The `Add` event is a struct while the `Add` trait is a trait (duh), so
the compiler error should be very obvious.

For the record, renaming `OnAdd` to `Add`, I got exactly *zero* errors
or conflicts within Bevy itself. But this is of course not entirely
representative of actual projects *using* Bevy.

You might then wonder, why not use `Added`? This would conflict with the
`Added` query filter, so it wouldn't work. Additionally, the current
naming convention for observer events does not use past tense.

### Documentation

This does make documentation slightly more awkward when referring to
`On` or its methods. Previous docs often referred to `Trigger::target`
or "sends a `Trigger`" (which is... a bit strange anyway), which would
now be `On::target` and "sends an observer `Event`".

You can see the diff in this PR to see some of the effects. I think it
should be fine though, we may just need to reword more documentation to
read better.
2025-06-12 18:22:33 +00:00
Alice Cecile
6ddd0f16a8
Component lifecycle reorganization and documentation (#19543)
# Objective

I set out with one simple goal: clearly document the differences between
each of the component lifecycle events via module docs.

Unfortunately, no such module existed: the various lifecycle code was
scattered to the wind.
Without a unified module, it's very hard to discover the related types,
and there's nowhere good to put my shiny new documentation.

## Solution

1. Unify the assorted types into a single
`bevy_ecs::component_lifecycle` module.
2. Write docs.
3. Write a migration guide.

## Testing

Thanks CI!

## Follow-up

1. The lifecycle event names are pretty confusing, especially
`OnReplace`. We should consider renaming those. No bikeshedding in my PR
though!
2. Observers need real module docs too :(
3. Any additional functional changes should be done elsewhere; this is a
simple docs and re-org PR.

---------

Co-authored-by: theotherphil <phil.j.ellison@gmail.com>
2025-06-10 00:59:16 +00:00
ickshonpe
43b8fbda93
Unrequire VisibilityClass from Node (#17918)
# Objective

The UI doesn't use `ViewVisibility` so it doesn't do anything.

## Solution

Remove it.
2025-05-31 08:18:01 +00:00
atlv
2946de4573
doc(render): fix incorrectly transposed view matrix docs (#19317)
# Objective

- Mend incorrect docs

## Solution

- Mend them
- add example use
- clarify column major

## Testing

- No code changes
2025-05-27 04:58:58 +00:00
atlv
1732c2253b
refactor(render): move WgpuWrapper into bevy_utils (#19303)
# Objective

- A step towards splitting out bevy_camera from bevy_render

## Solution

- Move a shim type into bevy_utils to avoid a dependency cycle
- Manually expand Deref/DerefMut to avoid having a bevy_derive
dependency so early in the dep tree
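
A sketch of what "manually expanding" the derive means here (field layout assumed; the real type also deals with platform-specific concerns):

```rust
use core::ops::{Deref, DerefMut};

// What #[derive(Deref, DerefMut)] from bevy_derive would generate, written
// out by hand so the crate doesn't need that dependency.
pub struct WgpuWrapper<T>(T);

impl<T> Deref for WgpuWrapper<T> {
    type Target = T;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

impl<T> DerefMut for WgpuWrapper<T> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.0
    }
}
```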

## Testing

- It compiles
2025-05-27 03:43:49 +00:00
andriyDev
8db7b6e122
Remove Shader weak_handles from bevy_render. (#19362)
# Objective

- Related to #19024

## Solution

- Use the new `load_shader_library` macro for the shader libraries and
`embedded_asset`/`load_embedded_asset` for the "shader binaries" in
bevy_render.

## Testing

- `animate_shader` example still works

P.S. I don't think this needs a migration guide. Technically users could
be using the `pub` weak handles, but there's no actual good use for
them, so omitting it seems fine. Alternatively, we could mix this in
with the migration guide notes for #19137.
2025-05-26 20:20:25 +00:00
theotherphil
07c9d1acce
Clarify RenderLayers docs (#19241)
# Objective

Clarify `RenderLayers` docs, to fix
https://github.com/bevyengine/bevy/issues/18874

## Solution

-

## Testing

-
2025-05-26 19:52:22 +00:00
Emerson Coskey
7ab00ca185
Split Camera.hdr out into a new component (#18873)
# Objective

- Simplify `Camera` initialization
- allow effects to require HDR

## Solution

- Split out `Camera.hdr` into a marker `Hdr` component

## Testing

- ran `bloom_3d` example

---

## Showcase

```rs
// before
commands.spawn((
  Camera3d,
  Camera {
    hdr: true,
    ..Default::default()
  },
));

// after
commands.spawn((Camera3d, Hdr));

// other rendering components can require that the camera enables hdr!
// currently implemented for Bloom, AutoExposure, and Atmosphere.
#[require(Hdr)]
pub struct Bloom;
```
2025-05-26 19:24:45 +00:00
IceSentry
85284cbb90
Move trigger_screenshots to finish() (#19204)
# Objective

- The `trigger_screenshots` system gets added in `.build()` but relies on a
resource that is only inserted in `.finish()`
- This isn't a bug for most users, but in headless-mode testing the app can
technically run without ever calling `.finish()`. That worked before Bevy
0.15, but while migrating my work codebase I had a test fail because of this

## Solution

- Move the trigger_screenshots system to `.finish()`

## Testing

- I ran the screenshot example and it worked as expected
2025-05-26 18:07:17 +00:00
Joona Aalto
7b1c9f192e
Adopt consistent FooSystems naming convention for system sets (#18900)
# Objective

Fixes a part of #14274.

Bevy has an incredibly inconsistent naming convention for its system
sets, both internally and across the ecosystem.

<img alt="System sets in Bevy"
src="https://github.com/user-attachments/assets/d16e2027-793f-4ba4-9cc9-e780b14a5a1b"
width="450" />

*Names of public system set types in Bevy*

Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are
also a few `FooSystems` and `FooSet` types. In ecosystem crates on the
other hand, `FooSet` is perhaps the most commonly used name in general.
Conventions being so wildly inconsistent can make it harder for users to
pick names for their own types, to search for system sets on docs.rs, or
to even discern which types *are* system sets.

To rein in the inconsistency a bit and help unify the ecosystem, it
would be good to establish a common recommended naming convention for
system sets in Bevy itself, similar to how plugins are commonly suffixed
with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming
convention in first-party Bevy, we can softly nudge ecosystem crates to
follow suit (for types where it makes sense to do so).

Choosing a naming convention is also relevant now, as the [`bevy_cli`
recently adopted
lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce
naming for plugins and system sets, and the recommended naming used for
system sets is still a bit open.

## Which Name To Use?

Now the contentious part: what naming convention should we actually
adopt?

This was discussed on the Bevy Discord at the end of last year, starting
[here](<https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789>).
`FooSet` and `FooSystems` were the clear favorites, with `FooSet` very
narrowly winning an unofficial poll. However, it seems to me like the
consensus was broadly moving towards `FooSystems` at the end and after
the poll, with Cart
([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708))
and later Alice
([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533))
and also me being in favor of it.

Let's do a quick pros and cons list! Of course these are just what I
thought of, so take it with a grain of salt.

`FooSet`:

- Pro: Nice and short!
- Pro: Used by many ecosystem crates.
- Pro: The `Set` suffix comes directly from the trait name `SystemSet`.
- Pro: Pairs nicely with existing APIs like `in_set` and
`configure_sets`.
- Con: `Set` by itself doesn't actually indicate that it's related to
systems *at all*, apart from the implemented trait. A set of what?
- Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex:
`ContactSet`, `MeshSet`, `EnemySet`...

`FooSystems`:

- Pro: Very clearly indicates that the type represents a collection of
systems. The actual core concept, system(s), is in the name.
- Pro: Parallels nicely with `FooPlugins` for plugin groups.
- Pro: Low risk of conflicts with other names or misunderstandings about
what the type is.
- Pro: In most cases, reads *very* nicely and clearly. Ex:
`PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and
`AnimationSet`.
- Pro: Easy to search for on docs.rs.
- Con: Usually results in longer names.
- Con: Not yet as widely used.

Really the big problem with `FooSet` is that it doesn't actually
describe what it is. It describes what *kind of thing* it is (a set of
something), but not *what it is a set of*, unless you know the type or
check its docs or implemented traits. `FooSystems` on the other hand is
much more self-descriptive in this regard, at the cost of being a bit
longer to type.

Ultimately, in some ways it comes down to preference and how you think
of system sets. Personally, I was originally in favor of `FooSet`, but
have been increasingly on the side of `FooSystems`, especially after
seeing what the new names would actually look like in Avian and now
Bevy. I prefer it because it usually reads better, is much more clearly
related to groups of systems than `FooSet`, and overall *feels* more
correct and natural to me in the long term.

For these reasons, and because Alice and Cart also seemed to share a
preference for it when it was previously being discussed, I propose that
we adopt a `FooSystems` naming convention where applicable.

## Solution

Rename Bevy's system set types to use a consistent `FooSystems` naming where
applicable.

- `AccessibilitySystem` → `AccessibilitySystems`
- `GizmoRenderSystem` → `GizmoRenderSystems`
- `PickSet` → `PickingSystems`
- `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems`
- `TransformSystem` → `TransformSystems`
- `RemoteSet` → `RemoteSystems`
- `RenderSet` → `RenderSystems`
- `SpriteSystem` → `SpriteSystems`
- `StateTransitionSteps` → `StateTransitionSystems`
- `RenderUiSystem` → `RenderUiSystems`
- `UiSystem` → `UiSystems`
- `Animation` → `AnimationSystems`
- `AssetEvents` → `AssetEventSystems`
- `TrackAssets` → `AssetTrackingSystems`
- `UpdateGizmoMeshes` → `GizmoMeshSystems`
- `InputSystem` → `InputSystems`
- `InputFocusSet` → `InputFocusSystems`
- `ExtractMaterialsSet` → `MaterialExtractionSystems`
- `ExtractMeshesSet` → `MeshExtractionSystems`
- `RumbleSystem` → `RumbleSystems`
- `CameraUpdateSystem` → `CameraUpdateSystems`
- `ExtractAssetsSet` → `AssetExtractionSystems`
- `Update2dText` → `Text2dUpdateSystems`
- `TimeSystem` → `TimeSystems`
- `AudioPlaySet` → `AudioPlaybackSystems`
- `SendEvents` → `EventSenderSystems`
- `EventUpdates` → `EventUpdateSystems`

A lot of the names got slightly longer, but they are also a lot more
consistent, and in my opinion the majority of them read much better. For
a few of the names I took the liberty of rewording things a bit;
definitely open to any further naming improvements.
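
At a call site the change is mechanical; a hedged sketch using one of the renames above (`sync_assets` and the schedule it runs in are hypothetical):

```rust
// Before
app.add_systems(PostUpdate, sync_assets.after(AssetEvents));

// After
app.add_systems(PostUpdate, sync_assets.after(AssetEventSystems));
```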

There are still also cases where the `FooSystems` naming doesn't really
make sense, and those I left alone. This primarily includes system sets
like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`,
or `TransitionSchedules<S>`, where the type has some special purpose and
semantics.

## Todo

- [x] Should I keep all the old names as deprecated type aliases? I can
do this, but to avoid wasting work I'd prefer to first reach consensus
on whether these renames are even desired.
- [x] Migration guide
- [x] Release notes
2025-05-06 15:18:03 +00:00
andriyDev
798e1c5498
Move initializing the ScreenshotToScreenPipeline to the ScreenshotPlugin. (#18524)
# Objective

- Minor cleanup.
- This seems to have been introduced in #8336. There is no discussion
about it that I can see, and there's no comment explaining why this is here
and not in `ScreenshotPlugin`. This seems to have just been misplaced.

## Solution

- Move this to the ScreenshotPlugin!

## Testing

- The screenshot example still works at least on desktop.
2025-05-05 23:56:22 +00:00
Carter Anderson
e9a0ef49f9
Rename bevy_platform_support to bevy_platform (#18813)
# Objective

The goal of `bevy_platform_support` is to provide a set of platform
agnostic APIs, alongside platform-specific functionality. This is a high
traffic crate (providing things like HashMap and Instant). Especially in
light of https://github.com/bevyengine/bevy/discussions/18799, it
deserves a friendlier / shorter name.

Given that it hasn't had a full release yet, getting this change in
before Bevy 0.16 makes sense.

## Solution

- Rename `bevy_platform_support` to `bevy_platform`.
2025-04-11 23:13:28 +00:00
Carter Anderson
d8fa57bd7b
Switch ChildOf back to tuple struct (#18672)
# Objective

In #17905 we swapped to a named field on `ChildOf` to help resolve
variable naming ambiguity of child vs parent (ex: `child_of.parent`
clearly reads as "I am accessing the parent of the child_of
relationship", whereas `child_of.0` is less clear).

Unfortunately this has the side effect of making initialization less
ideal. `ChildOf { parent }` reads just as well as `ChildOf(parent)`, but
`ChildOf { parent: root }` doesn't read nearly as well as
`ChildOf(root)`.

## Solution

Move back to `ChildOf(pub Entity)` but add a `child_of.parent()`
function and use it for all accesses. The downside here is that users
are no longer "forced" to access the parent field with `parent`
nomenclature, but I think this strikes the right balance.
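
A minimal sketch of the resulting shape (relationship derive attributes elided; not the exact upstream definition):

```rust
use bevy_ecs::component::Component;
use bevy_ecs::entity::Entity;

// Tuple struct keeps initialization terse: `ChildOf(root)`.
#[derive(Component)]
pub struct ChildOf(pub Entity);

impl ChildOf {
    /// The accessor used throughout the codebase instead of `.0`.
    pub fn parent(&self) -> Entity {
        self.0
    }
}
```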

Take a look at the diff. I think the results provide strong evidence for
this change. Initialization has the benefit of reading much better _and_
of taking up significantly less space, as many lines go from 3 to 1, and
we're cutting out a bunch of syntax in some cases.

Sadly I do think this should land in 0.16 as the cost of doing this
_after_ the relationships migration is high.
2025-04-02 00:10:10 +00:00
charlotte
bd5fed9050
Fix no indirect drawing (#18628)
# Objective

The `NoIndirectDrawing` wasn't working and was causing the scene not to
be rendered.

## Solution

Check the configured preprocessing mode when adding new batch sets and
mark them as batchable instead of multi-drawable if indirect rendering
has been disabled.

## Testing

`cargo run --example many_cubes -- --no-indirect-drawing`
2025-03-31 19:20:57 +00:00
Aevyrie
d09f958056
Parallelize bevy 0.16-rc bottlenecks (#18632)
# Objective

- Found stuttering and performance degradation while updating big_space
stress tests.

## Solution

- Identify and fix slow spots using tracy. Patch to verify fixes.

## Testing

- Tracy
- Before: 

![image](https://github.com/user-attachments/assets/ab7f440d-88c1-4ad9-9ad9-dca127c9421f)
- prev_gt parallelization and mutating instead of component insertion: 

![image](https://github.com/user-attachments/assets/9279a663-c0ba-4529-b709-d0f81f2a1d8b)
- parallelize visibility ranges and mesh specialization

![image](https://github.com/user-attachments/assets/25b70e7c-5d30-48ab-9bb2-79211d4d672f)

---------

Co-authored-by: Zachary Harrold <zac@harrold.com.au>
2025-03-31 18:32:45 +00:00
Vic
f57c7a43c4
reexport entity set collections in entity module (#18413)
# Objective

Unlike for their helper types, the import paths for
`unique_array::UniqueEntityArray`, `unique_slice::UniqueEntitySlice`,
`unique_vec::UniqueEntityVec`, `hash_set::EntityHashSet`,
`hash_map::EntityHashMap`, `index_set::EntityIndexSet`,
`index_map::EntityIndexMap` are quite redundant.

When looking at the structure of `hashbrown`, we can also see that while
both `HashSet` and `HashMap` have their own modules, the main types
themselves are re-exported to the crate level.

## Solution

Re-export the types in their shared `entity` parent module, and simplify
the imports where they're used.
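
In practice the imports shorten roughly like so (a sketch; old paths per the list above):

```rust
// Before
use bevy_ecs::entity::hash_map::EntityHashMap;
use bevy_ecs::entity::unique_vec::UniqueEntityVec;

// After
use bevy_ecs::entity::{EntityHashMap, UniqueEntityVec};
```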
2025-03-30 03:51:14 +00:00
Alice Cecile
ce7d4e41d6
Make system param validation rely on the unified ECS error handling via the GLOBAL_ERROR_HANDLER (#18454)
# Objective

There are two related problems here:

1. Users should be able to change the fallback behavior of *all*
ECS-based errors in their application by setting the
`GLOBAL_ERROR_HANDLER`. See #18351 for earlier work in this vein.
2. The existing solution (#15500) for customizing this behavior is high
on boilerplate, not global and adds a great deal of complexity.

The consensus is that the default behavior when a parameter fails
validation should be set based on the kind of system parameter in
question: `Single` / `Populated` should silently skip the system, but
`Res` should panic. Setting this behavior at the system level is a
bandaid that makes getting to that ideal behavior more painful, and can
mask real failures (if a resource is missing but you've ignored a system
to make the Single stop panicking you're going to have a bad day).

## Solution

I've removed the existing `ParamWarnPolicy`-based configuration, and
wired up the `GLOBAL_ERROR_HANDLER`/`default_error_handler` to the
various schedule executors to properly plumb through errors.

Additionally, I've done a small cleanup pass on the corresponding
example.
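
For reference, a sketch of opting into a non-panicking global handler (assuming the `OnceLock`-style setter and the built-in `warn` handler described in the `bevy_ecs::error` module docs):

```rust
use bevy_ecs::error::{warn, GLOBAL_ERROR_HANDLER};

fn main() {
    // Must be set before the app starts; afterwards the handler is fixed.
    GLOBAL_ERROR_HANDLER
        .set(warn)
        .expect("error handler can only be set once");

    // ... build and run the App as usual ...
}
```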

## Testing

I've run the `fallible_params` example, with both the default and a
custom global error handler. The former panics (as expected), and the
latter spams the error console with warnings 🥲

## Questions for reviewers

1. Currently, failed system param validation will result in endless
console spam. Do you want me to implement a solution for warn_once-style
debouncing somehow?
2. Currently, the error reporting for failed system param validation is
very limited: all we get is that a system param failed validation and
the name of the system. Do you want me to implement improved error
reporting by bubbling up errors in this PR?
3. There is broad consensus that the default behavior for failed system
param validation should be set on a per-system param basis. Would you
like me to implement that in this PR?

My gut instinct is that we absolutely want to solve 2 and 3, but it will
be much easier to do that work (and review it) if we split the PRs
apart.

## Migration Guide

`ParamWarnPolicy` and the `WithParamWarnPolicy` have been removed
completely. Failures during system param validation are now handled via
the `GLOBAL_ERROR_HANDLER`: please see the `bevy_ecs::error` module docs
for more information.

---------

Co-authored-by: MiniaczQ <xnetroidpl@gmail.com>
2025-03-24 05:58:05 +00:00
Joshua Holmes
8d9f948684
Create new NonSendMarker (#18301)
# Objective

Create new `NonSendMarker` that does not depend on `NonSend`.

Required, in order to accomplish #17682. In that issue, we are trying to
replace `!Send` resources with `thread_local!` in order to unblock the
resources-as-components effort. However, when we remove all the `!Send`
resources from a system, that allows the system to run on a thread other
than the main thread, which is against the design of the system. So this
marker gives us the control to require a system to run on the main
thread without depending on `!Send` resources.

## Solution

Create a new `NonSendMarker` to replace the existing one that does not
depend on `NonSend`.

## Testing

Other than running tests, I ran a few examples:
- `window_resizing`
- `wireframe`
- `volumetric_fog` (looks so cool)
- `rotation`
- `button`

There is a Mac/iOS-specific change and I do not have a Mac or iOS device
to test it. I am doubtful that it would cause any problems for 2
reasons:
1. The change is the same as the non-wasm change which I did test
2. The Pixel Eagle tests run Mac tests

But it wouldn't hurt if someone wanted to spin up an example that
utilizes the `bevy_render` crate, which is where the Mac/iOS change was.

## Migration Guide

If `NonSendMarker` is being used from `bevy_app::prelude::*`, replace it
with `bevy_ecs::system::NonSendMarker` or use it from
`bevy_ecs::prelude::*`. In addition to that, `NonSendMarker` does not
need to be wrapped like so:
```rust
fn my_system(_non_send_marker: Option<NonSend<NonSendMarker>>) {
    ...
}
```

Instead, it can be used without any wrappers:
```rust
fn my_system(_non_send_marker: NonSendMarker) {
    ...
}
```

---------

Co-authored-by: Chris Russell <8494645+chescock@users.noreply.github.com>
2025-03-23 21:37:40 +00:00
Gino Valente
9b32e09551
bevy_reflect: Add clone registrations project-wide (#18307)
# Objective

Now that #13432 has been merged, it's important we update our reflected
types to properly opt into this feature. If we do not, then this could
cause issues for users downstream who want to make use of
reflection-based cloning.

## Solution

This PR is broken into 4 commits:

1. Add `#[reflect(Clone)]` on all types marked `#[reflect(opaque)]` that
are also `Clone`. This is mandatory as these types would otherwise cause
the cloning operation to fail for any type that contains it at any
depth.
2. Update the reflection example to suggest adding `#[reflect(Clone)]`
on opaque types.
3. Add `#[reflect(clone)]` attributes on all fields marked
`#[reflect(ignore)]` that are also `Clone`. This prevents the ignored
field from causing the cloning operation to fail.
   
Note that some of the types that contain these fields are also `Clone`,
and thus can be marked `#[reflect(Clone)]`. This makes the
`#[reflect(clone)]` attribute redundant. However, I think it's safer to
keep it marked in the case that the `Clone` impl/derive is ever removed.
I'm open to removing them, though, if people disagree.
4. Finally, I added `#[reflect(Clone)]` on all types that are also
`Clone`. While not strictly necessary, it enables us to reduce the
generated output since we can just call `Clone::clone` directly instead
of calling `PartialReflect::reflect_clone` on each variant/field. It
also means we benefit from any optimizations or customizations made in
the `Clone` impl, including directly dereferencing `Copy` values and
increasing reference counters.

Along with that change I also took the liberty of adding any missing
registrations that I saw could be applied to the type as well, such as
`Default`, `PartialEq`, and `Hash`. There were hundreds of these to
edit, though, so it's possible I missed quite a few.

That last commit is **_massive_**. There were nearly 700 types to
update. So it's recommended to review the first three before moving onto
that last one.

Additionally, I can break the last commit off into its own PR or into
smaller PRs, but I figured this would be the easiest way of doing it
(and in a timely manner since I unfortunately don't have as much time as
I used to for code contributions).

## Testing

You can test locally with a `cargo check`:

```
cargo check --workspace --all-features
```
2025-03-17 18:32:35 +00:00
JMS55
0be1529d15
Expose textures to ViewTarget::post_process_write() (#18348)
Extracted from my DLSS branch.

## Changelog
* Added `source_texture` and `destination_texture` to
`PostProcessWrite`, in addition to the existing texture views.
2025-03-17 18:22:06 +00:00
JMS55
160d79f24d
Make prepare_view_targets() caching account for CameraMainTextureUsages (#18347)
Small bugfix.
2025-03-17 04:23:29 +00:00
newclarityex
ecccd57417
Generic system config (#17962)
# Objective
Prevents duplicate implementations of `IntoSystemConfigs` and
`IntoSystemSetConfigs` by using a generic, and adds a `NodeType` trait for
more config flexibility (opening the door to implementing
https://github.com/bevyengine/bevy/issues/14195?).

## Solution
Followed writeup by @ItsDoot:
https://hackmd.io/@doot/rJeefFHc1x

Removes IntoSystemConfigs and IntoSystemSetConfigs, instead using
IntoNodeConfigs with generics.

## Testing
Pending

---

## Showcase
N/A

## Migration Guide
- `SystemSetConfigs` -> `NodeConfigs<InternedSystemSet>`
- `SystemConfigs` -> `NodeConfigs<ScheduleSystem>`
- `IntoSystemSetConfigs` -> `IntoNodeConfigs<InternedSystemSet, M>`
- `IntoSystemConfigs` -> `IntoNodeConfigs<ScheduleSystem, M>`
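
A hedged sketch of what that rename looks like for a generic helper that accepts systems (the helper itself is hypothetical):

```rust
// Before
fn add_group<M>(app: &mut App, systems: impl IntoSystemConfigs<M>) {
    app.add_systems(Update, systems);
}

// After
fn add_group<M>(app: &mut App, systems: impl IntoNodeConfigs<ScheduleSystem, M>) {
    app.add_systems(Update, systems);
}
```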

---------

Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2025-03-12 00:12:30 +00:00
Carter Anderson
b73811d40e
Remove ChildOf::get and Deref impl (#18080)
# Objective

There are currently three ways to access the parent stored on a ChildOf
relationship:

1. `child_of.parent` (field accessor)
2. `child_of.get()` (get function)
3. `**child_of` (Deref impl)

I will assert that we should only have one (the field accessor), and
that the existence of the other implementations causes confusion and
legibility issues. The deref approach is heinous, and `child_of.get()`
is significantly less clear than `child_of.parent`.

## Solution

Remove `impl Deref for ChildOf` and `ChildOf::get`.

The one "downside" I'm seeing is that:

```rust
entity.get::<ChildOf>().map(ChildOf::get)
```
Becomes this:

```rust
entity.get::<ChildOf>().map(|c| c.parent)
```

I strongly believe that this is worth the increased clarity and
consistency. I'm also not really a huge fan of the "pass function
pointer to map" syntax. I think most people don't think this way about
maps. They think in terms of a function that takes the item in the
Option and returns the result of some action on it.

## Migration Guide

```rust
// Before
**child_of
// After
child_of.parent

// Before
child_of.get()
// After
child_of.parent

// Before
entity.get::<ChildOf>().map(ChildOf::get)
// After
entity.get::<ChildOf>().map(|c| c.parent)
```
2025-02-27 23:11:03 +00:00
Tim Overbeek
ccb7069e7f
Change ChildOf to ChildOf { parent: Entity } and support deriving Relationship and RelationshipTarget with named structs (#17905)
# Objective

fixes #17896 

## Solution

Change `ChildOf(Entity)` to `ChildOf { parent: Entity }`.

By doing this we also allow users to use named structs for relationship
derives. When a struct with named fields has more than one field, the macro
will look for a field with the `#[relationship]` attribute, and all of the
other fields should implement the `Default` trait. Unnamed fields are still
supported.

When an unnamed struct has more than one field, the macro will fail.
Do we want to support something like this?

```rust
 #[derive(Component)]
 #[relationship_target(relationship = ChildOf)]
 pub struct Children (#[relationship] Entity, u8);
```
I could add this, but it doesn't seem nice.
## Testing

crates/bevy_ecs - cargo test


## Showcase


```rust

use bevy_ecs::component::Component;
use bevy_ecs::entity::Entity;

#[derive(Component)]
#[relationship(relationship_target = Children)]
pub struct ChildOf {
    #[relationship]
    pub parent: Entity,
    internal: u8,
}

#[derive(Component)]
#[relationship_target(relationship = ChildOf)]
pub struct Children {
    children: Vec<Entity>,
}

```

---------

Co-authored-by: Tim Overbeek <oorbecktim@Tims-MacBook-Pro.local>
Co-authored-by: Tim Overbeek <oorbecktim@c-001-001-042.client.nl.eduvpn.org>
Co-authored-by: Tim Overbeek <oorbecktim@c-001-001-059.client.nl.eduvpn.org>
Co-authored-by: Tim Overbeek <oorbecktim@c-001-001-054.client.nl.eduvpn.org>
Co-authored-by: Tim Overbeek <oorbecktim@c-001-001-027.client.nl.eduvpn.org>
2025-02-27 19:22:17 +00:00
Zachary Harrold
5241e09671
Upgrade to Rust Edition 2024 (#17967)
# Objective

- Fixes #17960

## Solution

- Followed the [edition upgrade
guide](https://doc.rust-lang.org/edition-guide/editions/transitioning-an-existing-project-to-a-new-edition.html)

## Testing

- CI

---

## Summary of Changes

### Documentation Indentation

When using lists in documentation, proper indentation is now linted for.
This means subsequent lines within the same list item must start at the
same indentation level as the item.

```rust
/* Valid */
/// - Item 1
///   Run-on sentence.
/// - Item 2
struct Foo;

/* Invalid */
/// - Item 1
///     Run-on sentence.
/// - Item 2
struct Foo;
```

### Implicit `!` to `()` Conversion

`!` (the never return type, returned by `panic!`, etc.) no longer
implicitly converts to `()`. This is particularly painful for systems
with `todo!` or `panic!` statements, as they will no longer be functions
returning `()` (or `Result<()>`), making them invalid systems for
functions like `add_systems`. The ideal fix would be to accept functions
returning `!` (or rather, _not_ returning), but this is blocked on the
[stabilisation of the `!` type
itself](https://doc.rust-lang.org/std/primitive.never.html), which is
not done.

The "simple" fix would be to add an explicit `-> ()` to system
signatures (e.g., `|| { todo!() }` becomes `|| -> () { todo!() }`).
However, this is _also_ banned, as there is an existing lint which (IMO,
incorrectly) marks this as an unnecessary annotation.

So, the "fix" (read: workaround) is to put these kinds of `|| -> ! { ...
}` closures into variables and give the variable an explicit type (e.g.,
`fn()`).

```rust
// Valid
let system: fn() = || todo!("Not implemented yet!");
app.add_systems(..., system);

// Invalid
app.add_systems(..., || todo!("Not implemented yet!"));
```

### Temporary Variable Lifetimes

The order in which temporary variables are dropped has changed. The
simple fix here is _usually_ to just assign temporaries to a named
variable before use.

### `gen` is a keyword

We can no longer use the name `gen` as it is reserved for a future
generator syntax. This involved replacing uses of the name `gen` with
`r#gen` (the raw-identifier syntax).

### Formatting has changed

Use statements have had the order of imports changed, causing a
substantial +/-3,000 diff when applied. For now, I have opted-out of
this change by amending `rustfmt.toml`

```toml
style_edition = "2021"
```

This preserves the original formatting for now, reducing the size of
this PR. It would be a simple followup to update this to 2024 and run
`cargo fmt`.

### New `use<>` Opt-Out Syntax

Lifetimes are now implicitly included in RPIT types. There was a handful
of instances where it needed to be added to satisfy the borrow checker,
but there may be more cases where it _should_ be added to avoid
breakages in user code.

### `MyUnitStruct { .. }` is an invalid pattern

Previously, you could match against unit structs (and unit enum
variants) with a `{ .. }` destructuring. This is no longer valid.

### Pretty much every use of `ref` and `mut` are gone

Pattern binding has changed to the point where these terms are largely
unused now. They still serve a purpose, but it is far more niche now.

### `iter::repeat(...).take(...)` is bad

New lint recommends using the more explicit `iter::repeat_n(..., ...)`
instead.

## Migration Guide

The lifetimes of functions using return-position impl-trait (RPIT) are
likely _more_ conservative than they had been previously. If you
encounter lifetime issues with such a function, please create an issue
to investigate the addition of `+ use<...>`.

## Notes

- Check the individual commits for a clearer breakdown for what
_actually_ changed.

---------

Co-authored-by: François Mockers <francois.mockers@vleue.com>
2025-02-24 03:54:47 +00:00
Patrick Walton
73970d0c12
Don't mark newly-hidden meshes invisible until all visibility-determining systems run. (#17922)
The `check_visibility` system currently follows this algorithm:

1. Store all meshes that were visible last frame in the
`PreviousVisibleMeshes` set.

2. Determine which meshes are visible. For each such visible mesh,
remove it from `PreviousVisibleMeshes`.

3. Mark all meshes that remain in `PreviousVisibleMeshes` as invisible.

This algorithm would be correct if `check_visibility` were the only
system that marked meshes visible. However, it's not: the shadow-related
systems `check_dir_light_mesh_visibility` and
`check_point_light_mesh_visibility` can as well. This results in the
following sequence of events for meshes that are in a shadow map but
*not* visible from a camera:

A. `check_visibility` runs, finds that no camera contains these meshes,
   and marks them hidden, which sets the changed flag.

B. `check_dir_light_mesh_visibility` and/or
   `check_point_light_mesh_visibility` run, discover that these meshes
   are visible in the shadow map, and mark them as visible, again
   setting the `ViewVisibility` changed flag.

C. During the extraction phase, the mesh extraction system sees that
   `ViewVisibility` is changed and re-extracts the mesh.

This is inefficient and results in needless work during rendering.

This patch fixes the issue in two ways:

* The `check_dir_light_mesh_visibility` and
`check_point_light_mesh_visibility` systems now remove meshes that they
discover from `PreviousVisibleMeshes`.

* Step (3) above has been moved from `check_visibility` to a separate
system, `mark_newly_hidden_entities_invisible`. This system runs after
all visibility-determining systems, ensuring that
`PreviousVisibleMeshes` contains only those meshes that truly became
invisible on this frame.

This fix dramatically improves the performance of [the Caldera
benchmark], when combined with several other patches I've submitted.

[the Caldera benchmark]:
https://github.com/DGriffin91/bevy_caldera_scene
2025-02-18 09:35:22 +00:00
Patrick Walton
0517b9621b
Fix motion vector computation after #17688. (#17717)
PR #17688 broke motion vector computation, and therefore motion blur,
because it enabled retention of `MeshInputUniform`s, and
`MeshInputUniform`s contain the indices of the previous frame's
transform and the previous frame's skinned mesh joint matrices. On frame
N, if a `MeshInputUniform` is retained on GPU from the previous frame,
the `previous_input_index` and `previous_skin_index` would refer to the
indices for frame N - 2, not the index for frame N - 1.

This patch fixes the problems. It solves these issues in two different
ways, one for transforms and one for skins:

1. To fix transforms, this patch supplies the *frame index* to the
shader as part of the view uniforms, and specifies which frame index
each mesh's previous transform refers to. So, in the situation described
above, the frame index would be N, the previous frame index would be N -
1, and the `previous_input_frame_number` would be N - 2. The shader can
now detect this situation and infer that the mesh has been retained, and
can therefore conclude that the mesh's transform hasn't changed.

2. To fix skins, this patch replaces the explicit `previous_skin_index`
with an invariant that the index of the joints for the current frame and
the index of the joints for the previous frame are the same. This means
that the `MeshInputUniform` never has to be updated even if the skin is
animated. The downside is that we have to copy joint matrices from the
previous frame's buffer to the current frame's buffer in
`extract_skins`.

The rationale behind (2) is that we currently have no mechanism to
detect when joints that affect a skin have been updated, short of
comparing all the transforms and setting a flag for
`extract_meshes_for_gpu_building` to consume, which would regress
performance as we want `extract_skins` and
`extract_meshes_for_gpu_building` to be able to run in parallel.

To test this change, use `cargo run --example motion_blur`.
2025-02-18 09:34:19 +00:00
JMS55
669d139c13
Upgrade to wgpu v24 (#17542)
Didn't remove `WgpuWrapper`. Not sure if it's still needed or not.

## Testing

- Did you test these changes? If so, how? Example runner
- Are there any parts that need more testing? Web (portable atomics
thingy?), DXC.

## Migration Guide
- Bevy has upgraded to [wgpu
v24](https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v2400-2025-01-15).
- When using the DirectX 12 rendering backend, the new priority system
for choosing a shader compiler is as follows:
- If the `WGPU_DX12_COMPILER` environment variable is set at runtime, it
is used
- Else if the new `statically-linked-dxc` feature is enabled, a custom
version of DXC will be statically linked into your app at compile time.
- Else Bevy will look in the app's working directory for
`dxcompiler.dll` and `dxil.dll` at runtime.
- Else if they are missing, Bevy will fall back to FXC (not recommended)

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
2025-02-09 19:40:53 +00:00
Sludge
989f547080
Weak handle migration (#17695)
# Objective

- Make use of the new `weak_handle!` macro added in
https://github.com/bevyengine/bevy/pull/17384

## Solution

- Migrate bevy from `Handle::weak_from_u128` to the new `weak_handle!`
macro that takes a random UUID
- Deprecate `Handle::weak_from_u128`, since there are no remaining use
cases that can't also be addressed by constructing the type manually

## Testing

- `cargo run -p ci -- test`

---

## Migration Guide

Replace `Handle::weak_from_u128` with `weak_handle!` and a random UUID.
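
Concretely, the migration looks roughly like this (a sketch; the handle name, u128 value, and UUID are arbitrary examples):

```rust
// Before
pub const SHADER_HANDLE: Handle<Shader> =
    Handle::weak_from_u128(154484490495509739857733487233335592041);

// After (any freshly generated UUID works)
pub const SHADER_HANDLE: Handle<Shader> =
    weak_handle!("59deeb42-6b26-42a9-914a-a32c45f84a9e");
```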
2025-02-05 22:44:20 +00:00
charlotte
2ea5e9b846
Cold Specialization (#17567)
# Cold Specialization

## Objective

An ongoing part of our quest to retain everything in the render world,
cold-specialization aims to cache pipeline specialization so that
pipeline IDs can be recomputed only when necessary, rather than every
frame. This approach reduces redundant work in stable scenes, while
still accommodating scenarios in which materials, views, or visibility
might change, as well as unlocking future optimization work like
retaining render bins.

## Solution

Queue systems are split into a specialization system and queue system,
the former of which only runs when necessary to compute a new pipeline
id. Pipelines are invalidated using a combination of change detection
and ECS ticks.

### The difficulty with change detection

Detecting “what changed” can be tricky because pipeline specialization
depends not only on the entity’s components (e.g., mesh, material, etc.)
but also on which view (camera) it is rendering in. In other words, the
cache key for a given pipeline id is a view entity/render entity pair.
As such, it's not sufficient simply to react to change detection in
order to specialize -- an entity could currently be out of view or could
be rendered in the future in a camera that is currently disabled or hasn't
spawned yet.

### Why ticks?

Ticks allow us to ensure correctness by letting us compare the last time a
view or entity was updated with the last time its pipeline id was cached.
This ensures that even if an entity was out of view or has never been
seen in a given camera before we can still correctly determine whether
it needs to be re-specialized or not.

## Testing

TODO: Tested a bunch of different examples, need to test more.

## Migration Guide

TODO

- `AssetEvents` has been moved into the `PostUpdate` schedule.

---------

Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
2025-02-05 18:31:20 +00:00
Patrick Walton
dda97880c4
Implement experimental GPU two-phase occlusion culling for the standard 3D mesh pipeline. (#17413)
*Occlusion culling* allows the GPU to skip the vertex and fragment
shading overhead for objects that can be quickly proved to be invisible
because they're behind other geometry. A depth prepass already
eliminates most fragment shading overhead for occluded objects, but the
vertex shading overhead, as well as the cost of testing and rejecting
fragments against the Z-buffer, is presently unavoidable for standard
meshes. We currently perform occlusion culling only for meshlets. But
other meshes, such as skinned meshes, can benefit from occlusion culling
too in order to avoid the transform and skinning overhead for unseen
meshes.

This commit adapts the same [*two-phase occlusion culling*] technique
that meshlets use to Bevy's standard 3D mesh pipeline when the new
`OcclusionCulling` component, as well as the `DepthPrepass` component,
are present on the camera. It has these steps:

1. *Early depth prepass*: We use the hierarchical Z-buffer from the
previous frame to cull meshes for the initial depth prepass, effectively
rendering only the meshes that were visible in the last frame.

2. *Early depth downsample*: We downsample the depth buffer to create
another hierarchical Z-buffer, this time with the current view
transform.

3. *Late depth prepass*: We use the new hierarchical Z-buffer to test
all meshes that weren't rendered in the early depth prepass. Any meshes
that pass this check are rendered.

4. *Late depth downsample*: Again, we downsample the depth buffer to
create a hierarchical Z-buffer in preparation for the early depth
prepass of the next frame. This step is done after all the rendering, in
order to account for custom phase items that might write to the depth
buffer.
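
Opting a camera into this path is just a matter of adding the two
components. A minimal sketch (module paths assumed from the current tree):

```rust
use bevy::core_pipeline::prepass::DepthPrepass;
use bevy::prelude::*;
use bevy::render::experimental::occlusion_culling::OcclusionCulling;

fn setup(mut commands: Commands) {
    // Two-phase occlusion culling requires both a depth prepass and the
    // new OcclusionCulling marker on the camera.
    commands.spawn((Camera3d::default(), DepthPrepass, OcclusionCulling));
}
```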

Note that this patch has no effect on the per-mesh CPU overhead for
occluded objects, which remains high for a GPU-driven renderer due to
the lack of `cold-specialization` and retained bins. If
`cold-specialization` and retained bins weren't on the horizon, then a
more traditional approach like potentially visible sets (PVS) or low-res
CPU rendering would probably be more efficient than the GPU-driven
approach that this patch implements for most scenes. However, at this
point the amount of effort required to implement a PVS baking tool or a
low-res CPU renderer would probably be greater than landing
`cold-specialization` and retained bins, and the GPU driven approach is
the more modern one anyway. It does mean that the performance
improvements from occlusion culling as implemented in this patch *today*
are likely to be limited, because of the high CPU overhead for occluded
meshes.

Note also that this patch currently doesn't implement occlusion culling
for 2D objects or shadow maps. Those can be addressed in a follow-up.
Additionally, note that the techniques in this patch require compute
shaders, which excludes support for WebGL 2.

This PR is marked experimental because of known precision issues with
the downsampling approach when applied to non-power-of-two framebuffer
sizes (i.e. most of them). These precision issues can, in rare cases,
cause objects to be judged occluded that in fact are not. (I've never
seen this in practice, but I know it's possible; it tends to be likelier
to happen with small meshes.) As a follow-up to this patch, we desire to
switch to the [SPD-based hi-Z buffer shader from the Granite engine],
which doesn't suffer from these problems, at which point we should be
able to graduate this feature from experimental status. I opted not to
include that rewrite in this patch for two reasons: (1) @JMS55 is
planning on doing the rewrite to coincide with the new availability of
image atomic operations in Naga; (2) to reduce the scope of this patch.

A new example, `occlusion_culling`, has been added. It demonstrates
objects becoming quickly occluded and disoccluded by dynamic geometry
and shows the number of objects that are actually being rendered. Also,
a new `--occlusion-culling` switch has been added to `scene_viewer`, in
order to make it easy to test this patch with large scenes like Bistro.

[*two-phase occlusion culling*]:
https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501

[Aaltonen SIGGRAPH 2015]:

https://www.advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf

[Some literature]:

https://gist.github.com/reduz/c5769d0e705d8ab7ac187d63be0099b5?permalink_comment_id=5040452#gistcomment-5040452

[SPD-based hi-Z buffer shader from the Granite engine]:
https://github.com/Themaister/Granite/blob/master/assets/shaders/post/hiz.comp

## Migration guide

* When enqueuing a custom mesh pipeline, work item buffers are now
created with
`bevy::render::batching::gpu_preprocessing::get_or_create_work_item_buffer`,
not `PreprocessWorkItemBuffers::new`. See the
`specialized_mesh_pipeline` example.

## Showcase

Occlusion culling example:
![Screenshot 2025-01-15
175051](https://github.com/user-attachments/assets/1544f301-68a3-45f8-84a6-7af3ad431258)

Bistro zoomed out, before occlusion culling:
![Screenshot 2025-01-16
185425](https://github.com/user-attachments/assets/5114bbdf-5dec-4de9-b17e-7aa77e7b61ed)

Bistro zoomed out, after occlusion culling:
![Screenshot 2025-01-16
184949](https://github.com/user-attachments/assets/9dd67713-656c-4276-9768-6d261ca94300)

In this scene, occlusion culling reduces the number of meshes Bevy has
to render from 1591 to 585.
2025-01-27 05:02:46 +00:00
Zachary Harrold
9bc0ae33c3
Move hashbrown and foldhash out of bevy_utils (#17460)
# Objective

- Contributes to #16877

## Solution

- Moved `hashbrown`, `foldhash`, and related types out of `bevy_utils`
and into `bevy_platform_support`
- Refactored the above to match the layout of these types in `std`.
- Updated crates as required.

## Testing

- CI

---

## Migration Guide

- The following items were moved out of `bevy_utils` and into
`bevy_platform_support::hash`:
  - `FixedState`
  - `DefaultHasher`
  - `RandomState`
  - `FixedHasher`
  - `Hashed`
  - `PassHash`
  - `PassHasher`
  - `NoOpHash`
- The following items were moved out of `bevy_utils` and into
`bevy_platform_support::collections`:
  - `HashMap`
  - `HashSet`
- `bevy_utils::hashbrown` has been removed. Instead, import from
`bevy_platform_support::collections` _or_ take a dependency on
`hashbrown` directly.
- `bevy_utils::Entry` has been removed. Instead, import from
`bevy_platform_support::collections::hash_map` or
`bevy_platform_support::collections::hash_set` as appropriate.
- All of the above equally apply to `bevy::utils` and
`bevy::platform_support`.
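
As a concrete example, the common import migration looks like this:

```rust
// Before
use bevy_utils::{HashMap, HashSet};

// After
use bevy_platform_support::collections::{HashMap, HashSet};
```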

## Notes

- I left `PreHashMap`, `PreHashMapExt`, and `TypeIdMap` in `bevy_utils`
as they might be candidates for micro-crating. They can always be moved
into `bevy_platform_support` at a later date if desired.
2025-01-23 16:46:08 +00:00
Sven Niederberger
68c19defb6
Readback: Add support for texture depth/array layers (#17479)
# Objective

Fixes #16963

## Solution

I am - no pun intended - somewhat out of my depth here but this worked
in my testing. The validation error is gone and the data read from the
GPU looks sensible. I'd greatly appreciate if somebody more familiar
with the matter could double-check this.

## References

Relevant documentation in
[WebGPU](https://gpuweb.github.io/gpuweb/#gputexelcopybufferlayout) and
[wgpu](https://github.com/gfx-rs/wgpu/blob/v23/wgpu-types/src/lib.rs#L6350).

## Testing

<details><summary>Example code for testing</summary>
<p>

```rust
use bevy::{
    image::{self as bevy_image, TextureFormatPixelInfo},
    prelude::*,
    render::{
        render_asset::RenderAssetUsages,
        render_resource::{
            Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
        },
    },
};

fn main() {
    let mut app = App::new();
    app.add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, readback_system);
    app.run();
}

#[derive(Resource)]
struct ImageResource(Handle<Image>);

const TEXTURE_HEIGHT: u32 = 64;
const TEXTURE_WIDTH: u32 = 32;
const TEXTURE_LAYERS: u32 = 4;
const FORMAT: TextureFormat = TextureFormat::Rgba8Uint;

fn setup(mut commands: Commands, mut images: ResMut<Assets<Image>>) {
    let layer_pixel_count = (TEXTURE_WIDTH * TEXTURE_HEIGHT) as usize;
    let layer_size = layer_pixel_count * FORMAT.pixel_size();
    let data: Vec<u8> = (0..TEXTURE_LAYERS as u8)
        .flat_map(|layer| (0..layer_size).map(move |_| layer))
        .collect();
    let image_size = data.len();
    println!("{image_size}");
    let image = Image {
        data,
        texture_descriptor: TextureDescriptor {
            label: Some("image"),
            size: Extent3d {
                width: TEXTURE_WIDTH,
                height: TEXTURE_HEIGHT,
                depth_or_array_layers: TEXTURE_LAYERS,
            },
            mip_level_count: 1,
            sample_count: 1,
            dimension: TextureDimension::D2,
            format: FORMAT,
            usage: TextureUsages::COPY_DST | TextureUsages::COPY_SRC,
            view_formats: &[],
        },
        sampler: bevy_image::ImageSampler::Default,
        texture_view_descriptor: None,
        asset_usage: RenderAssetUsages::RENDER_WORLD,
    };

    commands.insert_resource(ImageResource(images.add(image)));
}

fn readback_system(
    mut commands: Commands,
    keys: Res<ButtonInput<KeyCode>>,
    image: Res<ImageResource>,
) {
    if !keys.just_pressed(KeyCode::KeyR) {
        return;
    }

    commands
        .spawn(bevy::render::gpu_readback::Readback::Texture(
            image.0.clone(),
        ))
        .observe(
            |trigger: Trigger<bevy::render::gpu_readback::ReadbackComplete>,
             mut commands: Commands| {
                info!("readback complete");

                println!("{:#?}", &trigger.0);

                commands.entity(trigger.observer()).despawn();
            },
        );
}

```

</p>
</details>
2025-01-23 05:25:40 +00:00
Zachary Harrold
41e79ae826
Refactored ComponentHook Parameters into HookContext (#17503)
# Objective

- Make the function signature for `ComponentHook` less verbose

## Solution

- Refactored `Entity`, `ComponentId`, and `Option<&Location>` into a new
`HookContext` struct.

## Testing

- CI

---

## Migration Guide

Update the function signatures for your component hooks to only take 2
arguments, `world` and `context`. Note that because `HookContext` is
plain data with all members public, you can use de-structuring to
simplify migration.

```rust
// Before
fn my_hook(
    mut world: DeferredWorld,
    entity: Entity,
    component_id: ComponentId,
) { ... }

// After
fn my_hook(
    mut world: DeferredWorld,
    HookContext { entity, component_id, caller }: HookContext,
) { ... }
``` 

Likewise, if you were discarding certain parameters, you can use `..` in
the de-structuring:

```rust
// Before
fn my_hook(
    mut world: DeferredWorld,
    entity: Entity,
    _: ComponentId,
) { ... }

// After
fn my_hook(
    mut world: DeferredWorld,
    HookContext { entity, .. }: HookContext,
) { ... }
```
2025-01-23 02:45:24 +00:00
SpecificProtagonist
f32a6fb205
Track callsite for observers & hooks (#15607)
# Objective

Fixes #14708

Also fixes some commands not updating tracked location.


## Solution

`ObserverTrigger` has a new `caller` field with the
`track_change_detection` feature;
hooks take an additional caller parameter (which is `Some(…)` or `None`
depending on the feature).

## Testing

See the new tests in `src/observer/mod.rs`

---

## Showcase

Observers now know from where they were triggered (if
`track_change_detection` is enabled):
```rust
world.observe(move |trigger: Trigger<OnAdd, Foo>| {
    println!("Added Foo from {}", trigger.caller());
});
```

## Migration

- hooks now take an additional `Option<&'static Location>` argument
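
A sketch of the updated hook signature (the hook name is illustrative;
`Location` here is `core::panic::Location`):

```rust
use core::panic::Location;
use bevy_ecs::{component::ComponentId, entity::Entity, world::DeferredWorld};

// The trailing parameter is Some(caller) when `track_change_detection` is
// enabled and None otherwise.
fn my_hook(
    mut world: DeferredWorld,
    entity: Entity,
    component_id: ComponentId,
    caller: Option<&'static Location<'static>>,
) {
    // ...
}
```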

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2025-01-22 20:02:39 +00:00
Alice Cecile
44ad3bf62b
Move Resource trait to its own file (#17469)
# Objective

`bevy_ecs`'s `system` module is something of a grab bag, and *very*
large. This is particularly true for the `system_param` module, which is
more than 2k lines long!

While it could be defensible to put `Res` and `ResMut` there (lol no
they're in change_detection.rs, obviously), it doesn't make any sense to
put the `Resource` trait there. This is confusing to navigate (and
painful to work on and review).

## Solution

- Create a root level `bevy_ecs/resource.rs` module to mirror
`bevy_ecs/component.rs`
- move the `Resource` trait to that module
- move the `Resource` derive macro to that module as well (Rust really
likes when you pun on the names of the derive macro and trait and put
them in the same path)
- fix all of the imports

## Notes to reviewers

- We could probably move more stuff into here, but I wanted to keep this
PR as small as possible given the absurd level of import changes.
- This PR is ground work for my upcoming attempts to store resource data
on components (resources-as-entities). Splitting this code out will make
the work and review a bit easier, and is the sort of overdue refactor
that's good to do as part of more meaningful work.

## Testing

cargo build works!

## Migration Guide

`bevy_ecs::system::Resource` has been moved to
`bevy_ecs::resource::Resource`.
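
In practice this is just an import path change:

```rust
// Before
use bevy_ecs::system::Resource;

// After
use bevy_ecs::resource::Resource;
```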
2025-01-21 19:47:08 +00:00
Carter Anderson
ba5e71f53d
Parent -> ChildOf (#17427)
Fixes #17412

## Objective

`Parent` uses the "has a X" naming convention. There is increasing
sentiment that we should use the "is a X" naming convention for
relationships (following #17398). This leaves `Children` as-is because
there is prevailing sentiment that `Children` is clearer than `ParentOf`
in many cases (especially when treating it like a collection).

This renames `Parent` to `ChildOf`.

This is just the implementation PR. To discuss the path forward, do so
in #17412.

## Migration Guide

- The `Parent` component has been renamed to `ChildOf`.
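
A sketch of the rename in user code, assuming `ChildOf` keeps the same
tuple-struct constructor as `Parent`:

```rust
use bevy_ecs::prelude::*;

fn spawn_child(world: &mut World, parent: Entity) -> Entity {
    // Before
    // world.spawn(Parent(parent)).id()

    // After
    world.spawn(ChildOf(parent)).id()
}
```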
2025-01-20 22:13:29 +00:00
Alice Cecile
5a9bc28502
Support non-Vec data structures in relations (#17447)
# Objective

The existing `RelationshipSourceCollection` uses `Vec` as the only
possible backing for our relationships. While a reasonable choice,
benchmarking use cases might reveal that a different data type is better
or faster.

For example:

- Not all relationships require a stable ordering between the
relationship sources (i.e. children). In cases where we a) have many
such relations and b) don't care about the ordering between them, a hash
set is likely a better data structure than a `Vec`.
- The number of children-like entities may be small on average, and a
`smallvec` may be faster

## Solution

- Implement `RelationshipSourceCollection` for `EntityHashSet`, our
custom entity-optimized `HashSet`.
- ~~Implement `DoubleEndedIterator` for `EntityHashSet` to make things
compile.~~
   -  This implementation was cursed and very surprising.
- Instead, by moving the iterator type on `RelationshipSourceCollection`
from an erased RPTIT to an explicit associated type we can add a trait
bound on the offending methods!
- Implement `RelationshipSourceCollection` for `SmallVec`

## Testing

I've added a pair of new tests to make sure this pattern compiles
successfully in practice!

## Migration Guide

`EntityHashSet` and `EntityHashMap` are no longer re-exported in
`bevy_ecs::entity` directly. If you were not using `bevy_ecs` / `bevy`'s
`prelude`, you can access them through their now-public modules,
`hash_set` and `hash_map` instead.
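
The import change, as a sketch (module paths assumed from the migration
guide above):

```rust
// Before
use bevy_ecs::entity::{EntityHashMap, EntityHashSet};

// After
use bevy_ecs::entity::{hash_map::EntityHashMap, hash_set::EntityHashSet};
```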

## Notes to reviewers

The `EntityHashSet::Iter` type needs to be public for this impl to be
allowed. I initially renamed it to something that wasn't ambiguous and
re-exported it, but as @Victoronz pointed out, that was somewhat
unidiomatic.

In
1a8564898f,
I instead made the `entity_hash_set` module (and its `entity_hash_map`
sister) public, and removed the re-export. I prefer this design (give me
module docs please), but it leads to a lot of churn in this PR.

Let me know which you'd prefer, and if you'd like me to split that
change out into its own micro PR.
2025-01-20 21:26:08 +00:00
Carter Anderson
21f1e3045c
Relationships (non-fragmenting, one-to-many) (#17398)
This adds support for one-to-many non-fragmenting relationships (with
planned paths for fragmenting and non-fragmenting many-to-many
relationships). "Non-fragmenting" means that entities with the same
relationship type, but different relationship targets, are not forced
into separate tables (which would cause "table fragmentation").

Functionally, this fills a similar niche as the current Parent/Children
system. The biggest differences are:

1. Relationships have simpler internals and significantly improved
performance and UX. Commands and specialized APIs are no longer
necessary to keep everything in sync. Just spawn entities with the
relationship components you want and everything "just works".
2. Relationships are generalized. Bevy can provide additional built in
relationships, and users can define their own.

**REQUEST TO REVIEWERS**: _please don't leave top level comments and
instead comment on specific lines of code. That way we can take
advantage of threaded discussions. Also don't leave comments simply
pointing out CI failures as I can read those just fine._

## Built on top of what we have

Relationships are implemented on top of the Bevy ECS features we already
have: components, immutability, and hooks. This makes them immediately
compatible with all of our existing (and future) APIs for querying,
spawning, removing, scenes, reflection, etc. The fewer specialized APIs
we need to build, maintain, and teach, the better.

## Why focus on one-to-many non-fragmenting first?

1. This allows us to improve Parent/Children relationships immediately,
in a way that is reasonably uncontroversial. Switching our hierarchy to
fragmenting relationships would have significant performance
implications. ~~Flecs is heavily considering a switch to non-fragmenting
relations after careful considerations of the performance tradeoffs.~~
_(Correction from @SanderMertens: Flecs is implementing non-fragmenting
storage specialized for asset hierarchies, where asset hierarchies are
many instances of small trees that have a well defined structure)_
2. Adding generalized one-to-many relationships is currently a priority
for the [Next Generation Scene / UI
effort](https://github.com/bevyengine/bevy/discussions/14437).
Specifically, we're interested in building reactions and observers on
top.

## The changes

This PR does the following:

1. Adds a generic one-to-many Relationship system
2. Ports the existing Parent/Children system to Relationships, which now
lives in `bevy_ecs::hierarchy`. The old `bevy_hierarchy` crate has been
removed.
3. Adds on_despawn component hooks
4. Relationships can opt-in to "despawn descendants" behavior, meaning
that the entire relationship hierarchy is despawned when
`entity.despawn()` is called. The built in Parent/Children hierarchies
enable this behavior, and `entity.despawn_recursive()` has been removed.
5. `world.spawn` now applies commands after spawning. This ensures that
relationship bookkeeping happens immediately and removes the need to
manually flush. This is in line with the equivalent behaviors recently
added to the other APIs (ex: insert).
6. Removes the ValidParentCheckPlugin (system-driven / poll based) in
favor of a `validate_parent_has_component` hook.

## Using Relationships

The `Relationship` trait looks like this:

```rust
pub trait Relationship: Component + Sized {
    type RelationshipSources: RelationshipSources<Relationship = Self>;
    fn get(&self) -> Entity;
    fn from(entity: Entity) -> Self;
}
```

A relationship is a component that:

1. Is a simple wrapper over a "target" Entity.
2. Has a corresponding `RelationshipSources` component, which is a
simple wrapper over a collection of entities. Every "target entity"
targeted by a "source entity" with a `Relationship` has a
`RelationshipSources` component, which contains every "source entity"
that targets it.

For example, the `Parent` component (as it currently exists in Bevy) is
the `Relationship` component and the entity containing the Parent is the
"source entity". The entity _inside_ the `Parent(Entity)` component is
the "target entity". And that target entity has a `Children` component
(which implements `RelationshipSources`).

In practice, the Parent/Children relationship looks like this:

```rust
#[derive(Relationship)]
#[relationship(relationship_sources = Children)]
pub struct Parent(pub Entity);

#[derive(RelationshipSources)]
#[relationship_sources(relationship = Parent)]
pub struct Children(Vec<Entity>);
```

The Relationship and RelationshipSources derives automatically implement
Component with the relevant configuration (namely, the hooks necessary
to keep everything in sync).

The most direct way to add relationships is to spawn entities with
relationship components:

```rust
let a = world.spawn_empty().id();
let b = world.spawn(Parent(a)).id();

assert_eq!(world.entity(a).get::<Children>().unwrap(), &[b]);
```

There are also convenience APIs for spawning more than one entity with
the same relationship:

```rust
world.spawn_empty().with_related::<Children>(|s| {
    s.spawn_empty();
    s.spawn_empty();
})
```

The existing `with_children` API is now a simpler wrapper over
`with_related`. This makes this change largely non-breaking for existing
spawn patterns.

```rust
world.spawn_empty().with_children(|s| {
    s.spawn_empty();
    s.spawn_empty();
})
```

There are also other relationship APIs, such as `add_related` and
`despawn_related`.

## Automatic recursive despawn via the new on_despawn hook

`RelationshipSources` can opt-in to "despawn descendants" behavior,
which will despawn all related entities in the relationship hierarchy:

```rust
#[derive(RelationshipSources)]
#[relationship_sources(relationship = Parent, despawn_descendants)]
pub struct Children(Vec<Entity>);
```

This means that `entity.despawn_recursive()` is no longer required.
Instead, just use `entity.despawn()` and the relevant related entities
will also be despawned.

To despawn an entity _without_ despawning its parent/child descendants,
you should remove the `Children` component first, which will also remove
the related `Parent` components:

```rust
entity
    .remove::<Children>()
    .despawn()
```

This builds on the on_despawn hook introduced in this PR, which is fired
when an entity is despawned (before other hooks).

## Relationships are the source of truth

`Relationship` is the _single_ source of truth component.
`RelationshipSources` is merely a reflection of what all the
`Relationship` components say. By embracing this, we are able to
significantly improve the performance of the system as a whole. We can
rely on component lifecycles to protect us against duplicates, rather
than needing to scan at runtime to ensure entities don't already exist
(which results in quadratic runtime). A single source of truth gives us
constant-time inserts. This does mean that we cannot directly spawn
populated `Children` components (or directly add or remove entities from
those components). I personally think this is a worthwhile tradeoff,
both because it makes the performance much better _and_ because it means
there's exactly one way to do things (which is a philosophy we try to
employ for Bevy APIs).

As an aside: treating both sides of the relationship as "equivalent
source of truth relations" does enable building simple and flexible
many-to-many relationships. But this introduces an _inherent_ need to
scan (or hash) to protect against duplicates.
[`evergreen_relations`](https://github.com/EvergreenNest/evergreen_relations)
has a very nice implementation of the "symmetrical many-to-many"
approach. Unfortunately I think the performance issues inherent to that
approach make it a poor choice for Bevy's default relationship system.

## Followup Work

* Discuss renaming `Parent` to `ChildOf`. I refrained from doing that in
this PR to keep the diff reasonable, but I'm personally biased toward
this change (and using that naming pattern generally for relationships).
* [Improved spawning
ergonomics](https://github.com/bevyengine/bevy/discussions/16920)
* Consider adding relationship observers/triggers for "relationship
targets" whenever a source is added or removed. This would replace the
current "hierarchy events" system, which is unused upstream but may have
existing users downstream. I think triggers are the better fit for this
than a buffered event queue, and would prefer not to add that back.
* Fragmenting relations: My current idea hinges on the introduction of
"value components" (aka: components whose type _and_ value determines
their ComponentId, via something like Hashing / PartialEq). By labeling
a Relationship component such as `ChildOf(Entity)` as a "value
component", `ChildOf(e1)` and `ChildOf(e2)` would be considered
"different components". This makes the transition between fragmenting
and non-fragmenting a single flag, and everything else continues to work
as expected.
* Many-to-many support
* Non-fragmenting: We can expand Relationship to be a list of entities
instead of a single entity. I have largely already written the code for
this.
* Fragmenting: With the "value component" impl mentioned above, we get
many-to-many support "for free", as it would allow inserting multiple
copies of a Relationship component with different target entities.

Fixes #3742 (If this PR is merged, I think we should open more targeted
followup issues for the work above, with a fresh tracking issue free of
the large amount of less-directed historical context)
Fixes #17301
Fixes #12235 
Fixes #15299
Fixes #15308 

## Migration Guide

* Replace `ChildBuilder` with `ChildSpawnerCommands`.
* Replace calls to `.set_parent(parent_id)` with
`.insert(Parent(parent_id))`.
* Replace calls to `.replace_children()` with `.remove::<Children>()`
followed by `.add_children()`. Note that you'll need to manually despawn
any children that are not carried over.
* Replace calls to `.despawn_recursive()` with `.despawn()`.
* Replace calls to `.despawn_descendants()` with
`.despawn_related::<Children>()`.
* If you have any calls to `.despawn()` which depend on the children
being preserved, you'll need to remove the `Children` component first.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2025-01-18 22:20:30 +00:00
Patrick Walton
35101f3ed5
Use multi_draw_indirect_count where available, in preparation for two-phase occlusion culling. (#17211)
This commit allows Bevy to use `multi_draw_indirect_count` for drawing
meshes. The `multi_draw_indirect_count` feature works just like
`multi_draw_indirect`, but it takes the number of indirect parameters
from a GPU buffer rather than specifying it on the CPU.

Currently, the CPU constructs the list of indirect draw parameters with
the instance count for each batch set to zero, uploads the resulting
buffer to the GPU, and dispatches a compute shader that bumps the
instance count for each mesh that survives culling. Unfortunately, this
is inefficient when we support `multi_draw_indirect_count`. Draw
commands corresponding to meshes for which all instances were culled
will remain present in the list when calling
`multi_draw_indirect_count`, causing overhead. Proper use of
`multi_draw_indirect_count` requires eliminating these empty draw
commands.

To address this inefficiency, this PR makes Bevy fully construct the
indirect draw commands on the GPU instead of on the CPU. Instead of
writing instance counts to the draw command buffer, the mesh
preprocessing shader now writes them to a separate *indirect metadata
buffer*. A second compute dispatch known as the *build indirect
parameters* shader runs after mesh preprocessing and converts the
indirect draw metadata into actual indirect draw commands for the GPU.
The build indirect parameters shader operates on a batch at a time,
rather than an instance at a time, and as such each thread writes only 0
or 1 indirect draw parameters, simplifying the current logic in
`mesh_preprocessing`, which currently has to have special cases for the
first mesh in each batch. The build indirect parameters shader emits
draw commands in a tightly packed manner, enabling maximally efficient
use of `multi_draw_indirect_count`.

Along the way, this patch switches mesh preprocessing to dispatch one
compute invocation per render phase per view, instead of dispatching one
compute invocation per view. This is preparation for two-phase occlusion
culling, in which we will have two mesh preprocessing stages. In that
scenario, the first mesh preprocessing stage must only process opaque
and alpha tested objects, so the work items must be separated into those
that are opaque or alpha tested and those that aren't. Thus this PR
splits out the work items into a separate buffer for each phase. As this
patch rewrites so much of the mesh preprocessing infrastructure, it was
simpler to just fold the change into this patch instead of deferring it
to the forthcoming occlusion culling PR.

Finally, this patch changes mesh preprocessing so that it runs
separately for indexed and non-indexed meshes. This is because draw
commands for indexed and non-indexed meshes have different sizes and
layouts. *The existing code is actually broken for non-indexed meshes*,
as it attempts to overlay the indirect parameters for non-indexed meshes
on top of those for indexed meshes. Consequently, right now the
parameters will be read incorrectly when multiple non-indexed meshes are
multi-drawn together. *This is a bug fix* and, as with the change to
dispatch phases separately noted above, was easiest to include in this
patch as opposed to separately.

## Migration Guide

* Systems that add custom phase items now need to populate the indirect
drawing-related buffers. See the `specialized_mesh_pipeline` example for
an example of how this is done.
2025-01-14 21:19:20 +00:00
Patrick Walton
141b7673ab
Key render phases off the main world view entity, not the render world view entity. (#16942)
We won't be able to retain render phases from frame to frame if the keys
are unstable. It's not as simple as keying off the main world
entity, however, because some main world entities extract to multiple
render world entities. For example, directional lights extract to
multiple shadow cascades, and point lights extract to one view per
cubemap face. Therefore, we key off a new type, `RetainedViewEntity`,
which contains the main entity plus a *subview ID*.

This is part of the preparation for retained bins.

---------

Co-authored-by: ickshonpe <david.curthoys@googlemail.com>
2025-01-12 20:24:17 +00:00
MichiRecRoom
3742e621ef
Allow clippy::too_many_arguments to lint without warnings (#17249)
# Objective
Many instances of `clippy::too_many_arguments` linting happen to be on
systems - functions which we don't call manually, and thus there's not
much reason to worry about the argument count.

## Solution
Allow `clippy::too_many_arguments` globally, and remove all lint
attributes related to it.
2025-01-09 07:26:15 +00:00
MichiRecRoom
71cd5f813e
Fix up the reason given for a couple of too_many_arguments lints (#17251)
# Objective
In my crusade to give every lint attribute a reason, it appears I got
too complacent and copy-pasted this expect onto non-system functions.

## Solution
Fix up the reason on those non-system functions

## Testing
N/A
2025-01-09 02:39:10 +00:00
MichiRecRoom
27802e6975
bevy_render: Apply #![deny(clippy::allow_attributes, clippy::allow_attributes_without_reason)] (#17194)
# Objective
- https://github.com/bevyengine/bevy/issues/17111

## Solution
Set the `clippy::allow_attributes` and
`clippy::allow_attributes_without_reason` lints to `deny`, and bring
`bevy_render` in line with the new restrictions.
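
A minimal sketch of what this configuration means in practice (the helper
function is invented for illustration):

```rust
// Crate-level configuration, e.g. at the top of lib.rs:
#![deny(clippy::allow_attributes, clippy::allow_attributes_without_reason)]

// A bare `#[allow(dead_code)]` would now be denied; lint expectations must
// instead use `expect` and carry a reason.
#[expect(dead_code, reason = "illustrative helper, not called anywhere")]
fn unused_helper() {}
```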

## Testing
`cargo clippy` and `cargo test --package bevy_render` were run, and no
errors were encountered.
2025-01-06 23:10:58 +00:00
Zachary Harrold
a371ee3019
Remove tracing re-export from bevy_utils (#17161)
# Objective

- Contributes to #11478

## Solution

- Made `bevy_utils::tracing` `doc(hidden)`
- Re-exported `tracing` from `bevy_log` for end-users
- Added `tracing` directly to crates that need it.

## Testing

- CI

---

## Migration Guide

If you were importing `tracing` via `bevy::utils::tracing`, instead use
`bevy::log::tracing`. Note that many items within `tracing` are also
directly re-exported from `bevy::log` as well, so you may only need
`bevy::log` for the most common items (e.g., `warn!`, `trace!`, etc.).
This also applies to the `log_once!` family of macros.
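
For example:

```rust
// Before
use bevy::utils::tracing::{debug, info_span};

// After
use bevy::log::tracing::{debug, info_span};
// (Many common items, e.g. `debug!` and `info_span!`, are also re-exported
// directly from `bevy::log`.)
```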

## Notes

- While this doesn't reduce the line-count in `bevy_utils`, it further
decouples the internal crates from `bevy_utils`, making its eventual
removal more feasible in the future.
- I have just imported `tracing` as we do for all dependencies. However,
a workspace dependency may be more appropriate for version management.
2025-01-05 23:06:34 +00:00