# Objective

- Contributes to #15460

## Solution

- Added the following features:
  - `std` (default)
  - `async_executor` (default)
  - `edge_executor`
  - `critical-section`
  - `portable-atomic`
- Gated `tracing` in `bevy_utils` to allow compilation on certain platforms
- Switched from `tracing` to `log` for simple message logging within `bevy_ecs`. Note that `tracing` supports capturing from `log`, so this should be an uncontroversial change.
- Fixed imports and added feature gates as required
- Made `bevy_tasks` optional within `bevy_ecs`. It turns out it is only needed for parallel operations, which are already gated behind `multi_threaded` anyway.

## Testing

- Added to the `compile-check-no-std` CI command:
  - `cargo check -p bevy_ecs --no-default-features --features edge_executor,critical-section,portable-atomic --target thumbv6m-none-eabi`
  - `cargo check -p bevy_ecs --no-default-features --features edge_executor,critical-section`
  - `cargo check -p bevy_ecs --no-default-features`

## Draft Release Notes

Bevy's core ECS now supports `no_std` platforms.

In prior versions of Bevy, it was not possible to work with embedded or niche platforms due to our reliance on the standard library, `std`. This blocked a number of novel use cases for Bevy, such as an embedded database for IoT devices, or creating games for retro consoles.

With this release, `bevy_ecs` no longer requires `std`. To use Bevy on a `no_std` platform, you must disable default features and enable the new `edge_executor` and `critical-section` features. You may also need to enable `portable-atomic` and `critical-section` if your platform does not natively support all of the atomic types and operations used by Bevy.
```toml
[dependencies]
bevy_ecs = { version = "0.16", default-features = false, features = [
    # Required for platforms with incomplete atomics (e.g., Raspberry Pi Pico)
    "portable-atomic",
    "critical-section",

    # Optional
    "bevy_reflect",
    "serialize",
    "bevy_debug_stepping",
    "edge_executor",
] }
```

Currently, this has been tested on bare-metal x86 and the Raspberry Pi Pico. If you have trouble using `bevy_ecs` on a particular platform, please reach out either through a GitHub issue or in the `no_std` working group on the Bevy Discord server.

Keep an eye out for future `no_std` updates as we continue to improve parity between `std` and `no_std`. We look forward to seeing what kinds of applications are now possible with Bevy!

## Notes

- Creating the PR in draft to ensure CI is passing before requesting reviews.
- This implementation has no support for multithreading in `no_std`, in particular because `NonSend` would be unsound if allowed with multithreading: we cannot check the `ThreadId` in `no_std`, so we have no mechanism to determine at runtime whether an access is sound.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Vic <59878206+Victoronz@users.noreply.github.com>
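The `ThreadId` check mentioned in the notes is the mechanism `std` builds use to make `NonSend` access sound at runtime. A minimal sketch of that pattern in plain `std` Rust (the `NonSendCell` name and API are hypothetical, not Bevy's actual implementation):

```rust
use std::thread::{self, ThreadId};

/// Hypothetical wrapper that only permits access from the thread that
/// created it, enforced at runtime by comparing `ThreadId`s. This is the
/// check that is unavailable in `no_std`.
struct NonSendCell<T> {
    value: T,
    owner: ThreadId,
}

// SAFETY (sketch only): sharing references across threads is tolerable
// here because `get` refuses access from any thread other than the owner.
unsafe impl<T> Sync for NonSendCell<T> {}

impl<T> NonSendCell<T> {
    fn new(value: T) -> Self {
        Self {
            value,
            owner: thread::current().id(),
        }
    }

    /// Returns `Some(&T)` only on the owning thread, `None` elsewhere.
    fn get(&self) -> Option<&T> {
        (thread::current().id() == self.owner).then(|| &self.value)
    }
}

fn main() {
    let cell = NonSendCell::new(42);
    // Access from the creating thread succeeds.
    assert_eq!(cell.get(), Some(&42));

    // Access from any other thread is denied at runtime.
    thread::scope(|s| {
        s.spawn(|| {
            assert_eq!(cell.get(), None);
        });
    });
}
```

Without `std::thread::current().id()` there is no portable way to perform this comparison, which is why the `no_std` configuration simply rules out multithreading instead.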
280 lines
8.8 KiB
Rust
```rust
use crate as bevy_ecs;
#[cfg(feature = "multi_threaded")]
use bevy_ecs::batching::BatchingStrategy;
use bevy_ecs::event::{Event, EventCursor, EventId, EventInstance, Events};
use core::{iter::Chain, slice::IterMut};

/// An iterator that yields any unread events from an [`EventMutator`] or [`EventCursor`].
///
/// [`EventMutator`]: super::EventMutator
#[derive(Debug)]
pub struct EventMutIterator<'a, E: Event> {
    iter: EventMutIteratorWithId<'a, E>,
}

impl<'a, E: Event> Iterator for EventMutIterator<'a, E> {
    type Item = &'a mut E;
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|(event, _)| event)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }

    fn count(self) -> usize {
        self.iter.count()
    }

    fn last(self) -> Option<Self::Item>
    where
        Self: Sized,
    {
        self.iter.last().map(|(event, _)| event)
    }

    fn nth(&mut self, n: usize) -> Option<Self::Item> {
        self.iter.nth(n).map(|(event, _)| event)
    }
}

impl<'a, E: Event> ExactSizeIterator for EventMutIterator<'a, E> {
    fn len(&self) -> usize {
        self.iter.len()
    }
}

/// An iterator that yields any unread events (and their IDs) from an [`EventMutator`] or [`EventCursor`].
///
/// [`EventMutator`]: super::EventMutator
#[derive(Debug)]
pub struct EventMutIteratorWithId<'a, E: Event> {
    mutator: &'a mut EventCursor<E>,
    chain: Chain<IterMut<'a, EventInstance<E>>, IterMut<'a, EventInstance<E>>>,
    unread: usize,
}

impl<'a, E: Event> EventMutIteratorWithId<'a, E> {
    /// Creates a new iterator that yields any `events` that have not yet been seen by `mutator`.
    pub fn new(mutator: &'a mut EventCursor<E>, events: &'a mut Events<E>) -> Self {
        let a_index = mutator
            .last_event_count
            .saturating_sub(events.events_a.start_event_count);
        let b_index = mutator
            .last_event_count
            .saturating_sub(events.events_b.start_event_count);
        let a = events.events_a.get_mut(a_index..).unwrap_or_default();
        let b = events.events_b.get_mut(b_index..).unwrap_or_default();

        let unread_count = a.len() + b.len();

        mutator.last_event_count = events.event_count - unread_count;
        // Iterate the oldest first, then the newer events
        let chain = a.iter_mut().chain(b.iter_mut());

        Self {
            mutator,
            chain,
            unread: unread_count,
        }
    }

    /// Iterate over only the events.
    pub fn without_id(self) -> EventMutIterator<'a, E> {
        EventMutIterator { iter: self }
    }
}

impl<'a, E: Event> Iterator for EventMutIteratorWithId<'a, E> {
    type Item = (&'a mut E, EventId<E>);
    fn next(&mut self) -> Option<Self::Item> {
        match self
            .chain
            .next()
            .map(|instance| (&mut instance.event, instance.event_id))
        {
            Some(item) => {
                #[cfg(feature = "detailed_trace")]
                log::trace!("EventMutator::iter() -> {}", item.1);
                self.mutator.last_event_count += 1;
                self.unread -= 1;
                Some(item)
            }
            None => None,
        }
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.chain.size_hint()
    }

    fn count(self) -> usize {
        self.mutator.last_event_count += self.unread;
        self.unread
    }

    fn last(self) -> Option<Self::Item>
    where
        Self: Sized,
    {
        let EventInstance { event_id, event } = self.chain.last()?;
        self.mutator.last_event_count += self.unread;
        Some((event, *event_id))
    }

    fn nth(&mut self, n: usize) -> Option<Self::Item> {
        if let Some(EventInstance { event_id, event }) = self.chain.nth(n) {
            self.mutator.last_event_count += n + 1;
            self.unread -= n + 1;
            Some((event, *event_id))
        } else {
            self.mutator.last_event_count += self.unread;
            self.unread = 0;
            None
        }
    }
}

impl<'a, E: Event> ExactSizeIterator for EventMutIteratorWithId<'a, E> {
    fn len(&self) -> usize {
        self.unread
    }
}

/// A parallel iterator over `Event`s.
#[derive(Debug)]
#[cfg(feature = "multi_threaded")]
pub struct EventMutParIter<'a, E: Event> {
    mutator: &'a mut EventCursor<E>,
    slices: [&'a mut [EventInstance<E>]; 2],
    batching_strategy: BatchingStrategy,
    unread: usize,
}

#[cfg(feature = "multi_threaded")]
impl<'a, E: Event> EventMutParIter<'a, E> {
    /// Creates a new parallel iterator over `events` that have not yet been seen by `mutator`.
    pub fn new(mutator: &'a mut EventCursor<E>, events: &'a mut Events<E>) -> Self {
        let a_index = mutator
            .last_event_count
            .saturating_sub(events.events_a.start_event_count);
        let b_index = mutator
            .last_event_count
            .saturating_sub(events.events_b.start_event_count);
        let a = events.events_a.get_mut(a_index..).unwrap_or_default();
        let b = events.events_b.get_mut(b_index..).unwrap_or_default();

        let unread_count = a.len() + b.len();
        mutator.last_event_count = events.event_count - unread_count;

        Self {
            mutator,
            slices: [a, b],
            batching_strategy: BatchingStrategy::default(),
            unread: unread_count,
        }
    }

    /// Changes the batching strategy used when iterating.
    ///
    /// For more information on how this affects the resultant iteration, see
    /// [`BatchingStrategy`].
    pub fn batching_strategy(mut self, strategy: BatchingStrategy) -> Self {
        self.batching_strategy = strategy;
        self
    }

    /// Runs the provided closure for each unread event in parallel.
    ///
    /// Unlike normal iteration, the event order is not guaranteed in any form.
    ///
    /// # Panics
    /// If the [`ComputeTaskPool`] is not initialized. If using this from an event reader that is being
    /// initialized and run from the ECS scheduler, this should never panic.
    ///
    /// [`ComputeTaskPool`]: bevy_tasks::ComputeTaskPool
    pub fn for_each<FN: Fn(&'a mut E) + Send + Sync + Clone>(self, func: FN) {
        self.for_each_with_id(move |e, _| func(e));
    }

    /// Runs the provided closure for each unread event in parallel, like [`for_each`](Self::for_each),
    /// but additionally provides the `EventId` to the closure.
    ///
    /// Note that the order of iteration is not guaranteed, but `EventId`s are ordered by send order.
    ///
    /// # Panics
    /// If the [`ComputeTaskPool`] is not initialized. If using this from an event reader that is being
    /// initialized and run from the ECS scheduler, this should never panic.
    ///
    /// [`ComputeTaskPool`]: bevy_tasks::ComputeTaskPool
    pub fn for_each_with_id<FN: Fn(&'a mut E, EventId<E>) + Send + Sync + Clone>(
        mut self,
        func: FN,
    ) {
        #[cfg(target_arch = "wasm32")]
        {
            self.into_iter().for_each(|(e, i)| func(e, i));
        }

        #[cfg(not(target_arch = "wasm32"))]
        {
            let pool = bevy_tasks::ComputeTaskPool::get();
            let thread_count = pool.thread_num();
            if thread_count <= 1 {
                return self.into_iter().for_each(|(e, i)| func(e, i));
            }

            let batch_size = self
                .batching_strategy
                .calc_batch_size(|| self.len(), thread_count);
            let chunks = self.slices.map(|s| s.chunks_mut(batch_size));

            pool.scope(|scope| {
                for batch in chunks.into_iter().flatten() {
                    let func = func.clone();
                    scope.spawn(async move {
                        for event in batch {
                            func(&mut event.event, event.event_id);
                        }
                    });
                }
            });

            // Events are guaranteed to be read at this point.
            self.mutator.last_event_count += self.unread;
            self.unread = 0;
        }
    }

    /// Returns the number of [`Event`]s to be iterated.
    pub fn len(&self) -> usize {
        self.slices.iter().map(|s| s.len()).sum()
    }

    /// Returns [`true`] if there are no events remaining in this iterator.
    pub fn is_empty(&self) -> bool {
        self.slices.iter().all(|x| x.is_empty())
    }
}

#[cfg(feature = "multi_threaded")]
impl<'a, E: Event> IntoIterator for EventMutParIter<'a, E> {
    type IntoIter = EventMutIteratorWithId<'a, E>;
    type Item = <Self::IntoIter as Iterator>::Item;

    fn into_iter(self) -> Self::IntoIter {
        let EventMutParIter {
            mutator: reader,
            slices: [a, b],
            ..
        } = self;
        let unread = a.len() + b.len();
        let chain = a.iter_mut().chain(b);
        EventMutIteratorWithId {
            mutator: reader,
            chain,
            unread,
        }
    }
}
```
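The index arithmetic in `EventMutIteratorWithId::new` relies on each of the two event buffers recording the global event count at which it starts. A self-contained sketch of that double-buffer cursor math (the `Buffer` type and `unread` helper are simplified stand-ins, not Bevy's actual structs):

```rust
// Simplified double-buffer event storage: `start_count` records the global
// event count at the start of the buffer, mirroring `start_event_count`.
struct Buffer {
    start_count: usize,
    events: Vec<u32>,
}

/// Returns all events the cursor (`last_event_count`) has not yet seen,
/// oldest buffer first, then the newer one.
fn unread<'a>(last_event_count: usize, a: &'a Buffer, b: &'a Buffer) -> Vec<&'a u32> {
    // Saturating subtraction: a cursor older than the buffer start yields
    // index 0, i.e. the whole buffer is unread.
    let a_index = last_event_count.saturating_sub(a.start_count);
    let b_index = last_event_count.saturating_sub(b.start_count);
    // An out-of-range index makes `get(..)` return `None`, and the default
    // empty slice stands in, matching the `unwrap_or_default` calls above.
    let a_slice = a.events.get(a_index..).unwrap_or_default();
    let b_slice = b.events.get(b_index..).unwrap_or_default();
    a_slice.iter().chain(b_slice).collect()
}

fn main() {
    // Buffer `a` holds global events 0..3, buffer `b` holds events 3..5.
    let a = Buffer { start_count: 0, events: vec![10, 11, 12] };
    let b = Buffer { start_count: 3, events: vec![13, 14] };

    // A cursor that has already seen the first two events reads the rest:
    // one remaining event from `a`, then everything in `b`.
    let rest: Vec<u32> = unread(2, &a, &b).into_iter().copied().collect();
    assert_eq!(rest, vec![12, 13, 14]);
}
```

Chaining the tail of the older buffer with the newer buffer is what lets the real iterator hand out every unread event in send order without copying.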