A refreshingly simple data-driven game engine built in Rust

Overview

Bevy


What is Bevy?

Bevy is a refreshingly simple data-driven game engine built in Rust. It is free and open-source forever!

WARNING

Bevy is still in the very early stages of development. APIs can and will change (now is the time to make suggestions!). Important features are missing. Documentation is sparse. Please don't build any serious projects in Bevy unless you are prepared to deal with constant breaking API changes.

Design Goals

  • Capable: Offer a complete 2D and 3D feature set
  • Simple: Easy for newbies to pick up, but infinitely flexible for power users
  • Data Focused: Data-oriented architecture using the Entity Component System paradigm
  • Modular: Use only what you need. Replace what you don't like
  • Fast: App logic should run quickly, and when possible, in parallel
  • Productive: Changes should compile quickly ... waiting isn't fun

About

Docs

Community

Before contributing or participating in discussions with the community, you should familiarize yourself with our Code of Conduct and How to Contribute.

Getting Started

We recommend checking out The Bevy Book for a full tutorial.

Follow the Setup guide to ensure your development environment is set up correctly. Once set up, you can quickly try out the examples by cloning this repo and running the following commands:

# Switch to the correct version (latest release, default is main development branch)
git checkout latest
# Runs the "breakout" example
cargo run --example breakout

Fast Compiles

Bevy can be built just fine using the default configuration on stable Rust. However, for really fast iterative compiles, you should enable the "fast compiles" setup by following the instructions here.

Focus Areas

Bevy has the following Focus Areas. We are currently focusing our development efforts in these areas, and they will receive priority for Bevy developers' time. If you would like to contribute to Bevy, you are heavily encouraged to join in on these efforts:

Editor-Ready UI

PBR / Clustered Forward Rendering

Scenes

Libraries Used

Bevy is only possible because of the hard work put into these foundational technologies:

  • wgpu-rs: modern / low-level / cross-platform graphics library inspired by Vulkan
  • glam-rs: a simple and fast 3D math library for games and graphics
  • winit: cross-platform window creation and management in Rust
  • spirv-reflect: Reflection API in rust for SPIR-V shader byte code

Bevy Cargo Features

This list outlines the different cargo features supported by Bevy. These allow you to customize the Bevy feature set for your use-case.
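As an illustrative sketch of how feature selection works (the version number and feature names below are assumptions; check the feature list for your Bevy release), you can disable the default features in your Cargo.toml and opt back in to only what you need:

```toml
# Cargo.toml (hypothetical selection; verify feature names against your Bevy version)
[dependencies]
bevy = { version = "0.4", default-features = false, features = [
    "bevy_winit", # window creation backend
    "render",     # core rendering plugins
    "png",        # PNG texture support
] }
```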

Third Party Plugins

Third-party plugins are very welcome as a way to extend Bevy's features. Guidelines are available to help with integration and usage.

Thanks and Alternatives

Additionally, we would like to thank the Amethyst, macroquad, coffee, ggez, rg3d, and Piston projects for providing solid examples of game engine development in Rust. If you are looking for a Rust game engine, it is worth considering all of your options. Each engine has different design goals, and some will likely resonate with you more than others.

Issues
  • Relicense Bevy under dual MIT/Apache-2.0


    What?

    We would like to relicense Bevy under the "dual MIT / Apache-2.0 license". This allows users to select either license according to their own preferences. There are Very Good Reasons for this (see the "Why?" section below). However I can't just arbitrarily relicense open source code, despite being the maintainer / lead developer. Bevy has hundreds of contributors and they all agreed to license their contributions exclusively under our current instance of the MIT license.

    If you are mentioned in this issue, we need your help to make this happen

    To agree to this relicense, please read the details in this issue, then leave a comment with the following message:

    I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to choose either at their option.
    

    If you disagree, please respond with your reasoning (just don't expect us to change course at this point). Anyone who doesn't agree to the relicense will have any Bevy contributions that qualify as "copyrightable" removed or re-implemented.

    Why?

    I originally chose to license Bevy exclusively under MIT for a variety of reasons:

    1. People and companies generally know and trust the MIT license more than any other license. Apache 2.0 is less known and trusted.
    2. It is short and easy to understand
    3. Many people aren't familiar with the "multiple license options ... choose your favorite" approach. I didn't want to scare people away unnecessarily.
    4. Other open source engines like Godot have had a lot of success with MIT-only licensing

    However there were a variety of issues that have come up that make dual-licensing Bevy under both MIT and Apache-2.0 compelling:

    1. The MIT license (arguably) requires binaries to reproduce countless copies of the same license boilerplate for every MIT library in use. MIT-only engines like Godot have complicated license compliance rules as a result
    2. The Apache-2.0 license has protections from patent trolls and an explicit contribution licensing clause.
    3. The Rust ecosystem is largely Apache-2.0. Being available under that license is good for interoperation and opens the doors to upstreaming Bevy code into other projects (Rust, the async ecosystem, etc).
    4. The Apache license is incompatible with GPLv2, but MIT is compatible.

    Additionally, Bevy's current MIT license includes the Copyright (c) 2020 Carter Anderson line. I don't want to force Bevy users to credit me (and no one else) for the rest of time. If you agree to the relicense, you also agree to allow us to remove this copyright line crediting me.

    What will this look like?

    After getting explicit approval from each contributor of copyrightable work (as not all contributions qualify for copyright, due to not being a "creative work", e.g. a typo fix), we will add the following text to our README:

    ### License
    
    Licensed under either of
    
     * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
     * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
    
    at your option.
    
    ### Contribution
    
    Unless you explicitly state otherwise, any contribution intentionally submitted
    for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
    additional terms or conditions.
    

    We will add LICENSE-{MIT,APACHE} files containing the text of each license. We will also update the license metadata in our Cargo.toml to:

    license = "MIT OR Apache-2.0"

    Contributor checkoff

    • [x] @0x22fe
    • [x] @8bit-pudding
    • [x] @aclysma
    • [x] @adam-bates
    • [x] @Adamaq01
    • [x] @aevyrie
    • [x] @ak-1
    • [x] @akiross
    • [x] @Alainx277
    • [x] @alec-deason
    • [x] @alexb910
    • [x] @alexschrod
    • [x] @alice-i-cecile
    • [x] @alilee
    • [x] @AlisCode
    • [x] @aloucks
    • [x] @amberkowalski
    • [x] @AmionSky
    • [x] @anchpop
    • [x] @andoco
    • [x] @andreheringer
    • [x] @andrewhickman
    • [x] @AngelicosPhosphoros
    • [x] @AngelOnFira
    • [x] @Archina
    • [x] @ashneverdawn
    • [x] @BafDyce
    • [x] @bgourlie
    • [x] @BimDav
    • [x] @bitshifter
    • [x] @bjorn3
    • [x] @blaind
    • [x] @blamelessgames
    • [x] @blunted2night
    • [x] @Bobox214
    • [x] @Boiethios
    • [x] @bonsairobo
    • [x] @BoxyUwU
    • [x] @Byteron
    • [x] @CAD97
    • [x] @CaelumLaron
    • [x] @caelunshun
    • [x] @cart
    • [x] @CGMossa
    • [x] @CleanCut
    • [x] @ColdIce1605
    • [x] @Cupnfish
    • [x] @dallenng
    • [x] @Dash-L
    • [x] @Davier
    • [x] @dburrows0
    • [x] @dependabot[bot]
    • [x] @deprilula28
    • [x] @DGriffin91
    • [x] @dintho
    • [x] @Dispersia
    • [x] @DivineGod
    • [x] @Divoolej
    • [x] @DJMcNab
    • [x] @e00E
    • [x] @easynam
    • [x] @ElArtista
    • [x] @eliaspekkala
    • [x] @enfipy
    • [x] @figsoda
    • [x] @fintelia
    • [x] @Fishrock123
    • [x] @FlyingRatBull
    • [x] @forbjok
    • [x] @frewsxcv
    • [x] @freylint
    • [x] @Frizi
    • [x] @FSMaxB
    • [x] @FuriouZz
    • [x] @GabLotus
    • [x] @gcoakes
    • [x] @gfreezy
    • [x] @Git0Shuai
    • [x] @giusdp
    • [x] @GrantMoyer
    • [x] @Gregoor
    • [x] @Grindv1k
    • [x] @guimcaballero
    • [x] @Halfwhit
    • [x] @Havvy
    • [x] @HackerFoo
    • [x] @Hugheth
    • [x] @huhlig
    • [x] @hymm
    • [x] @ifletsomeclaire
    • [x] @iMplode-nZ
    • [x] @Incipium
    • [x] @iwikal
    • [x] @Ixentus
    • [x] @J-F-Liu
    • [x] @jacobgardner
    • [x] @jak6jak
    • [x] @jakerr
    • [x] @jakobhellermann
    • [x] @jamadazi
    • [x] @Jbat1Jumper
    • [x] @JCapucho
    • [x] @jcornaz
    • [x] @Jerald
    • [x] @jesseviikari
    • [x] @jihiggins
    • [x] @jleflang
    • [x] @jngbsn
    • [x] @joejoepie
    • [x] @JohnDoneth
    • [x] @Josh015
    • [x] @joshuajbouw
    • [x] @julhe
    • [x] @kaflu
    • [x] @karroffel
    • [x] @kedodrill
    • [x] @kokounet
    • [x] @Kurble
    • [x] @lachlansneff
    • [x] @lambdagolem
    • [x] @lassade
    • [x] @lberrymage
    • [x] @lee-orr
    • [x] @logannc
    • [x] @Lowentwickler
    • [x] @lukors
    • [x] @Lythenas
    • [x] @M2WZ
    • [x] @marcusbuffett
    • [x] @MarekLg
    • [x] @marius851000
    • [x] @MatteoGgl
    • [x] @maxwellodri
    • [x] @mccludav
    • [x] @memoryruins
    • [x] @mfrancis107
    • [x] @MGlolenstine
    • [x] @MichaelHills
    • [x] @MilanVasko
    • [x] @MinerSebas
    • [x] @mjhostet
    • [x] @mkhan45
    • [x] @mnmaita
    • [x] @mockersf
    • [x] @Moxinilian
    • [x] @MrEmanuel
    • [x] @mrk-its
    • [x] @msklywenn
    • [x] @mtsr
    • [x] @multun
    • [x] @mvlabat
    • [x] @naithar
    • [x] @NathanSWard
    • [x] @navaati
    • [x] @ncallaway
    • [x] @ndarilek
    • [x] @Nazariglez
    • [x] @Neo-Zhixing
    • [x] @nic96
    • [x] @NiklasEi
    • [x] @Nilirad
    • [x] @no1hitjam
    • [x] @notsimon
    • [x] @nside
    • [x] @ocornoc
    • [x] @Olaren15
    • [x] @OptimisticPeach
    • [x] @payload
    • [x] @Philipp-M
    • [x] @Plecra
    • [x] @PrototypeNM1
    • [x] @r00ster91
    • [x] @Ratysz
    • [x] @RedlineTriad
    • [x] @refnil
    • [x] @reidbhuntley
    • [x] @Restioson
    • [x] @RichoDemus
    • [x] @rmsc
    • [x] @rmsthebest
    • [x] @RobDavenport
    • [x] @robertwayne
    • [x] @rod-salazar
    • [x] @rparrett
    • [x] @ryanleecode
    • [x] @sapir
    • [x] @saveriomiroddi
    • [x] @sburris0
    • [x] @schell
    • [x] @sdfgeoff
    • [x] @ShadowMitia
    • [x] @simensgreen
    • [x] @simlay
    • [x] @simpuid
    • [x] @SmiteWindows
    • [x] @smokku
    • [x] @StarArawn
    • [x] @stefee
    • [x] @superdump
    • [x] @SvenTS
    • [x] @sY9sE33
    • [x] @szunami
    • [x] @tangmi
    • [x] @tarkah
    • [x] @TehPers
    • [x] @Telzhaak
    • [x] @termhn
    • [x] @tiagolam
    • [x] @the-notable
    • [x] @thebluefish
    • [x] @TheNeikos
    • [x] @TheRawMeatball
    • [x] @therealstork
    • [x] @thirdsgames
    • [x] @Tiagojdferreira
    • [x] @tigregalis
    • [x] @Toniman20
    • [x] @toothbrush7777777
    • [x] @TotalKrill
    • [x] @tristanpemble
    • [x] @trolleyman
    • [x] @turboMaCk
    • [x] @TypicalFork
    • [x] @undinococo
    • [x] @verzuz
    • [x] @Veykril
    • [x] @vgel
    • [x] @VitalyAnkh
    • [x] @w1th0utnam3
    • [x] @W4RH4WK
    • [x] @Waridley
    • [x] @Weibye
    • [x] @Weibye-Breach
    • [x] @wilk10
    • [x] @will-hart
    • [x] @willcrichton
    • [x] @WilliamTCarroll
    • [x] @woubuc
    • [x] @wyhaya
    • [x] @Xavientois
    • [x] @YohDeadfall
    • [x] @yrns
    • [x] @zaszi
    • [x] @zgotsch
    • [x] @zicklag
    • [x] @Zooce

    Contributors with "obsolete" changes (no need for approval)

    • adekau
    • ColonisationCaptain
    • temhotaokeaha

    Contributors with "trivial" changes that are ok to keep

    • follower
    • HyperLightKitsune
    • liufuyang
    • Raymond26
    • themilkybit
    • walterpie
    • EthanYidong

    Contributors with changes we reverted to unblock the relicense

    • TomBebb
    C-Enhancement A-Meta 
    opened by cart 278
  • Schedule v2


    Draft Note

    This is a draft because I'm looking for feedback on this api. Please let me know if you can think of improvements or gaps.

    Bevy Schedule V2

    Bevy's old Schedule was simple, easy to read, and easy to use. But it also had significant limitations:

    • Only one Schedule allowed
    • Very static: you are limited to using the tools we gave you (stages are lists of systems, you can add stages to schedules)
    • Couldn't switch between schedules at runtime
    • Couldn't easily support "fixed timestep" scenarios

    V2 of Bevy Schedule aims to solve these problems while still maintaining the ergonomics we all love:

    Stage Trait

    Stage is now a trait. You can implement your own stage types!

    struct MyStage;
    
    impl Stage for MyStage {
        fn run(&mut self, world: &mut World, resources: &mut Resources) {
            // your custom stage logic goes here
        }
    }
    

    There are now multiple built in Stage types:

    SystemStage

    // runs systems in parallel
    let parallel_stage =
        SystemStage::parallel()
            .with_system(a)
            .with_system(b);
    
    // runs systems serially (in registration order)
    let serial_stage =
        SystemStage::serial()
            .with_system(a)
            .with_system(b);
    
    // you can also write your own custom SystemStageExecutor
    let custom_executor_stage =
        SystemStage::new(MyCustomExecutor::new())
            .with_system(a)
            .with_system(b);
    

    StateStage<T>

    Bevy now supports states. More on this below!

    Schedule

    You read that right! Schedules are also stages, which means you can nest Schedules

    let mut schedule = Schedule::default()
        .with_stage("update", SystemStage::parallel()
            .with_system(a)
            .with_system(b)
        )
        .with_stage("nested",
            Schedule::default()
                .with_stage("nested_stage", SystemStage::serial()
                    .with_system(b)
                )
        );
    
    // schedule stages can be downcast
    let mut update_stage = schedule.get_stage_mut::<SystemStage>("update").unwrap();
    update_stage.add_system(something_new);
    

    States

    By popular demand, we now support States!

    • Each state value has its own "enter", "update", and "exit" schedule
    • You can queue up state changes from any system
    • When a StateStage runs, it will dequeue all state changes and run through each state's lifecycle
    • If at the end of a StateStage, new states have been queued, they will immediately be applied. This means moving between states will not be delayed across frames.

    The new state.rs example is the best illustration of this feature. It shows how to transition between a Menu state and an InGame state. The texture_atlas.rs example has been adapted to use states to transition between a Loading state and a Finished state.

    This enables much more elaborate app models:

    #[derive(Clone, PartialEq, Eq, Hash)]
    enum AppState {
        Loading,
        InGame,
    } 
    
    App::build()
        // This initializes the state (adds the State<AppState> resource and adds StateStage<T> to the schedule)
        // State stages are added right after UPDATE by default, but you can also manually add StateStage<T> anywhere
        .add_state(AppState::Loading)
        // A state's "enter" schedule is run once when the state is entered
        .state_enter(AppState::Loading, SystemStage::parallel()
            .with_system(setup)     
            .with_system(load_textures)     
        )
        // A state's "update" schedule is run once on every app update
        // Note: Systems implement IntoStage, so you can do this:
        .state_update(AppState::Loading, check_asset_loads)
        // A state's "exit" schedule is run once when the state is exited 
        .state_exit(AppState::Loading, setup_world)
        .state_update(AppState::InGame, SystemStage::parallel()
            .with_system(movement)
            .with_system(collision)
        )
        // You can of course still compose your schedule "normally"
        .add_system(do_thing)
        // add_system_to_stage assumes that the stage is a SystemStage
        .add_system_to_stage(stage::POST_UPDATE, do_another_thing)
    
    // this system checks to see if assets are loaded and transitions to the InGame state when they are finished 
    fn check_asset_loads(state: Res<State<AppState>>, asset_server: Res<AssetServer>) {
        if assets_finished_loading(&asset_server) {
            // state changes are put into a queue, which the StateStage consumes during execution
            state.queue(AppState::InGame)
        }
    }
    
    fn setup_world(commands: &mut Commands, state: Res<State<AppState>>, textures: Res<Assets<Textures>>) {
        // This system only runs after check_asset_loads has checked that all assets have loaded
        // This means we can now freely access asset data
        let texture = textures.get(SOME_HANDLE).unwrap();
    
        commands
            .spawn(Camera2dBundle::default())
            // spawn more things here
            .spawn(SpriteBundle::default());
    }
    

    Run Criteria

    Criteria driven stages (and schedules): only run stages or schedules when a certain criteria is met.

    app
        .add_stage_after(stage::UPDATE, "only_on_10_stage", SystemStage::parallel()
            .with_run_criteria(|value: Res<usize>| if *value == 10 { ShouldRun::Yes } else { ShouldRun::No } )
            .with_system(a)
        )
        .add_stage_after(stage::RUN_ONCE, "one_and_done", Schedule::default()
            .with_run_criteria(RunOnce::default())
            .with_system(a)
        )
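Outside of Bevy's API, the run-criteria idea reduces to a function that inspects some state and returns a yes/no answer before the stage's systems execute. A minimal, Bevy-independent sketch (all names here are made up for illustration):

```rust
// Hypothetical, Bevy-independent sketch of the run-criteria concept:
// a criteria is just a function from some state to ShouldRun.
#[derive(Debug, PartialEq)]
enum ShouldRun {
    Yes,
    No,
}

fn only_on_10(value: usize) -> ShouldRun {
    if value == 10 { ShouldRun::Yes } else { ShouldRun::No }
}

// A stage runner that consults the criteria before executing its systems.
fn run_stage(value: usize, log: &mut Vec<&'static str>) -> bool {
    match only_on_10(value) {
        ShouldRun::Yes => {
            log.push("a"); // stand-in for running system `a`
            true
        }
        ShouldRun::No => false,
    }
}

fn main() {
    let mut log = Vec::new();
    assert!(!run_stage(9, &mut log)); // criteria says No: stage skipped
    assert!(run_stage(10, &mut log)); // criteria says Yes: systems run
    assert_eq!(log, vec!["a"]);
}
```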
    

    Fixed Timestep:

    app.add_stage_after(stage::UPDATE, "fixed_update", SystemStage::parallel()
        .with_run_criteria(FixedTimestep::steps_per_second(40.0))
        .with_system(a)
    )
    

    Schedule Building

    Adding stages now takes a Stage value:

    App::build()
        .add_stage_after(stage::UPDATE, "my_stage", SystemStage::parallel())
    

    Typed stage building with nesting:

    app
        .stage("my_stage", |my_stage: &mut Schedule|
            my_stage
                .add_stage_after("substage1", "substage2", SystemStage::parallel()
                    .with_system(some_system)
                )
                .add_system_to_stage("substage2", some_other_system)
                .stage("a_2", |a_2: &mut SystemStage| 
                    a_2.add_stage("a_2_1", StateStage::<MyState>::default())
                )
        )
        .add_stage("b", SystemStage::serial())
    

    Unified Schedule

    No more separate "startup" schedule! It has been moved into the main schedule with a RunOnce criteria

    startup_stage::STARTUP (and variants) have been removed in favor of this:

    app
        // this:
        .add_startup_system(setup)
        // is equivalent to this:
        .stage(stage::STARTUP, |startup: &mut Schedule| {
            startup.add_system_to_stage(startup_stage::STARTUP, setup)
        }) 
        // choose whichever one works for you!
    

    This is a non-breaking change. You can continue using the AppBuilder .add_startup_system() shorthand.

    Discussion Topics

    • General API thoughts: What do you like? What do you dislike?
    • Do States behave as expected? Are they missing anything?
    • Does FixedTimestep behave as expected?
    • I added support for "owned builders" and "borrowed builders" for most schedule/stage building:
      // borrowed (add_x and in some cases set_x)
      app
          .add_stage("my_stage", SystemStage::parallel())
          .stage("my_stage", |my_stage: &mut Schedule|
              my_stage
                  .add_stage("a", SystemStage::parallel())
                  .add_system_to_stage("a", some_system)
          )
      
      // owned (with_x)
      app
          .add_stage("my_stage", Schedule::default()
              .with_stage("a", SystemStage::parallel())
              .with_system_in_stage("a", some_system)
          )
      
      • Does this make sense? We could remove with_x in favor of borrowed add_x in most cases. This would reduce the api surface, but it would mean slightly more cumbersome initialization. We also definitely want with_x in some cases (such as stage.with_run_criteria())

    Next steps:

    • (Maybe) Support queuing up App changes (which will be applied right before the next update):
      commands.app_change(|app: &mut App| {app.schedule = Schedule::default();})
      
    • (Maybe) Event driven stages
      app.add_stage_after(stage::UPDATE, EventStage::<SomeEvent>::default().with_system(a))
      
      • These could easily build on top of the existing schedule features. It might be worth letting people experiment with their own implementations for a bit.
      • We could also somehow try to work in "system inputs" to this. Aka when an event comes in, pass it in to each system in the schedule as input.
    C-Enhancement A-ECS 
    opened by cart 64
  • System-order-independent ECS change tracking


    The current change tracking system is very lightweight, both for event generation and consumption (which is why we can enable it by default), but it has the following problem:

    schedule.run()
      System1: Change<T> query
      System2: mutates T
      world.clear_trackers()
    schedule.run()
      System1: Change<T> query ... misses System2's changes
    

    In most cases, this can be worked around via system registration order and stages, but I anticipate some cases that cannot work around this behavior, as well as general confusion from users like: "why isn't my system running ... im definitely mutating this component?"
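The hazard above can be simulated without Bevy at all. In this sketch (all names are hypothetical), a mutation flag is cleared at the end of each frame, so a detector ordered before the mutator never observes the change:

```rust
// Bevy-independent simulation of the ordering hazard (names hypothetical).
// A "mutated" flag is set on write and cleared at the end of each frame;
// a detector that runs before the mutator never observes the change.
#[derive(Default)]
struct Tracked {
    value: i32,
    mutated: bool,
}

fn detecting_system(t: &Tracked) -> bool {
    t.mutated // System1: Changed<T>-style query
}

fn mutating_system(t: &mut Tracked) {
    t.value += 1;
    t.mutated = true; // System2: mutates T
}

fn run_frame(t: &mut Tracked) -> bool {
    let seen = detecting_system(t); // runs first
    mutating_system(t);             // runs second
    t.mutated = false;              // world.clear_trackers()
    seen
}

fn main() {
    let mut t = Tracked::default();
    // The change made in frame 1 is cleared before frame 2's detector runs,
    // so it is never observed, even though the value really did change:
    assert!(!run_frame(&mut t));
    assert!(!run_frame(&mut t));
    assert_eq!(t.value, 2);
}
```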

    I think it would be useful to (optionally) allow stateful "order independent" change tracking. I posted some ideas here: #54

    A-ECS 
    opened by cart 57
  • [Merged by Bors] - Add a method `iter_combinations` on query to iterate over combinations of query results


    Related to discussion on discord

    With const generics, it is now possible to write generic iterator over multiple entities at once.

    This enables patterns of query iterations like

    for [e1, e2, e3] in query.iter_combinations() {
       // do something with relation of all three entities
    }
    

    The compiler is able to infer the correct iterator for given size of array, so either of those work

    for [e1, e2] in query.iter_combinations()  { ... }
    for [e1, e2, e3] in query.iter_combinations()  { ... }
    

    This feature can be very useful for systems like collision detection.

    When you ask for combinations of size K of N entities:

    • if K == N, you get one result of all entities
    • if K < N, you get all possible subsets of N with size K, without repetition
    • if K > N, the result set is empty (no combination of size K exists)
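Independent of Bevy's query machinery, these K-of-N semantics can be illustrated with a plain recursive helper (a sketch for illustration, not the PR's implementation):

```rust
// Plain-Rust illustration of combination semantics (no Bevy involved):
// all size-K subsets of N items, in order, without repetition.
fn combinations<T: Clone>(items: &[T], k: usize) -> Vec<Vec<T>> {
    if k == 0 {
        return vec![Vec::new()]; // one empty combination
    }
    if k > items.len() {
        return Vec::new(); // K > N: empty result set
    }
    let mut out = Vec::new();
    for i in 0..=items.len() - k {
        // fix items[i], then append every (k - 1)-combination of the rest
        for mut tail in combinations(&items[i + 1..], k - 1) {
            let mut combo = vec![items[i].clone()];
            combo.append(&mut tail);
            out.push(combo);
        }
    }
    out
}

fn main() {
    let entities = [1, 2, 3, 4];
    assert_eq!(combinations(&entities, 2).len(), 6); // C(4, 2) = 6
    assert_eq!(combinations(&entities, 4).len(), 1); // K == N: one result
    assert!(combinations(&entities, 5).is_empty()); // K > N: empty
}
```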
    C-Enhancement A-ECS C-Usability S-Ready-For-Final-Review 
    opened by Frizi 51
  • Dynamic Systems and Components


    Related to: #32 Resolves: #142

    Note: this is related to #32 ( StableTypeId discussion ) because it changes the TypeId struct used to identify components, but it does not directly address the problem of dynamic loading of Rust plugins. See https://github.com/bevyengine/bevy/issues/32#issuecomment-703303754.

    This PR will attempt to establish the ability to create systems and components that have been determined at runtime instead of compile time. I'm going to try to implement this in one PR because I won't be sure about the sufficiency of the design until I actually get a working example of a dynamically registered system and component ( which I will include with this PR ).

    Note: If we want to merge pieces of this PR one at a time, I am perfectly fine with that. I am making sure that each step is cleanly separated into its own commit and can easily be ported to an individual PR.

    I'm going to try to attack this problem one step at a time:

    Steps

    Non-Rust Component Ids

    Status: Completed in commit: "Implement Custom Component Ids"

    Currently bevy_hecs uses std::any::TypeId to uniquely identify the component IDs, but this does not allow us to define components that have a non-Rust origin. The first commit in the PR has migrated all of the internals of the bevy ECS to use a new ComponentId instead that is defined as:

    /// Uniquely identifies a type of component. This is conceptually similar to
    /// Rust's [`TypeId`], but allows for external type IDs to be defined.
    #[derive(Eq, PartialEq, Hash, Debug, Clone, Copy)]
    pub enum ComponentId {
        /// A Rust-native [`TypeId`]
        RustTypeId(TypeId),
        /// An arbitrary ID that allows you to identify types defined outside of
        /// this Rust compilation
        ExternalId(u64),
    }
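As a usage sketch (the `Position` component and the ID value 42 are made up), the two ID spaces coexist as ordinary enum variants and never collide:

```rust
use std::any::TypeId;

// Reproduction of the proposed enum, to show how Rust-native and
// external IDs share one hashable, copyable key type.
#[derive(Eq, PartialEq, Hash, Debug, Clone, Copy)]
pub enum ComponentId {
    RustTypeId(TypeId),
    ExternalId(u64),
}

struct Position; // an ordinary Rust component

fn main() {
    let native = ComponentId::RustTypeId(TypeId::of::<Position>());
    let external = ComponentId::ExternalId(42); // e.g. assigned by a scripting layer
    // Distinct variants never compare equal, and each is a usable map key:
    assert_ne!(native, external);
    assert_eq!(external, ComponentId::ExternalId(42));
}
```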
    

    Establish Dynamic Queries

    Status: Completed in commit: "Implement Dynamic Systems"

    This adds a new state type parameter to the Fetch trait. This allows compile-time constructed queries to be built with State = () and runtime constructed queries with the state parameter set to a DynamicComponentQuery.

    Establish Dynamic Component Insertion

    Status: Completed in commit: "Add Dynamic Component Support"

    This adds a new implementation of the Bundle trait, RuntimeBundle which can be used to insert untyped components at runtime by providing a buffer of bytes and the required type information.
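None of the following names come from the PR; this is only a sketch of the underlying idea, that a component can travel as raw bytes plus enough type information to reinterpret it later:

```rust
use std::convert::TryInto;

// Hypothetical stand-in for untyped component insertion: a byte buffer
// paired with the type information needed to reinterpret it.
// (RuntimeBundle itself is not shown here; these names are invented.)
struct UntypedComponent {
    type_name: &'static str,
    size: usize,
    bytes: Vec<u8>,
}

fn main() {
    // "Insert" a u32 component as raw bytes...
    let value: u32 = 7;
    let component = UntypedComponent {
        type_name: "u32",
        size: std::mem::size_of::<u32>(),
        bytes: value.to_le_bytes().to_vec(),
    };

    // ...and recover it on the other side using the stored type information.
    assert_eq!(component.type_name, "u32");
    assert_eq!(component.size, component.bytes.len());
    let restored = u32::from_le_bytes(component.bytes[..4].try_into().unwrap());
    assert_eq!(restored, 7);
}
```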

    Create Examples

    Status: Completed in respective commits

    We also have new examples for dynamic_systems and dynamic_components.

    Remaining Work

    The biggest thing necessary at this point is a solid review. There's a lot of code changed, but thankfully not a ton of new logic. The bulk of the changes are repetitive changes required to add the necessary type parameters and such for the new Fetch `State` type parameter.

    Otherwise, there are a couple of todo!()'s in bevy_scene because I don't know enough about the Bevy scene architecture to integrate scenes with external components yet. I think that could reasonably be left to a separate PR, but I'm fine either way.

    C-Enhancement A-ECS 
    opened by zicklag 48
  • [Merged by Bors] - Reliable change detection


    Problem Definition

    The current change tracking (via flags for both components and resources) fails to detect changes made by systems that are scheduled to run earlier in the frame than they are.

    This issue is discussed at length in #68 and #54.

    This is very much a draft PR, and contributions are welcome and needed.

    Criteria

    1. Each change is detected at least once, no matter the ordering.
    2. Each change is detected at most once, no matter the ordering.
    3. Changes should be detected the same frame that they are made.
    4. Competitive ergonomics. Ideally does not require opting-in.
    5. Low CPU overhead of computation.
    6. Memory efficient. This must not increase over time, except where the number of entities / resources does.
    7. Changes should not be lost for systems that don't run.
    8. A frame needs to act as a pure function. Given the same set of entities / components it needs to produce the same end state without side-effects.

    Exact change-tracking proposals satisfy criteria 1 and 2. Conservative change-tracking proposals satisfy criteria 1 but not 2. Flaky change tracking proposals satisfy criteria 2 but not 1.

    Code Base Navigation

    There are three types of flags:

    • Added: A piece of data was added to an entity / Resources.
    • Mutated: A piece of data was able to be modified, because its DerefMut was accessed
    • Changed: The bitwise OR of Added and Mutated
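The relationship between the flags can be sketched with plain bit constants (the values are illustrative, not Bevy's actual layout):

```rust
// Illustrative flag bits: a component counts as "changed" if it was
// either added or mutated this frame.
const ADDED: u8 = 0b01;
const MUTATED: u8 = 0b10;

fn changed(flags: u8) -> bool {
    // bitwise OR of the Added and Mutated bits
    flags & (ADDED | MUTATED) != 0
}

fn main() {
    assert!(changed(ADDED));           // freshly added
    assert!(changed(MUTATED));         // mutated in place
    assert!(changed(ADDED | MUTATED)); // both
    assert!(!changed(0));              // untouched
}
```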

    The special behavior of ChangedRes, with respect to the scheduler is being removed in #1313 and does not need to be reproduced.

    ChangedRes and friends can be found in "bevy_ecs/core/resources/resource_query.rs".

    The Flags trait for Components can be found in "bevy_ecs/core/query.rs".

    ComponentFlags are stored in "bevy_ecs/core/archetypes.rs", defined on line 446.

    Proposals

    Proposal 5 was selected for implementation.

    Proposal 0: No Change Detection

    The baseline, where computations are performed on everything regardless of whether it changed.

    Type: Conservative

    Pros:

    • already implemented
    • will never miss events
    • no overhead

    Cons:

    • tons of repeated work
    • doesn't allow users to avoid repeating work (or monitoring for other changes)

    Proposal 1: Earlier-This-Tick Change Detection

    The current approach as of Bevy 0.4. Flags are set, and then flushed at the end of each frame.

    Type: Flaky

    Pros:

    • already implemented
    • simple to understand
    • low memory overhead (2 bits per component)
    • low time overhead (clear every flag once per frame)

    Cons:

    • misses systems based on ordering
    • systems that don't run every frame miss changes
    • duplicates detection when looping
    • can lead to unresolvable circular dependencies

    Proposal 2: Two-Tick Change Detection

    Flags persist for two frames, using a double-buffer system identical to that used in events.

    A change is observed if it is found in either the current frame's list of changes or the previous frame's.

    Type: Conservative

    Pros:

    • easy to understand
    • easy to implement
    • low memory overhead (4 bits per component)
    • low time overhead (bit mask and shift every flag once per frame)

    Cons:

    • can result in a great deal of duplicated work
    • systems that don't run every frame miss changes
    • duplicates detection when looping

    Proposal 3: Last-Tick Change Detection

    Flags persist for two frames, using a double-buffer system identical to that used in events.

    A change is observed if it is found in the previous frame's list of changes.

    Type: Exact

    Pros:

    • exact
    • easy to understand
    • easy to implement
    • low memory overhead (4 bits per component)
    • low time overhead (bit mask and shift every flag once per frame)

    Cons:

    • change detection is always delayed, possibly causing painful chained delays
    • systems that don't run every frame miss changes
    • duplicates detection when looping

    Proposal 4: Flag-Doubling Change Detection

    Combine Proposal 2 and Proposal 3. Differentiate between JustChanged (current behavior) and Changed (Proposal 3).

    Pack this data into the flags according to this implementation proposal.

    Type: Flaky + Exact

    Pros:

    • allows users to access both immediate (flaky) and delayed (exact) change detection
    • easy to implement
    • low memory overhead (4 bits per component)
    • low time overhead (bit mask and shift every flag once per frame)

    Cons:

    • users must specify the type of change detection required
    • still quite fragile to system ordering effects when using the flaky JustChanged form
    • cannot get immediate + exact results
    • systems that don't run every frame miss changes
    • duplicates detection when looping

    [SELECTED] Proposal 5: Generation-Counter Change Detection

    A global counter is increased after each system is run. Each component saves the time of last mutation, and each system saves the time of last execution. Mutation is detected when the component's counter is greater than the system's counter. Discussed here. How to handle addition detection is unsolved; the current proposal is to use the highest bit of the counter as in proposal 1.

    Type: Exact (for mutations), flaky (for additions)

    Pros:

    • low time overhead (set component counter on access, set system counter after execution)
    • robust to systems that don't run every frame
    • robust to systems that loop

    Cons:

    • moderately complex implementation
    • must be modified as systems are inserted dynamically
    • medium memory overhead (4 bytes per component + system)
    • unsolved addition detection
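The selected counter scheme can be sketched for a single component in plain Rust. All names here are illustrative, and overflow handling for the wrapping u32 counter is omitted:

```rust
// Proposal 5 sketch: a global tick is incremented after each system runs;
// components record the tick of their last mutation, systems the tick of
// their last run. A change is seen iff the component was stamped later.

struct Tracked<T> {
    value: T,
    last_changed: u32, // tick at which this component was last mutated
}

struct SystemState {
    last_run: u32, // tick at which this system last finished
}

impl<T> Tracked<T> {
    /// Mutable access stamps the component with the current global tick.
    fn get_mut(&mut self, current_tick: u32) -> &mut T {
        self.last_changed = current_tick;
        &mut self.value
    }

    /// Changed iff mutated after this system's previous run.
    fn is_changed(&self, system: &SystemState) -> bool {
        self.last_changed > system.last_run
    }
}

/// Returns (detected on frame 1, detected on frame 2).
fn run_demo() -> (bool, bool) {
    let mut tick = 1u32;
    let mut pos = Tracked { value: (0.0f32, 0.0f32), last_changed: 0 };
    let mut detector = SystemState { last_run: 0 };

    // Frame 1: a changing system mutates the component at tick 1.
    *pos.get_mut(tick) = (1.0, 2.0);
    tick += 1;
    let first = pos.is_changed(&detector); // detecting system runs...
    detector.last_run = tick;              // ...and records when it ran
    tick += 1;

    // Frame 2: no mutation; detection must not repeat (exact).
    let second = pos.is_changed(&detector);
    let _ = tick;
    (first, second)
}

fn main() {
    assert_eq!(run_demo(), (true, false));
    println!("ok");
}
```

Because the comparison is against each system's own last-run tick, a system that skips a frame or runs twice in a loop still observes every mutation exactly once, which is why this proposal is robust to pausing and looping.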

    Proposal 6: System-Data Change Detection

    For each system, track which systems' changes it has seen. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.

    Type: Exact

    Pros:

    • exact
    • conceptually simple

    Cons:

    • requires storing data on each system
    • implementation is complex
    • must be modified as systems are inserted dynamically

    Proposal 7: Total-Order Change Detection

    Discussed here. This proposal is somewhat complicated by the new scheduler, but I believe it should still be conceptually feasible. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.

    Type: Exact

    Pros:

    • exact
    • efficient data storage relative to other exact proposals

    Cons:

    • requires access to the scheduler
    • complex implementation and difficulty grokking
    • must be modified as systems are inserted dynamically

    Tests

    • We will need to verify properties 1, 2, 3, 7 and 8. Priority: 1 > 2 = 3 > 8 > 7
    • Ideally we can use identical user-facing syntax for all proposals, allowing us to re-use the same tests for each.
    • When writing tests, we need to carefully specify order using explicit dependencies.
    • These tests will need to be duplicated for both components and resources.
    • We need to be sure to handle cases where ambiguous system orders exist.

    changing_system is always the system that makes the changes, and detecting_system always detects the changes.

    The component / resource changed will be simple boolean wrapper structs.

    Basic Added / Mutated / Changed

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs before detecting_system
    • verify at the end of tick 2

    At Least Once

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs after detecting_system
    • verify at the end of tick 2

    At Most Once

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs once before detecting_system
    • increment a counter based on the number of changes detected
    • verify at the end of tick 2

    Fast Detection

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs before detecting_system
    • verify at the end of tick 1

    Ambiguous System Ordering Robustness

    2 x 3 x 2 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs [before/after] detecting_system in tick 1
    • changing_system runs [after/before] detecting_system in tick 2

    System Pausing

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs in tick 1, then is disabled by run criteria
    • detecting_system is disabled by run criteria until it is run once during tick 3
    • verify at the end of tick 3

    Addition Causes Mutation

    2 design:

    • Resources vs. Components
    • adding_system_1 adds a component / resource
    • adding_system_2 adds the same component / resource
    • verify the Mutated flag at the end of the tick
    • verify the Added flag at the end of the tick

    The first check tests for https://github.com/bevyengine/bevy/issues/333; the second check tests for https://github.com/bevyengine/bevy/issues/1443.

    Changes Made By Commands

    • adding_system runs in Update in tick 1, and sends a command to add a component
    • detecting_system runs in Update in tick 1 and 2, after adding_system
    • We can't detect the changes in tick 1, since they haven't been processed yet
    • If we were to track these changes as being emitted by adding_system, we can't detect the changes in tick 2 either, since detecting_system has already run once after adding_system :(

    Benchmarks

    See: general advice, Criterion crate

    There are several critical parameters to vary:

    1. entity count (1 to 10^9)
    2. fraction of entities that are changed (0% to 100%)
    3. cost to perform work on changed entities, i.e. workload (1 ns to 1s)

    1 and 2 should be varied between benchmark runs. 3 can be added on computationally.

    We want to measure:

    • memory cost
    • run time

    We should collect these measurements across several frames (100?) to reduce bootup effects and accurately measure the mean, variance and drift.

    Entity-component change detection is much more important to benchmark than resource change detection, due to the orders of magnitude higher number of pieces of data.

    No change detection at all should be included in benchmarks as a second control for cases where missing changes is unacceptable.

    Graphs

    1. y: performance, x: log_10(entity count), color: proposal, facet: performance metric. Set cost to perform work to 0.
    2. y: run time, x: cost to perform work, color: proposal, facet: fraction changed. Set number of entities to 10^6
    3. y: memory, x: frames, color: proposal

    Conclusions

    1. Is the theoretical categorization of the proposals correct according to our tests?
    2. How does the performance of the proposals compare without any load?
    3. How does the performance of the proposals compare with realistic loads?
    4. At what workload does more exact change tracking become worth the (presumably) higher overhead?
    5. When does adding change-detection to save on work become worthwhile?
    6. Is there enough divergence in performance between the best solutions in each class to ship more than one change-tracking solution?

    Implementation Plan

    1. Write a test suite.
    2. Verify that tests fail for existing approach.
    3. Write a benchmark suite.
    4. Get performance numbers for existing approach.
    5. Implement, test and benchmark various solutions using a Git branch per proposal.
    6. Create a draft PR with all solutions and present results to team.
    7. Select a solution and replace existing change detection.
    E-Help-Wanted A-ECS P-High A-Core 
    opened by alice-i-cecile 45
  • System sets and parallel executor v2

    It's a wall of text, I know - you don't have to read all of it, its purpose is to clarify the details should questions arise.

    History

    Prior discussion: this comment and on.

    This PR builds on top of the incomplete SystemSet implementation branch that didn't make it into 0.4; it was split from schedule-v2 branch before the merge.

    The branch introduced system sets: SystemStage is now made up of one or more SystemSet, each has its own run criterion and contains the systems whose execution said criterion specifies.

    It also moved ShouldRun to schedule's level in the file hierarchy, and implemented run criteria and related methods as a reusable struct.

    The rebase that was required to make this branch mergeable was messy beyond belief, so I opted to merge in the master instead. It's still rather messy, but at least the commits are coherent.

    Glossary

    • exclusive system - system that requires exclusive (mutable) access to the entirety of the world and/or resources. The new implementation automatically executes these sequentially, either at the start or end (not fully implemented) of the stage.
    • parallel/parallelizable system - any system that doesn't require exclusive access. These are executed in the middle of the stage, as many as possible at once.
    • thread-local system - parallelizable system that accesses one or several thread-local resources (the ones in !Send storage).
    • thread-agnostic system - parallelizable system that doesn't access any thread-local resources.

    Collateral changes

    2021-01-23: this list is not exhaustive and is outdated - it does not cover commits made since the original post.

    Necessary (and not so much) changes that aren't directly related to the implementation:

    • Renamed access::Access to ArchetypeAccess, made it private. No particular reason.
    • Removed ThreadLocalExecution, System::thread_local_execution() - this information is now encoded in ::archetype_component_access(), ::resource_access(), and new ::is_thread_local().
    • Extended TypeAccess with support for "reads/writes everything" to facilitate the above.
    • Added CondensedTypeAccess.
    • Moved behavior of thread_local_func (command buffer merging) and similar fields from parallelizable System implementors into their run_exclusive() implementation.
    • Renamed ThreadLocalSystemFn and the module into_thread_local to ExclusiveSystemFn and into_exclusive respectively, to differentiate systems with this access pattern from actually thread-local systems.
    • Implemented IntoSystem for FnMut(&mut World) and FnMut(&mut Resources), both result in ExclusiveSystemFn. No particular reason.
    • Implemented ThreadLocal system parameter.
    • Implemented system insertion methods with labels and dependencies (with string labels for now) for SystemSet and SystemStage.
    • Added ShouldRun::NoAndLoop.
    • Renamed System::update() to System::update_access(), for clarity.
    • Changed system sets to store systems in NonNull rather than Box. Requires auditing, and a sanity check.

    Algorithm

    2021-01-23: this is slightly outdated now, due to changes to exclusive systems and availability of local-to-thread tasks.

    Reading is not required. It's here because people were curious and picking apart someone else's code is unpleasant, annotated or not.

    The idea is similar to those I used in yaks and this prototype. I also wrote an entire long thing about the topic.

    Abridged:

    1. Evaluate run criteria of system sets.
    2. If any of the sets have changed, rebuild the cached scheduling data.
    3. Run exclusive systems that want to be at the start of stage.
    4. If the world's archetypes were changed since the last time parallel systems were run, update their access and the relevant part of the scheduling data.
    5. Enter outer compute pool scope.
    6. Prepare parallel systems - spawn tasks, reset counters, queue systems with no dependencies, etc.
    7. Try running a queued thread-local system. Queue its dependants if this was the last system they were waiting on.
    8. Enter inner compute pool scope.
    9. Try starting some queued thread-agnostic systems.
    10. See if any thread-agnostic systems have finished, queue their dependants if those were the last systems they were waiting on.
    11. Exit inner scope.
    12. If there are any queued or running systems, continue from step 7.
    13. Exit outer scope.
    14. Merge in command buffers.
    15. Run exclusive systems that want to be at the end of stage.
    16. Re-evaluate run criteria, continue from step 3 if needed.
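The queueing in steps 6-12 boils down to per-system dependency counters. A minimal single-threaded sketch (illustrative only - it ignores tasks, run criteria, and the thread-local/agnostic split):

```rust
// Dependency-counter scheduling: systems with no unmet dependencies are
// queued; finishing a system decrements its dependants' counters and queues
// those whose counter reaches zero.

use std::collections::VecDeque;

struct Node {
    dependants: Vec<usize>, // systems waiting on this one
    remaining_deps: usize,  // unmet dependency count
}

fn run_order(mut nodes: Vec<Node>) -> Vec<usize> {
    // Queue everything that starts with no dependencies.
    let mut queue: VecDeque<usize> = (0..nodes.len())
        .filter(|&i| nodes[i].remaining_deps == 0)
        .collect();
    let mut order = Vec::new();
    while let Some(i) = queue.pop_front() {
        order.push(i); // "run" system i
        let dependants = nodes[i].dependants.clone();
        for d in dependants {
            nodes[d].remaining_deps -= 1;
            if nodes[d].remaining_deps == 0 {
                queue.push_back(d); // last dependency satisfied
            }
        }
    }
    order
}

fn main() {
    // Systems 0 and 1 are independent; system 2 depends on both.
    let nodes = vec![
        Node { dependants: vec![2], remaining_deps: 0 },
        Node { dependants: vec![2], remaining_deps: 0 },
        Node { dependants: vec![], remaining_deps: 2 },
    ];
    assert_eq!(run_order(nodes), vec![0, 1, 2]);
    println!("ok");
}
```

In the real executor the "run" step additionally gates each candidate on the parallelization check before it may start alongside the currently running systems.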

    Full(ish):

    1. As needed, stage calls <ParallelSystemStageExecutor as SystemStageExecutor>::execute_stage() with &mut [SystemSet] that contains the systems with their uncondensed access sets, &HashMap<SystemIndex, Vec<SystemIndex>> that encodes the dependency tree, &mut World, &mut Resources.
    2. Run criteria of the sets are evaluated. If no sets should run, the algorithm returns early; if no sets should run yet any number of them request their run criteria to be re-evaluated, it panics (to avoid looping infinitely).
    3. If any of the system sets have had new systems inserted, pre-existing scheduling data is discarded, and fresh data is constructed:
      1. Any cached data is cleared.
      2. Systems of sets are iterated, collecting all distinct (not "access everything") accessed types of parallel systems into a pair of hash sets (one for archetype-components, another for resources). If the world's archetypes were also changed, systems' access is updated before collecting its types.
      3. Hash sets of types are converted into vectors. Bitsets are resized to be able to fit as many bits as there are parallel systems.
      4. Systems of sets are iterated again, sorting indices of systems with exclusive access into list of exclusive systems, generating partial parallel scheduling data by "condensing" access sets to bitsets (the bitset is a mask that produces the systems' access set when applied to the vector of all types accessed by the stage). At the same time, a map of SystemIndex to the respective system's usize index in the parallel systems vector is populated.
      5. Dependencies map is iterated, inserting dependants' indices into their dependencies' parallel scheduling data, using the map from the previous step.
    4. All exclusive systems that want to run at the start of stage (currently all exclusive systems) are executed, if their parent system sets' criteria call for it.
    5. If the archetypes were changed since the last time parallel systems were run, said systems' accesses are updated, and their archetype-component bitsets are re-condensed (types are collected and sets are converted into bitset masks).
    6. Compute pool scope is created and entered.
    7. Each parallel system is prepared for execution:
      1. Safety bit is reset.
      2. Whether the system should run this iteration (the result of its parent system set's run criterion evaluation) is cached into a bitset.
      3. If the system should run, its task is spawned (if it's not a thread-local system) into the scope, and it's queued to run if it has no dependencies, or has its dependency counter reset otherwise.
    8. Running a thread-local system on the main thread is attempted:
      1. Queued systems are filtered by thread-local systems (bitset intersection).
      2. The first system out of those that passes the parallelization check (see below) is executed on the main thread.
      3. Its index is removed from the queued systems, and any of its dependants that should run this iteration have their dependency counters decremented by one.
      4. Dependants that have their counter reach zero are queued.
    9. A new inner compute pool scope is created and entered, and thread-agnostic system execution task is spawned into it:
      1. Queued systems are filtered by not thread-local systems (bitset difference).
      2. Any systems that pass the parallelization check (see below) are signalled to start via a channel and are marked as running.
      3. "Active access" sets representing the union of all running systems' access sets are extended with the newly running systems' sets.
      4. Newly running systems are removed from the queue.
      5. If there are any systems running, wait for at least one of them to finish, unmark it as running.
      6. Any of now finished systems' dependants that should run this iteration have their dependency counters decremented by one.
      7. Dependants that have their counter reach zero are queued.
      8. Active access sets are rebuilt from the access sets of all still-running systems.
    10. When the inner scope is done, if there are any running or queued systems, continue from step 8. Otherwise, exit outer scope.
    11. Command buffers of parallel systems (that should have run this iteration) are merged in by running their exclusive part.
    12. All exclusive systems that want to run at the end of stage (currently none of exclusive systems) are executed, if their parent system sets' criteria call for it.
    13. System sets' run criteria that requested to be checked again are re-evaluated. If no sets should run, the algorithm returns; if no sets should run yet any number of them request their run criteria to be re-evaluated, it panics (to avoid looping infinitely). Otherwise, continue from step 4.

    Parallelization check

    System's condensed resource access is checked against active resource access, ditto for archetype-component access. "Checked against" here means "if one reads all, other can't write anything, otherwise check if bitsets of writes are disjoint with bitsets of reads and writes". That's it.
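A sketch of that check, with u64 bitmasks standing in for the condensed bitsets (illustrative types, not the PR's actual ones; the same check is applied once for resources and once for archetype-components):

```rust
// Parallelization check sketch: can a candidate system start while the
// union of all currently running systems' accesses ("active") holds?

#[derive(Clone, Copy)]
struct CondensedAccess {
    reads_all: bool, // "reads everything" (e.g. &World)
    reads: u64,      // bitmask over the stage's accessed types
    writes: u64,
}

fn is_compatible(candidate: CondensedAccess, active: CondensedAccess) -> bool {
    // If one side reads all, the other can't write anything.
    if candidate.reads_all && active.writes != 0 {
        return false;
    }
    if active.reads_all && candidate.writes != 0 {
        return false;
    }
    // Otherwise, each side's writes must be disjoint from the other's
    // reads and writes.
    candidate.writes & (active.reads | active.writes) == 0
        && active.writes & candidate.reads == 0
}

fn main() {
    let read_a = CondensedAccess { reads_all: false, reads: 0b01, writes: 0 };
    let write_a = CondensedAccess { reads_all: false, reads: 0, writes: 0b01 };
    let write_b = CondensedAccess { reads_all: false, reads: 0, writes: 0b10 };
    assert!(is_compatible(read_a, read_a));   // two readers coexist
    assert!(!is_compatible(write_a, read_a)); // writer conflicts with reader
    assert!(is_compatible(write_a, write_b)); // disjoint writers coexist
    println!("ok");
}
```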

    Notes

    • SystemIndex is a pair of usize indices: system's parent set and its position in it.
    • Labels used by the stage are not used by the executor, it instead relies solely on the SystemIndex mapping.
    • Hot-path machinery uses usize indices (corresponding to systems' position in the vector of parallel scheduling data) and bitsets.
    • Parallel systems can request immutable access to all of world and/or resources and are seamlessly scheduled along others.
    • Thread-agnostic systems are not required to finish in the same inner scope that starts them.
    • Systems belonging to sets whose criteria disable them are simply never queued, and their tasks (for thread-agnostic systems) are not spawned.

    Things to do

    Numbered for discussion convenience:

    1. Audit unsafe code: ThreadLocal system parameter implementation, NonNull use in SystemSet.
    2. Targeted tests of new behaviors and invariants. Smoke tests for future development convenience.
    3. ~~Dependency tree verification. I think the best place to do it is in SystemStage::run_once(), where all the necessary information is first seen in one place.~~ mostly done, still refactoring
    4. ~~Exclusive systems dependencies and sorting. Should be done at the same time as point 3. There are stubs in the implementation that allow executing exclusive systems at either start of stage or end of stage, but no way to specify when such a system would like to be run. I think this could be handled by dependencies: exclusive systems that depend on parallel systems are automatically shoved to the end of stage, and vice versa; conflicts result in a panic during tree verification.~~ partially irrelevant due to the full exclusive/parallel schism; sorting is now topological
    5. ~~Serial stage executor. As of now, it's barely functional(?) and does no dependency-based sorting whatsoever; this should probably be handled after point 4.~~ mostly done, still refactoring
    6. ~~Decide if we want the safety bit. I've never seen that check fail, but it's about as lightweight as it can be, so I say we keep it.~~ removed, it became completely trivial with introduction of local-to-thread tasks
    7. Decide how to handle the run criteria infinite loop (when all criteria evaluate to a combination of ShouldRun::No and ShouldRun::NoAndLoop). Current strategy is to panic, could use a more meaningful panic message.
    8. ~~Decide if we want to sort parallel systems' command buffer merging in any way. Currently, it's not defined, but should be ordered by set then insertion order into set.~~ it's topologically sorted now, exploiting a side-effect of validating the dependency graph
    9. ~~Decide if we want to merge in parallel systems' command buffers before or after end-of-stage exclusives are run. Currently, it's before.~~ we have both options now, thanks to point 12
    10. ~~Consider "optional dependencies". Right now, if system A depends on system B and B gets disabled by a run criterion of its parent system set, A will not run and there is no way to make it run without running B. This should not be the case: execution order dependencies should only specify execution order and never execution fact.~~ all dependencies are now "soft", disabling a system does not disable its dependants; considering supporting "hard" dependencies
    11. ~~Consider simplifying the TypeAccess implementation by merging AccessSet into it. See CondensedTypeAccess for comparison.~~ done in c4e826166810a66102904024ea45d41e1b8e1073
    12. ~~Consider better API for system insertion into stage/set. Related: #1060.~~ adopted the "system descriptor" builder pattern
    13. Consider system labels that aren't &'static str.
    14. ~~Plumb the labels/dependencies API throughout the hierarchy; best done after point 12.~~
    15. Actually make use of system sets in StateStage, etc.
    16. Tighten API in general: currently some of the things that have no business being public are public.
    17. Minor refactors and internal tweaks, mostly for my own peace of mind regarding maintainability.
    18. Documentation, examples. I haven't done any public documentation, but I think there's plenty of internal documentation of the executor itself.
    19. Propagate the newly available/required patterns to the rest of the engine and examples, benchmark. This should be done after the API-related concerns are addressed, of course. For what it's worth, the examples I've tried running so far all have Just Worked.

    This should cover the vast majority of things, but I'm bound to have missed something. I will slowly start on some of the items in the to-do list; however, most of them will benefit from discussion at this point, and some can be tackled by someone else and/or in a different PR.

    Happy holidays!

    C-Enhancement A-ECS 
    opened by Ratysz 45
  • RFC: Create an RFC Process

    Large PRs that are either complex or out of scope of current "focus area" efforts can often sit around, gated by feedback from cart. Creating an RFC process would give the community a useful tool to discuss complex proposals, especially if the proposed PR has implications for features that will get built on top.

    A standard format would make it more likely that all the information needed to make a decision is accounted for and in a predictable format for cart and the broader community to digest. The intent here is not to create burden for contributors, but give the community a way of making headway on big decisions without PRs rotting in the queue.

    I'd like to collaborate on a template, and discuss this meta-RFC here! We can make this RFC into the first RFC as a way of testing out the process. I've copied the Rust RFC template, per @alice-i-cecile's recommendation, as a starting point.

    Edit 2021-04-01:

    Current Status

    Summary

    We have consensus on:

    • Supporting an implementation-first process that has minimal overhead for contributors
    • Supporting a design-first process if community members want to collect information before implementing anything
    • Hosting the RFCs in a separate repo

    Next Steps

    1. Create a new RFC repo in the bevy org
    2. Add the RFC template to the RFC repo
    3. Add the welcome message to the RFC repo README
    4. Remove the template and RFC folder from this PR
    5. Add info about RFCs to the bevy CONTRIBUTING.md as part of this PR
    A-Meta 
    opened by aevyrie 44
  • [Merged by Bors] - Rebase of existing PBR work

    This is a rebase of StarArawn's PBR work from #261 with IngmarBitter's work from #1160 cherry-picked on top.

    I had to make a few minor changes to make some intermediate commits compile and the end result is not yet 100% what I expected, so there's a bit more work to do.

    E-Help-Wanted A-Rendering 
    opened by mtsr 39
  • 0.4 -> 0.5 Migration Guide

    0.5 will have breaking changes, and some of them will not be intuitive. We should have an official migration guide to help our users upgrade.

    Let's use this issue as a way to track the important breaking changes to include. I'll start with:

    1. commands: &mut Commands has changed back to mut commands: Commands. This will cause previously valid foo.system() calls to fail to compile.
    2. The max number of SystemParams has gone from 15 to 12. This is because they now rely on Rust's Default impl for tuples, which only extends to tuples of length 12. Users can use nested tuples and/or SystemParam derives to work around this. Both options help organize parameters, which is a good thing to do anyway for systems with a large number of params.
    3. commands.insert_one(component) is now commands.insert(component). commands.insert(bundle) is now commands.insert_bundle(bundle). This means that 0.4 code that does commands.insert(bundle) will now attempt to insert bundle as a component instead of a bundle, which will cause breakage. This will cause confusion, but this change was made to help make bundles less easy to confuse with components, so I think that it is ultimately the right call.
    C-Docs E-Good-First-Issue P-High 
    opened by cart 37
  • Entity Events. Round 2.

    In https://github.com/bevyengine/bevy/issues/2070, the following mechanism was proposed:

    • Each Entity has an opt-in/opt-out EventComponent
    • The EventComponent is added when events are pushed to the entity, and removed once the events have been consumed by all systems/readers.

    While working on a small-buffer-optimized version of https://github.com/tower120/rc_event_queue / https://github.com/bevyengine/rfcs/pull/32 for an experimental implementation of this, I found the following:

    • If SystemA has read all events and SystemB has not, we still cannot remove the EventComponent safely, because it contains unread events. Under these circumstances, on each run, SystemA will still access the EventComponent only to find that it has nothing new to read.
    • To add/remove the EventComponent, we must either accept excessive memory moves with sequential ECS storage, or use spatial storage and lose the benefits of linear memory access. Since this is an event system, I assume spatial storage is the only option.
    • In the end, spatial storage access is not that different from component access by EntityId.

    So I came to the idea that if we fetch components from sequential storage by EntityId, in the order they are laid out in storage, we should still benefit from linear memory access.

    The idea is to attach an EventQueue<(EntityId, EventMessage)> to each archetype storage/table, with the queue "sorted" by entity index in the corresponding archetype storage. Readers will then traverse the storage as linearly as possible. Moreover, systems that emit event messages will almost certainly do so in the order they iterate the storage, so emitted messages should already be mostly sorted(!).

    To reduce the sorting overhead, we could temporarily store a Vec<(EntityId, EventMessage)> per event-writing system, sort each of them (they will probably already be sorted), and then, at some point (end of the stage?), push the messages from the vecs into the EventQueue in sorted order (repeatedly popping from the vec whose first entity index is smallest).
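The merge step described above is essentially a k-way merge keyed on entity index. A rough, dependency-free sketch (hypothetical names; a real implementation would use the ECS's own queue types):

```rust
// K-way merge sketch: per-system buffers of (entity_index, message), each
// already (mostly) sorted, are merged into one queue ordered by entity index.

type EntityIndex = u32;

fn merge_event_buffers<M: Clone>(
    mut buffers: Vec<Vec<(EntityIndex, M)>>,
) -> Vec<(EntityIndex, M)> {
    // Each writer's buffer is sorted first. Usually it is already in order,
    // since systems emit events while iterating storage linearly.
    for buf in &mut buffers {
        buf.sort_by_key(|(e, _)| *e);
    }
    // Repeatedly take from the buffer whose front has the smallest index.
    let mut cursors = vec![0usize; buffers.len()];
    let mut merged = Vec::new();
    loop {
        let next = (0..buffers.len())
            .filter(|&i| cursors[i] < buffers[i].len())
            .min_by_key(|&i| buffers[i][cursors[i]].0);
        match next {
            Some(i) => {
                merged.push(buffers[i][cursors[i]].clone());
                cursors[i] += 1;
            }
            None => break,
        }
    }
    merged
}

fn main() {
    let merged = merge_event_buffers(vec![
        vec![(0, "a"), (2, "b")],
        vec![(1, "c"), (3, "d")],
    ]);
    let order: Vec<u32> = merged.iter().map(|(e, _)| *e).collect();
    assert_eq!(order, vec![0, 1, 2, 3]);
    println!("ok");
}
```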

    C-Enhancement S-Needs-Triage 
    opened by tower120 1
  • Tracy profiling in pipelined-rendering

    This has already been reviewed and merged into main but we need it on pipelined-rendering too for awesome profiling tools!

    Tracy is:

    A real time, nanosecond resolution, remote telemetry, hybrid frame and sampling profiler for games and other applications.

    With the trace_tracy feature enabled, you run your bevy app and either a headless server (capture) or a live, interactive profiler UI (Tracy), and connect that to your bevy application to then stream the metric data and events, and save it or inspect it live/offline.

    Previously when I implemented the spans across systems and stages and I was trying out different profiling tools, Tracy was too unstable on macOS to use. But now, quite some months later, it is working stably with Tracy 0.7.8. You can see timelines, aggregate statistics of mean system/stage execution times, and much more. It's very useful!

    • Use the tracing-tracy crate which supports our tracing spans
    • Expose via the non-default feature trace_tracy for consistency with other trace* features

    opened by superdump 0
  • Modular Rendering

    This changes how render logic is composed to make it much more modular. Previously, all extraction logic was centralized for a given "type" of rendered thing. For example, we extracted meshes into a vector of ExtractedMesh, which contained the mesh and material asset handles, the transform, etc. We looked up bindings for "drawn things" using their index in the Vec<ExtractedMesh>. This worked fine for built in rendering, but made it hard to reuse logic for "custom" rendering. It also prevented us from reusing things like "extracted transforms" across contexts.

    To make rendering more modular, I made a number of changes:

    • Entities now drive rendering:
      • We extract "render components" from "app components" and store them on entities. No more centralized uber lists! We now have true "ECS-driven rendering"
      • To make this perform well, I implemented #2673 in upstream Bevy for fast batch insertions into specific entities. This was merged into the pipelined-rendering branch here: #2815
    • Reworked the Draw abstraction:
      • Generic PhaseItems: each draw phase can define its own type of "rendered thing", which can define its own "sort key"
      • Ported the 2d, 3d, and shadow phases to the new PhaseItem impl (currently Transparent2d, Transparent3d, and Shadow PhaseItems)
      • Draw trait and DrawFunctions are now generic on PhaseItem
      • Modular / Ergonomic DrawFunctions via RenderCommands
        • RenderCommand is a trait that runs an ECS query and produces one or more RenderPass calls. Types implementing this trait can be composed to create a final DrawFunction. For example the DrawPbr DrawFunction is created from the following DrawCommand tuple. Const generics are used to set specific bind group locations:
           pub type DrawPbr = (
              SetPbrPipeline,
              SetMeshViewBindGroup<0>,
              SetStandardMaterialBindGroup<1>,
              SetTransformBindGroup<2>,
              DrawMesh,
          );
          
        • The new custom_shader_pipelined example illustrates how the commands above can be reused to create a custom draw function:
          type DrawCustom = (
              SetCustomMaterialPipeline,
              SetMeshViewBindGroup<0>,
              SetTransformBindGroup<2>,
              DrawMesh,
          );
          
    • ExtractComponentPlugin and UniformComponentPlugin:
      • Simple, standardized ways to easily extract individual components and write them to GPU buffers
    • Ported PBR and Sprite rendering to the new primitives above.
    • Removed staging buffer from UniformVec in favor of direct Queue usage
      • Makes UniformVec much easier to use and more ergonomic. Completely removes the need for custom render graph nodes in these contexts (see the PbrNode and view Node removals and the much simpler call patterns in the relevant Prepare systems).
    • Added a many_cubes_pipelined example to benchmark baseline 3d rendering performance and ensure there were no major regressions during this port. Avoiding regressions was challenging given that the old approach of extracting into centralized vectors is basically the "optimal" approach. However, thanks to various ECS optimizations and render logic rephrasing, we pretty much break even on this benchmark!
    • Lifetimeless SystemParams: this will be a bit divisive, but as we continue to embrace "trait driven systems" (ex: ExtractComponentPlugin, UniformComponentPlugin, DrawCommand), the ergonomics of (Query<'static, 'static, (&'static A, &'static B)>, Res<'static, C>) were getting very hard to bear. As a compromise, I added "static type aliases" for the relevant SystemParams. The previous example can now be expressed like this: (SQuery<(Read<A>, Read<B>)>, SRes<C>). If anyone has better ideas / conflicting opinions, please let me know!
    • RunSystem trait: a way to define Systems via a trait with a SystemParam associated type. This is used to implement the various plugins mentioned above. I also added SystemParamItem and QueryItem type aliases to make "trait stye" ecs interactions nicer on the eyes (and fingers).
    • RenderAsset retrying: ensures that render assets are only created when they are "ready" and allows us to create bind groups directly inside render assets (which significantly simplified the StandardMaterial code). I think ultimately we should swap this out on "asset dependency" events to wait for dependencies to load, but this will require significant asset system changes.
    • Updated some built in shaders to account for missing MeshUniform fields
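    The tuple-based draw functions like DrawPbr and DrawCustom above can be sketched as a trait implemented for tuples of commands. This is a minimal stand-in, not Bevy's actual trait or signatures; the names here are illustrative:

    ```rust
    // Each step of a draw function is a command; tuples of commands are
    // themselves commands, executed left to right. Longer chains either nest
    // ((A, B), C) or get their own per-arity impls (Bevy generates these with
    // a macro).
    trait RenderCommand {
        fn render(&self, out: &mut Vec<&'static str>);
    }

    struct SetPipeline;
    struct DrawMesh;

    impl RenderCommand for SetPipeline {
        fn render(&self, out: &mut Vec<&'static str>) {
            out.push("set_pipeline");
        }
    }
    impl RenderCommand for DrawMesh {
        fn render(&self, out: &mut Vec<&'static str>) {
            out.push("draw_mesh");
        }
    }

    impl<A: RenderCommand, B: RenderCommand> RenderCommand for (A, B) {
        fn render(&self, out: &mut Vec<&'static str>) {
            self.0.render(out);
            self.1.render(out);
        }
    }

    fn main() {
        // Composing a "draw function" is just naming a tuple type:
        let draw: (SetPipeline, DrawMesh) = (SetPipeline, DrawMesh);
        let mut calls = Vec::new();
        draw.render(&mut calls);
        assert_eq!(calls, ["set_pipeline", "draw_mesh"]);
    }
    ```

    Reusing a step in a different draw function (as DrawCustom reuses SetMeshViewBindGroup and DrawMesh) then amounts to listing the same command type in a new tuple.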
    A-Rendering 
    opened by cart 0
  • fix world.is_resource_added() panicking

    fix world.is_resource_added() panicking

    Objective

    Fixes #2828

    Solution

    Replaced the unwrap with a match (and added a test to make sure everything works without crashing)
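    The fix follows a common defensive pattern. A simplified stand-in (not Bevy's actual internals; the field names and tick logic here are illustrative) showing how a match over the lookup avoids the panic:

    ```rust
    use std::collections::HashMap;

    // Toy world: maps a resource name to the change tick at which it was added.
    struct World {
        added_ticks: HashMap<&'static str, u32>,
        last_change_tick: u32,
    }

    impl World {
        fn is_resource_added(&self, name: &'static str) -> bool {
            // Before: self.added_ticks.get(name).unwrap() -- panics when the
            // resource was never inserted. After: a missing entry simply means
            // "not added".
            match self.added_ticks.get(name) {
                Some(&tick) => tick > self.last_change_tick,
                None => false,
            }
        }
    }

    fn main() {
        let world = World {
            added_ticks: HashMap::new(),
            last_change_tick: 0,
        };
        // Returns false instead of panicking for an absent resource:
        assert!(!world.is_resource_added("MyResource"));
    }
    ```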

    S-Needs-Triage 
    opened by RustyStriker 3
  • app.world().is_resource_added::<T>() panics instead of returning false

    app.world().is_resource_added::<T>() panics instead of returning false

    Bevy version

    0.5

    Operating system & version

    Arch (though not relevant)

    What you did

    Calling app.world().is_resource_added::<T>() on a resource that wasn't added.

    What you expected to happen

    The function to return false

    What actually happened

    It panicked

    Additional information

    Crashed at this line

    bevy_ecs/src/world/mod.rs:574:79
    

    because of the unwrap() at the end.

    C-Bug E-Good-First-Issue A-ECS 
    opened by RustyStriker 0
  • Unique WorldId

    Unique WorldId

    Objective

    Fixes these issues:

    • WorldIds currently aren't necessarily unique
      • I want to guarantee that they're unique to safeguard my librarified version of https://github.com/bevyengine/bevy/discussions/2805
      • There probably hasn't been a collision yet, but they could technically collide
    • SystemId isn't used for anything
      • It's no longer used now that Locals are stored within the System.
    • bevy_ecs depends on rand

    Solution

    • Instead of randomly generating WorldIds, just use an incrementing atomic counter, panicking on overflow.
    • Remove SystemId
      • We do need to allow Locals for exclusive systems at some point, but exclusive systems couldn't access their own SystemId anyway.
    • Now that these don't depend on rand, move it to a dev-dependency

    Todo

    Determine if WorldId should be u32-based instead
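    The incrementing-counter approach can be sketched as follows. This is an assumed design for illustration, not Bevy's exact code; in particular, the overflow check here is the simplest possible one:

    ```rust
    use std::sync::atomic::{AtomicUsize, Ordering};

    #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
    struct WorldId(usize);

    // Process-wide counter; no rand dependency needed.
    static NEXT_WORLD_ID: AtomicUsize = AtomicUsize::new(0);

    impl WorldId {
        fn new() -> Self {
            // fetch_add wraps silently on overflow, so detect the last valid
            // value and panic rather than ever handing out a duplicate id.
            let id = NEXT_WORLD_ID.fetch_add(1, Ordering::Relaxed);
            assert!(id != usize::MAX, "too many WorldIds created");
            WorldId(id)
        }
    }

    fn main() {
        let a = WorldId::new();
        let b = WorldId::new();
        // Unlike random generation, uniqueness within the process is guaranteed:
        assert_ne!(a, b);
    }
    ```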

    A-ECS C-Code-Quality S-Ready-For-Final-Review 
    opened by DJMcNab 8
  • error: failed to run custom build command for `libudev-sys v0.1.4` when running examples

    error: failed to run custom build command for `libudev-sys v0.1.4` when running examples

    Bevy version

    The release number or commit hash of the version you're using.

    Operating system & version

    Artix(runit)

    What you did

    Tried running an example (Hello World failed too).

    What you expected to happen

    The example game starts.

    What actually happened

    The example fails to compile.

    Additional information

    Error log:

       Compiling libudev-sys v0.1.4
       Compiling quote v1.0.9
       Compiling memoffset v0.6.4
       Compiling getrandom v0.2.3
    error: failed to run custom build command for `libudev-sys v0.1.4`
    
    Caused by:
      process didn't exit successfully: `/home/<user>/tmp/bevy/target/debug/build/libudev-sys-50d1508ede402556/build-script-build` (exit status: 101)
      --- stdout
      cargo:rerun-if-env-changed=LIBUDEV_NO_PKG_CONFIG
      cargo:rerun-if-env-changed=PKG_CONFIG
      cargo:rerun-if-env-changed=LIBUDEV_STATIC
      cargo:rerun-if-env-changed=LIBUDEV_DYNAMIC
      cargo:rerun-if-env-changed=PKG_CONFIG_ALL_STATIC
      cargo:rerun-if-env-changed=PKG_CONFIG_ALL_DYNAMIC
      cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64-unknown-linux-gnu
      cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64_unknown_linux_gnu
      cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH
      cargo:rerun-if-env-changed=PKG_CONFIG_PATH
      cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-unknown-linux-gnu
      cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_unknown_linux_gnu
      cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
      cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
      cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-unknown-linux-gnu
      cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_unknown_linux_gnu
      cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
      cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
    
      --- stderr
      thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "`\"pkg-config\" \"--libs\" \"--cflags\" \"libudev\"` did not exit successfully: exit status: 1\n--- stderr\nPackage libudev was not found in the pkg-config search path.\nPerhaps you should add the directory containing `libudev.pc'\nto the PKG_CONFIG_PATH environment variable\nNo package 'libudev' found\n"', /home/<user>/.cargo/registry/src/github.com-1ecc6299db9ec823/libudev-sys-0.1.4/build.rs:38:41
      stack backtrace:
         0: rust_begin_unwind
                   at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/panicking.rs:517:5
         1: core::panicking::panic_fmt
                   at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/core/src/panicking.rs:96:14
         2: core::result::unwrap_failed
                   at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/core/src/result.rs:1617:5
         3: core::result::Result<T,E>::unwrap
                   at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/core/src/result.rs:1299:23
         4: build_script_build::main
                   at ./build.rs:38:5
         5: core::ops::function::FnOnce::call_once
                   at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/core/src/ops/function.rs:227:5
      note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
    warning: build failed, waiting for other jobs to finish...
    error: build failed
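    The root cause shown in the stderr is that pkg-config cannot find the libudev development files on the system. A possible diagnosis and fix, as a setup sketch; package names vary by distribution and are assumptions here:

    ```shell
    # Reproduce the build script's query; on a healthy system this prints
    # linker and compiler flags instead of "No package 'libudev' found":
    pkg-config --libs --cflags libudev

    # If the query fails, install the libudev development files, e.g.:
    #   Arch/Artix (systemd):  pacman -S systemd            # provides libudev.pc
    #   Artix (runit/eudev):   pacman -S eudev              # libudev via eudev
    #   Debian/Ubuntu:         apt install libudev-dev pkg-config
    ```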
    
    C-Bug O-Linux C-Dependencies 
    opened by LeSnake04 4
  • bevy::utils::HashMap API is a bit unexpected compared to std::collections::HashMap

    bevy::utils::HashMap API is a bit unexpected compared to std::collections::HashMap

    Bevy version

    b91541b6efff2236f6aefca0f2baa09aba54f505

    Operating system & version

    Win10/Debian, nightly rust

    What you did

    I spent a while trying to initialize a bevy::utils::HashMap, expecting it to be similar to the std lib one, but there is no new function, and the signature of from takes another HashMap instead of letting you initialize from an array of (key, value) pairs like std::collections::HashMap does. The latter is really useful for inline initialization of known data into a HashMap.

    What you expected to happen

    I only had the 0.5 docs to go off of, and those made me think ::new and ::from worked the same way here (they just link to the std::collections::HashMap docs). This probably needs some specific documentation if it was changed intentionally. I think keeping the same ::from() behaviour as std::collections::HashMap should be considered too, as it's rather useful when creating HashMaps from already known data.
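    The likely reason for the missing ::new is that bevy::utils::HashMap is a std HashMap with a non-default hasher type parameter, and std only defines new for the default RandomState. The same effect can be demonstrated with std types alone; FastHashMap below is a hypothetical alias standing in for bevy's:

    ```rust
    use std::collections::hash_map::DefaultHasher;
    use std::collections::HashMap;
    use std::hash::BuildHasherDefault;

    // Stand-in for bevy::utils::HashMap: same std HashMap, different hasher.
    type FastHashMap<K, V> = HashMap<K, V, BuildHasherDefault<DefaultHasher>>;

    fn main() {
        // `FastHashMap::new()` does not compile: `new` is only defined when the
        // hasher is RandomState. `default()` works for any hasher implementing
        // `Default`:
        let mut map: FastHashMap<&str, i32> = FastHashMap::default();
        map.insert("a", 1);

        // Inline initialization from known (key, value) data still works
        // through FromIterator / collect:
        let map2: FastHashMap<&str, i32> =
            [("x", 10), ("y", 20)].iter().cloned().collect();
        assert_eq!(map2["x"], 10);
        assert_eq!(map.get("a"), Some(&1));
    }
    ```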

    What actually happened

    N/A

    Additional information

    C-Docs E-Good-First-Issue C-Usability 
    opened by pcone 2
  • Placing an ImageBundle with transparent pixels over a NodeBundle cuts through the NodeBundle

    Placing an ImageBundle with transparent pixels over a NodeBundle cuts through the NodeBundle

    Bevy version

    Main as of time of writing, commit hash b91541b6efff2236f6aefca0f2baa09aba54f505

    Operating system & version

    Windows 10

    What you did

    I spawned an ImageBundle with an image that contains transparent pixels, then spawned a visible NodeBundle.

    What you expected to happen

    I expected the transparent pixels to reveal the underlying NodeBundle.

    What actually happened

    The transparent pixels revealed the background instead

    Additional information

    Minimal recreation here; press any key on the keyboard to spawn the NodeBundle and notice how the image cuts through it to the pink background.

    C-Bug A-Rendering A-UI 
    opened by Sheepyhead 0
  • bevy_render2::render_resource does not expose StencilOperation

    bevy_render2::render_resource does not expose StencilOperation

    Bevy version

    59bfbd3

    What you did

    Constructing my own render pipeline with stencil attachment.

    What you expected to happen

    All wgpu types exposed.

    What actually happened

    StencilOperation not exposed.
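    The requested change is a re-export. A toy sketch of the facade pattern involved; the module names here are stand-ins for the real crates, and wgpu_stub merely mimics having a type defined in a dependency:

    ```rust
    // A dependency crate defining the type (stand-in for wgpu).
    mod wgpu_stub {
        #[derive(Debug, PartialEq)]
        pub enum StencilOperation {
            Keep,
            Zero,
            Replace,
        }
    }

    // The facade module (stand-in for bevy_render2::render_resource) forwards
    // selected dependency types so users never import the dependency directly:
    mod render_resource {
        pub use crate::wgpu_stub::StencilOperation;
    }

    fn main() {
        // Both paths name the same type:
        assert_eq!(
            render_resource::StencilOperation::Keep,
            wgpu_stub::StencilOperation::Keep
        );
    }
    ```

    The issue amounts to StencilOperation missing from the list of such forwarded types.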

    A-Rendering C-Usability 
    opened by VVishion 0
Releases (v0.5.0)
Owner: Bevy Engine
A modular game engine built in Rust, with a focus on developer productivity and performance
Rust library to create a Good Game Easily

ggez What is this? ggez is a Rust library to create a Good Game Easily. The current version is 0.6.0-rc0. This is a RELEASE CANDIDATE version, which m

null 3k Sep 17, 2021
Extensible open world rogue like game with pixel art. Players can explore the wilderness and ruins.

Rusted Ruins Extensible open world rogue like game with pixel art. Players can explore the wilderness and ruins. This game is written in Rust. Screens

T. Okubo 314 Sep 16, 2021
A personal etude into rust software (RPG<-it's more fun to debug) development: Tales of the Great White Moose

TGWM (Tales of the Great White Moose) NB: Currently compiles. Should compile and run on both 1.28.0 and 1.31.1 if the Cargo.lock files are deleted. A

null 15 Jul 21, 2020
4fun open-source Cave Story reimplementation written in Rust

doukutsu-rs Download latest Nightly builds (Requires being logged in to GitHub) A re-implementation of Cave Story (Doukutsu Monogatari) engine written

null 238 Sep 15, 2021
Snake implemented in rust.

rsnake - An implementation of classic snake in rust This game was built using the piston_window window wrapper. Download the game If you're using mac-o

Maximilian Schulke 50 Sep 8, 2021
Minesweeper game developed with Rust, WebAssembly (Wasm), and Canvas

Click here to play the game. Minesweeper game Revealing all the cells without hitting the mines is the task. Each number in the cell denotes how

Karthik Nedunchezhiyan 20 Sep 10, 2021
This is a simple implementation of the classic snake game in rust

My snake game Looks like this. This is with Roboto Mono Nerd Font. If you use a different font it may look different or distorted. Install rust In ord

Konstantinos Kyriakou 16 Apr 4, 2021
The video game for Fonts of Power. A tabletop roleplaying game made in Rust with Bevy!

The code and rules for Fonts of Power, a tactical TTRPG / video game about exploring magical places. You can follow its development in our Discord ser

null 13 Aug 13, 2021
😠⚔️😈 A minimalistic 2D turn-based tactical game in Rust

Zemeroth is a turn-based hexagonal tactical game written in Rust. Support: patreon.com/ozkriff News: @ozkriff on twitter | ozkriff.games | facebook |

Andrey Lesnikóv 1.1k Sep 10, 2021
⬡ Zone of Control is a hexagonal turn-based strategy game written in Rust. [DISCONTINUED]

Zone of Control The project is discontinued Sorry, friends. ZoC is discontinued. See https://ozkriff.github.io/2017-08-17--devlog.html Downloads Preco

Andrey Lesnikóv 339 Aug 28, 2021
A Doom Renderer written in Rust.

Rust Doom A little Doom 1 & 2 Renderer written in Rust. Mostly written while I was learning the language about 2 years ago, so it might not the best e

Cristi Cobzarenco 2k Sep 17, 2021
Angolmois BMS player, Rust edition

Angolmois Rust Edition This is a direct, one-to-one translation of Angolmois to Rust programming language. Angolmois is a BM98-like minimalistic music

Kang Seonghoon 89 Jun 17, 2021
An implementation of Sokoban in Rust

sokoban-rs This is an implementation of Sokoban in the Rust Programming Language. An example level: Build Instructions Before building sokoban-rs, you

Sébastien Watteau 124 Jul 27, 2021
A work-in-progress, open-source, multi-player city simulation game.

Citybound is a city building game with a focus on realism, collaborative planning and simulation of microscopic details. It is independently developed

Citybound 6.5k Sep 10, 2021
ASCII terminal hexagonal map roguelike written in Rust

rhex Contributors welcome! Rhex is looking for contributors. See Contributing page for details. Introduction Simple ASCII terminal hexagonal map rogue

Dawid Ciężarkiewicz 125 Sep 8, 2021
Tool to view and solve puzzles from the lichess puzzle database

offline-chess-puzzles Tool to view and solve puzzles from the lichess puzzle database. It's a very simple tool for those who want to practice offline,

null 26 Sep 15, 2021
The classic tetris game written in Rust using ncurses

tetris.rs This is the classic tetris game I wrote to have a bit of fun with Rust. Installation and playing cargo install --

null 72 Aug 17, 2021
Rate my game setup 😜

minesweeper.rs Rate my game setup 😜

Vitaly Domnikov 9 Jul 24, 2021
A roguelike game in Rust

A fantasy deathcrawl in Rust Work in progress. To run, with Rust compiler and Cargo package manager installed: cargo run --release When building on W

Risto Saarelma 320 Sep 9, 2021