A refreshingly simple data-driven game engine built in Rust




What is Bevy?

Bevy is a refreshingly simple data-driven game engine built in Rust. It is free and open-source forever!


Bevy is still in the very early stages of development. APIs can and will change (now is the time to make suggestions!). Important features are missing. Documentation is sparse. Please don't build any serious projects in Bevy unless you are prepared to deal with constant breaking API changes.

Design Goals

  • Capable: Offer a complete 2D and 3D feature set
  • Simple: Easy for newbies to pick up, but infinitely flexible for power users
  • Data Focused: Data-oriented architecture using the Entity Component System paradigm
  • Modular: Use only what you need. Replace what you don't like
  • Fast: App logic should run quickly, and when possible, in parallel
  • Productive: Changes should compile quickly ... waiting isn't fun
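The "Data Focused" goal is easiest to see in code. Below is a std-only sketch of the entity-component-system idea (components as plain data, systems as functions over that data). It is illustrative only and does not use Bevy's actual API:

```rust
// Illustrative only: components are plain data, systems are functions
// that run over every entity carrying the required components.
// (Bevy's real API differs; this just shows the data-oriented idea.)

#[derive(Debug, Clone, Copy, PartialEq)]
struct Position { x: f32, y: f32 }

#[derive(Debug, Clone, Copy)]
struct Velocity { x: f32, y: f32 }

// A "world" as parallel arrays: entity i owns positions[i] and velocities[i].
struct World {
    positions: Vec<Position>,
    velocities: Vec<Velocity>,
}

// A "system": pure logic over component data, trivially parallelizable.
fn movement(world: &mut World, dt: f32) {
    for (pos, vel) in world.positions.iter_mut().zip(&world.velocities) {
        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
    }
}

fn main() {
    let mut world = World {
        positions: vec![Position { x: 0.0, y: 0.0 }],
        velocities: vec![Velocity { x: 1.0, y: 2.0 }],
    };
    movement(&mut world, 1.0);
    assert_eq!(world.positions[0], Position { x: 1.0, y: 2.0 });
}
```

Because systems declare exactly which data they touch, a scheduler can run non-conflicting systems in parallel, which is what the "Fast" goal refers to.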




Before contributing or participating in discussions with the community, you should familiarize yourself with our Code of Conduct and How to Contribute.

Getting Started

We recommend checking out The Bevy Book for a full tutorial.

Follow the Setup guide to ensure your development environment is set up correctly. Once set up, you can quickly try out the examples by cloning this repo and running the following command:

# Runs the "breakout" example
cargo run --example breakout

Fast Compiles

Bevy can be built just fine using the default configuration on stable Rust. However, for really fast iterative compiles, you should enable the "fast compiles" setup by following the instructions here.

Focus Areas

Bevy has the following Focus Areas. We are currently focusing our development efforts in these areas, and they will receive priority for Bevy developers' time. If you would like to contribute to Bevy, you are heavily encouraged to join in on these efforts:

Editor-Ready UI

PBR / Clustered Forward Rendering


Libraries Used

Bevy is only possible because of the hard work put into these foundational technologies:

  • wgpu-rs: modern / low-level / cross-platform graphics library inspired by Vulkan
  • glam-rs: a simple and fast 3D math library for games and graphics
  • winit: cross-platform window creation and management in Rust
  • spirv-reflect: Reflection API in Rust for SPIR-V shader bytecode

Bevy Cargo Features

This list outlines the different cargo features supported by Bevy. These allow you to customize the Bevy feature set for your use-case.

Third Party Plugins

Plugins are very welcome to extend Bevy's features. Guidelines are available to help with integration and usage.

Thanks and Alternatives

Additionally, we would like to thank the Amethyst, macroquad, coffee, ggez, rg3d, and Piston projects for providing solid examples of game engine development in Rust. If you are looking for a Rust game engine, it is worth considering all of your options. Each engine has different design goals, and some will likely resonate with you more than others.

  • Relicense Bevy under dual MIT/Apache-2.0

    Relicense Bevy under dual MIT/Apache-2.0


    We would like to relicense Bevy under the "dual MIT / Apache-2.0 license". This allows users to select either license according to their own preferences. There are Very Good Reasons for this (see the "Why?" section below). However, I can't just arbitrarily relicense open source code, despite being the maintainer / lead developer. Bevy has hundreds of contributors and they all agreed to license their contributions exclusively under our current instance of the MIT license.

    If you are mentioned in this issue, we need your help to make this happen

    To agree to this relicense, please read the details in this issue, then leave a comment with the following message:

    I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to choose either at their option.

    If you disagree, please respond with your reasoning (just don't expect us to change course at this point). Anyone who doesn't agree to the relicense will have any Bevy contributions that qualify as "copyrightable" removed or re-implemented.


    I originally chose to license Bevy exclusively under MIT for a variety of reasons:

    1. People and companies generally know and trust the MIT license more than any other license. Apache 2.0 is less known and trusted.
    2. It is short and easy to understand.
    3. Many people aren't familiar with the "multiple license options ... choose your favorite" approach. I didn't want to scare people away unnecessarily.
    4. Other open source engines like Godot have had a lot of success with MIT-only licensing.

    However, a variety of issues have come up that make dual-licensing Bevy under both MIT and Apache-2.0 compelling:

    1. The MIT license (arguably) requires binaries to reproduce countless copies of the same license boilerplate for every MIT library in use. MIT-only engines like Godot have complicated license compliance rules as a result.
    2. The Apache-2.0 license has protections from patent trolls and an explicit contribution licensing clause.
    3. The Rust ecosystem is largely Apache-2.0. Being available under that license is good for interoperation and opens the doors to upstreaming Bevy code into other projects (Rust, the async ecosystem, etc).
    4. The Apache license is incompatible with GPLv2, but MIT is compatible.

    Additionally, Bevy's current MIT license includes the Copyright (c) 2020 Carter Anderson line. I don't want to force Bevy users to credit me (and no one else) for the rest of time. If you agree to the relicense, you also agree to allow us to remove this copyright line crediting me.

    What will this look like?

    After getting explicit approval from each contributor of copyrightable work (as not all contributions qualify for copyright, due to not being a "creative work", e.g. a typo fix), we will add the following text to our README:

    ### License
    Licensed under either of
     * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
     * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
    at your option.
    ### Contribution
    Unless you explicitly state otherwise, any contribution intentionally submitted
    for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
    additional terms or conditions.

    We will add LICENSE-{MIT,APACHE} files containing the text of each license. We will also update the license metadata in our Cargo.toml to:

    license = "MIT OR Apache-2.0"

    Contributor checkoff

    • [x] @0x22fe
    • [x] @8bit-pudding
    • [x] @aclysma
    • [x] @adam-bates
    • [x] @Adamaq01
    • [x] @aevyrie
    • [x] @ak-1
    • [x] @akiross
    • [x] @Alainx277
    • [x] @alec-deason
    • [x] @alexb910
    • [x] @alexschrod
    • [x] @alice-i-cecile
    • [x] @alilee
    • [x] @AlisCode
    • [x] @aloucks
    • [x] @amberkowalski
    • [x] @AmionSky
    • [x] @anchpop
    • [x] @andoco
    • [x] @andreheringer
    • [x] @andrewhickman
    • [x] @AngelicosPhosphoros
    • [x] @AngelOnFira
    • [x] @Archina
    • [x] @ashneverdawn
    • [x] @BafDyce
    • [x] @bgourlie
    • [x] @BimDav
    • [x] @bitshifter
    • [x] @bjorn3
    • [x] @blaind
    • [x] @blamelessgames
    • [x] @blunted2night
    • [x] @Bobox214
    • [x] @Boiethios
    • [x] @bonsairobo
    • [x] @BoxyUwU
    • [x] @Byteron
    • [x] @CAD97
    • [x] @CaelumLaron
    • [x] @caelunshun
    • [x] @cart
    • [x] @CGMossa
    • [x] @CleanCut
    • [x] @ColdIce1605
    • [x] @Cupnfish
    • [x] @dallenng
    • [x] @Dash-L
    • [x] @Davier
    • [x] @dburrows0
    • [x] @dependabot[bot]
    • [x] @deprilula28
    • [x] @DGriffin91
    • [x] @dintho
    • [x] @Dispersia
    • [x] @DivineGod
    • [x] @Divoolej
    • [x] @DJMcNab
    • [x] @e00E
    • [x] @easynam
    • [x] @ElArtista
    • [x] @eliaspekkala
    • [x] @enfipy
    • [x] @figsoda
    • [x] @fintelia
    • [x] @Fishrock123
    • [x] @FlyingRatBull
    • [x] @forbjok
    • [x] @frewsxcv
    • [x] @freylint
    • [x] @Frizi
    • [x] @FSMaxB
    • [x] @FuriouZz
    • [x] @GabLotus
    • [x] @gcoakes
    • [x] @gfreezy
    • [x] @Git0Shuai
    • [x] @giusdp
    • [x] @GrantMoyer
    • [x] @Gregoor
    • [x] @Grindv1k
    • [x] @guimcaballero
    • [x] @Halfwhit
    • [x] @Havvy
    • [x] @HackerFoo
    • [x] @Hugheth
    • [x] @huhlig
    • [x] @hymm
    • [x] @ifletsomeclaire
    • [x] @iMplode-nZ
    • [x] @Incipium
    • [x] @iwikal
    • [x] @Ixentus
    • [x] @J-F-Liu
    • [x] @jacobgardner
    • [x] @jak6jak
    • [x] @jakerr
    • [x] @jakobhellermann
    • [x] @jamadazi
    • [x] @Jbat1Jumper
    • [x] @JCapucho
    • [x] @jcornaz
    • [x] @Jerald
    • [x] @jesseviikari
    • [x] @jihiggins
    • [x] @jleflang
    • [x] @jngbsn
    • [x] @joejoepie
    • [x] @JohnDoneth
    • [x] @Josh015
    • [x] @joshuajbouw
    • [x] @julhe
    • [x] @kaflu
    • [x] @karroffel
    • [x] @kedodrill
    • [x] @kokounet
    • [x] @Kurble
    • [x] @lachlansneff
    • [x] @lambdagolem
    • [x] @lassade
    • [x] @lberrymage
    • [x] @lee-orr
    • [x] @logannc
    • [x] @Lowentwickler
    • [x] @lukors
    • [x] @Lythenas
    • [x] @M2WZ
    • [x] @marcusbuffett
    • [x] @MarekLg
    • [x] @marius851000
    • [x] @MatteoGgl
    • [x] @maxwellodri
    • [x] @mccludav
    • [x] @memoryruins
    • [x] @mfrancis107
    • [x] @MGlolenstine
    • [x] @MichaelHills
    • [x] @MilanVasko
    • [x] @MinerSebas
    • [x] @mjhostet
    • [x] @mkhan45
    • [x] @mnmaita
    • [x] @mockersf
    • [x] @Moxinilian
    • [x] @MrEmanuel
    • [x] @mrk-its
    • [x] @msklywenn
    • [x] @mtsr
    • [x] @multun
    • [x] @mvlabat
    • [x] @naithar
    • [x] @NathanSWard
    • [x] @navaati
    • [x] @ncallaway
    • [x] @ndarilek
    • [x] @Nazariglez
    • [x] @Neo-Zhixing
    • [x] @nic96
    • [x] @NiklasEi
    • [x] @Nilirad
    • [x] @no1hitjam
    • [x] @notsimon
    • [x] @nside
    • [x] @ocornoc
    • [x] @Olaren15
    • [x] @OptimisticPeach
    • [x] @payload
    • [x] @Philipp-M
    • [x] @Plecra
    • [x] @PrototypeNM1
    • [x] @r00ster91
    • [x] @Ratysz
    • [x] @RedlineTriad
    • [x] @refnil
    • [x] @reidbhuntley
    • [x] @Restioson
    • [x] @RichoDemus
    • [x] @rmsc
    • [x] @rmsthebest
    • [x] @RobDavenport
    • [x] @robertwayne
    • [x] @rod-salazar
    • [x] @rparrett
    • [x] @ryanleecode
    • [x] @sapir
    • [x] @saveriomiroddi
    • [x] @sburris0
    • [x] @schell
    • [x] @sdfgeoff
    • [x] @ShadowMitia
    • [x] @simensgreen
    • [x] @simlay
    • [x] @simpuid
    • [x] @SmiteWindows
    • [x] @smokku
    • [x] @StarArawn
    • [x] @stefee
    • [x] @superdump
    • [x] @SvenTS
    • [x] @sY9sE33
    • [x] @szunami
    • [x] @tangmi
    • [x] @tarkah
    • [x] @TehPers
    • [x] @Telzhaak
    • [x] @termhn
    • [x] @tiagolam
    • [x] @the-notable
    • [x] @thebluefish
    • [x] @TheNeikos
    • [x] @TheRawMeatball
    • [x] @therealstork
    • [x] @thirdsgames
    • [x] @Tiagojdferreira
    • [x] @tigregalis
    • [x] @Toniman20
    • [x] @toothbrush7777777
    • [x] @TotalKrill
    • [x] @tristanpemble
    • [x] @trolleyman
    • [x] @turboMaCk
    • [x] @TypicalFork
    • [x] @undinococo
    • [x] @verzuz
    • [x] @Veykril
    • [x] @vgel
    • [x] @VitalyAnkh
    • [x] @w1th0utnam3
    • [x] @W4RH4WK
    • [x] @Waridley
    • [x] @Weibye
    • [x] @Weibye-Breach
    • [x] @wilk10
    • [x] @will-hart
    • [x] @willcrichton
    • [x] @WilliamTCarroll
    • [x] @woubuc
    • [x] @wyhaya
    • [x] @Xavientois
    • [x] @YohDeadfall
    • [x] @yrns
    • [x] @zaszi
    • [x] @zgotsch
    • [x] @zicklag
    • [x] @Zooce

    Contributors with "obsolete" changes (no need for approval)

    • adekau
    • ColonisationCaptain
    • temhotaokeaha

    Contributors with "trivial" changes that are ok to keep

    • follower
    • HyperLightKitsune
    • liufuyang
    • Raymond26
    • themilkybit
    • walterpie
    • EthanYidong

    Contributors with changes we reverted to unblock the relicense

    • TomBebb
    C-Enhancement A-Meta 
    opened by cart 290
  • Renderer Rework: Initial Merge Tracking Issue

    Renderer Rework: Initial Merge Tracking Issue

    The Bevy Renderer Rework is starting to stabilize and it is time to start planning how to upstream it! The intent of this issue is to track the work required to merge the renderer rework as soon as possible. This isn't a "renderer feature wishlist", but feel free to discuss what you think should be on this list!

    This is an issue to track work that still needs to be done. If there is a name in parentheses, that work has been claimed by someone.

    For some history on this effort, check out:

    • #2265
    • #2351
    • experimental pull requests in @cart's repo: https://github.com/cart/bevy/pulls?q=is%3Apr

    Here is a list of open PRs against the pipelined-rendering branch.

    Missing Required Features

    The new renderer must have (approximate) feature parity with the old renderer.

    • [ ] actual pipelining
      • currently ownership in the new renderer is set up in a way to facilitate this, but actual parallel pipelining in the context of winit will require some additional work
    • [x] simple custom shaders (@cart)
    • [x] #2653 (@Davier)
      • [x] text (@StarArawn)
    • [x] #2560 (@StarArawn)
    • [x] #2537 (@superdump)
      • [x] re-enable gltf tangent imports #2741 (@superdump)
    • [x] crevice bugfixing
      • [x] https://github.com/LPGhatguy/crevice/issues/29
      • [ ] investigate potential array alignment corner case (@cart)
    • [x] #2543 (@superdump)
    • [x] #2552 (@cart)
    • [x] #2695 (@zicklag)
      • [x] #2717 (@cart)
    • [x] #3042 ~~#2541~~ (@superdump @cart)
    • [x] Update BufferVec to use Queue (like UniformVec does) (#2847)
    • [ ] Better run_sub_graph() input api
      • It shouldn't require correctly-ordered inputs; accept a "named" map instead.
    • [x] #2741 (@superdump)
    • [x] #2631 (@zicklag)
    • [x] #2700 (@superdump)
    • [x] #2555 (@cart)
    • [x] Add Opaque 3d render phase, sort that front-to-back, and sort transparent phase back-to-front (@superdump)

    Missing Nice-To-Have Features

    These aren't required for a merge, but would be very nice to have.

    • [x] Visibility culling (probably requires merging and implementing https://github.com/bevyengine/rfcs/pull/12) (#2861)
      • This is an important part of the renderer api that isn't fleshed out yet and might meaningfully change the architecture.
    • [x] #3060 ~~#2642~~ (@cart @zicklag)
    • [x] #2726 (@superdump)
    • [x] #2700 (@superdump)
    • [ ] #2876 (@ChangeCaps)
    • [x] #3153 (@superdump)
    • [ ] shadow filtering (@superdump)
    • [ ] make querying for "inner lifetimes" more ergonomic
    • [ ] DrawFunction ergonomics (@cart)
    • [ ] "high level" data binding (@cart)
    • [x] shader defs (@cart)
    • [x] #3137 (@cart)
    • [x] ~~depth pre-pass~~ (@superdump)
      • Tabled for now because it generally reduces performance in our tests. In the future new cases like using the depth prepass for early SSAO might justify adding it.
    • [ ] nicer api for flexible/custom vertex attributes (@cart)
    • [ ] Generic MaterialPlugin<T: Material> for low boilerplate custom shaders (@cart)
    • [x] #3193

    Discussions To Have Before Merging

    • [ ] Consider RenderDevice + RenderQueue -> GpuDevice + GpuQueue
    • [ ] Consider using atomic counter instead of UUID for render resource ids
    • [ ] Should we keep the BevyDefault trait?
    • [ ] Agree on the final "shadow enable/disable" api

    Steps to Merge

    • [x] Move work from https://github.com/cart/bevy/tree/pipelined-rendering to https://github.com/bevyengine/bevy/tree/pipelined-rendering
    • [x] Merge as many small, orthogonal, easy-to-retain-attribution changes from the pipelined-rendering branch as possible:
      • [x] AppBuilder -> App: #2531 (@bjorn3)
      • [x] SubApps (@cart)
      • [x] Break out SystemParam state and world lifetimes #2605 (@cart)
      • [x] "Lifetimeless" system params (@cart)
      • [x] #2283 (@cart)
      • [x] #2673 (@cart)
        • [x] world entity clearing (@cart)
        • [x] "invalid archetype" (@cart)
        • [x] stage configuration to disable automatic buffer application (@cart)
    • [x] Decide whether or not to "vendor" crevice (we will)
    • [ ] RFC for new renderer architecture (@cart)
    • [x] Swap out old renderer crates for new renderer crates (after Missing Required Features section is complete)
      • [x] Port examples
      • [x] update docs
    • [x] Manually merge (to prevent bors from squashing history and making attribution difficult)
    C-Enhancement A-Rendering A-Meta C-Tracking-Issue 
    opened by cart 83
  • Bevy Development Process

    Bevy Development Process


    What problem does this solve or what need does it fill?

    A more clearly defined set of rules and practices for managing issues in the Bevy development process.

    Since Bevy is growing rapidly and more and more contributors are joining, we need a better-defined practice for managing and organizing issues and pull requests.

    What solution would you like?

    Introduction of a proper Bevy project board inside GitHub, with automated movement between columns based on reviews/approvals, etc. A revamp of GitHub labels. A triage team or something of the sort for those in charge of assigning proper labels.

    What alternative(s) have you considered?

    Relying on the current tag/labeling scheme.

    • This, however, is not ideal, since there is no clear, defined structure for adding labels and organizing the issues/PRs that come in.

    Using external kanban board software to track issues.

    • Relying on external software should be avoided, especially when GitHub provides the necessary features.

    A few helpful resources

    Github project board automation


    • [x] Revamp github labels
    • [ ] Integrate github project board (and setup automation)
    • [x] Add a PR template
    • [x] Automated labeling for PRs (https://github.com/actions/labeler)
    C-Enhancement A-Meta 
    opened by NathanSWard 72
  • Schedule v2

    Schedule v2

    Draft Note

    This is a draft because I'm looking for feedback on this api. Please let me know if you can think of improvements or gaps.

    Bevy Schedule V2

    Bevy's old Schedule was simple, easy to read, and easy to use. But it also had significant limitations:

    • Only one Schedule allowed
    • Very static: you are limited to using the tools we gave you (stages are lists of systems, you can add stages to schedules)
    • Couldn't switch between schedules at runtime
    • Couldn't easily support "fixed timestep" scenarios

    V2 of Bevy Schedule aims to solve these problems while still maintaining the ergonomics we all love:

    Stage Trait

    Stage is now a trait. You can implement your own stage types!

     struct MyStage;
     impl Stage for MyStage {
         fn run(&mut self, world: &mut World, resources: &mut Resources) {
             // your custom stage logic goes here
         }
     }

    There are now multiple built in Stage types:


     // runs systems in parallel
     let parallel_stage = SystemStage::parallel();
     // runs systems serially (in registration order)
     let serial_stage = SystemStage::serial();
     // you can also write your own custom SystemStageExecutor
     // (MyCustomExecutor stands in for your own executor type)
     let custom_executor_stage = SystemStage::new(MyCustomExecutor::new());


    Bevy now supports states. More on this below!


    You read that right! Schedules are also stages, which means you can nest Schedules

     let schedule = Schedule::default()
         .with_stage("update", Schedule::default()
             .with_stage("nested_stage", SystemStage::serial()));
     // schedule stages can be downcasted
     let mut update_stage = schedule.get_stage_mut::<Schedule>("update").unwrap();


    By popular demand, we now support States!

    • Each state value has its own "enter", "update", and "exit" schedule
    • You can queue up state changes from any system
    • When a StateStage runs, it will dequeue all state changes and run through each state's lifecycle
    • If at the end of a StateStage, new states have been queued, they will immediately be applied. This means moving between states will not be delayed across frames.
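As a sketch of the lifecycle described above, here is a std-only model of a StateStage-like driver that drains queued state changes within a single stage run. All names here are hypothetical; this is not Bevy's implementation:

```rust
use std::collections::VecDeque;

// Illustrative only: a StateStage-like driver that drains queued state
// changes, running exit/enter hooks, until no more changes are queued.
// This mirrors the "transitions apply within the same stage run" behavior.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AppState { Loading, InGame }

struct StateDriver {
    current: AppState,
    queued: VecDeque<AppState>,
    log: Vec<String>,
}

impl StateDriver {
    fn run_stage(&mut self) {
        // run the current state's "update" first
        self.log.push(format!("update {:?}", self.current));
        // then drain every queued transition within this same stage run
        while let Some(next) = self.queued.pop_front() {
            self.log.push(format!("exit {:?}", self.current));
            self.log.push(format!("enter {:?}", next));
            self.current = next;
            self.log.push(format!("update {:?}", self.current));
        }
    }
}

fn main() {
    let mut driver = StateDriver {
        current: AppState::Loading,
        queued: VecDeque::from(vec![AppState::InGame]),
        log: Vec::new(),
    };
    driver.run_stage();
    // The transition happened inside a single stage run (no frame delay).
    assert_eq!(driver.current, AppState::InGame);
}
```

The key design point is the `while let` loop: because the queue is drained until empty before the stage returns, a transition queued during an "enter" schedule still runs in the same frame.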

    The new state.rs example is the best illustration of this feature. It shows how to transition between a Menu state and an InGame state. The texture_atlas.rs example has been adapted to use states to transition between a Loading state and a Finished state.

    This enables much more elaborate app models:

     #[derive(Clone, PartialEq, Eq, Hash)]
     enum AppState {
         Loading,
         InGame,
     }

     App::build()
         // This initializes the state (adds the State<AppState> resource and adds StateStage<T> to the schedule)
         // State stages are added right after UPDATE by default, but you can also manually add StateStage<T> anywhere
         .add_state(AppState::Loading)
         // A state's "enter" schedule is run once when the state is entered
         .state_enter(AppState::Loading, SystemStage::parallel())
         // A state's "update" schedule is run once on every app update
         // Note: Systems implement IntoStage, so you can do this:
         .state_update(AppState::Loading, check_asset_loads)
         // A state's "exit" schedule is run once when the state is exited
         .state_exit(AppState::Loading, setup_world)
         .state_update(AppState::InGame, SystemStage::parallel())
         // You can of course still compose your schedule "normally"
         // add_system_to_stage assumes that the stage is a SystemStage
         .add_system_to_stage(stage::POST_UPDATE, do_another_thing);

     // this system checks to see if assets are loaded and transitions to the InGame state when they are finished
     fn check_asset_loads(mut state: ResMut<State<AppState>>, asset_server: Res<AssetServer>) {
         if assets_finished_loading(&asset_server) {
             // state changes are put into a queue, which the StateStage consumes during execution
             state.queue(AppState::InGame);
         }
     }

     fn setup_world(commands: &mut Commands, state: Res<State<AppState>>, textures: Res<Assets<Textures>>) {
         // This system only runs after check_asset_loads has checked that all assets have loaded
         // This means we can now freely access asset data
         let texture = textures.get(SOME_HANDLE).unwrap();
         // spawn more things here
     }

    Run Criteria

    Criteria-driven stages (and schedules): only run a stage or schedule when a certain criterion is met.

         .add_stage_after(stage::UPDATE, "only_on_10_stage", SystemStage::parallel()
             .with_run_criteria(|value: Res<usize>| if *value == 10 { ShouldRun::Yes } else { ShouldRun::No }))
         .add_stage_after(stage::UPDATE, "one_and_done", Schedule::default()
             .with_run_criteria(RunOnce::default()));

    Fixed Timestep:

     app.add_stage_after(stage::UPDATE, "fixed_update", SystemStage::parallel()
         .with_run_criteria(FixedTimestep::step(0.4)));
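For intuition, a fixed-timestep criterion boils down to a time accumulator that tells the scheduler how many whole steps have elapsed. A std-only sketch of that pattern (not Bevy's implementation):

```rust
// Illustrative only: the accumulator pattern behind a fixed-timestep
// run criterion. Each frame adds its delta time; the stage runs once
// for every full `step` accumulated, and the remainder is banked.
struct FixedTimestep {
    step: f64,
    accumulator: f64,
}

impl FixedTimestep {
    fn new(step: f64) -> Self {
        Self { step, accumulator: 0.0 }
    }

    // Returns how many times the fixed stage should run this frame.
    fn update(&mut self, delta: f64) -> u32 {
        self.accumulator += delta;
        let mut runs = 0;
        while self.accumulator >= self.step {
            self.accumulator -= self.step;
            runs += 1;
        }
        runs
    }
}

fn main() {
    let mut ts = FixedTimestep::new(0.5);
    assert_eq!(ts.update(1.0), 2);  // two full steps elapsed
    assert_eq!(ts.update(0.25), 0); // not enough accumulated yet
    assert_eq!(ts.update(0.25), 1); // banked time reaches one step
}
```

Banking the remainder is what makes fixed-timestep game logic deterministic regardless of the render frame rate.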

    Schedule Building

    Adding stages now takes a Stage value:

         .add_stage_after(stage::UPDATE, "my_stage", SystemStage::parallel())

    Typed stage building with nesting:

         .stage("my_stage", |my_stage: &mut Schedule| {
             my_stage
                 .add_stage_after("substage_1", "substage_2", SystemStage::parallel())
                 .add_system_to_stage("substage_2", some_other_system)
                 .stage("a_2", |a_2: &mut Schedule| {
                     a_2.add_stage("a_2_1", StateStage::<MyState>::default())
                 })
         })
         .add_stage("b", SystemStage::serial());

    Unified Schedule

    No more separate "startup" schedule! It has been moved into the main schedule with a RunOnce run criterion.

    startup_stage::STARTUP (and variants) have been removed in favor of this:

         // this:
         .add_startup_system(setup)
         // is equivalent to this:
         .stage(stage::STARTUP, |startup: &mut Schedule| {
             startup.add_system_to_stage(startup_stage::STARTUP, setup)
         })
         // choose whichever one works for you!

    This is a non-breaking change; you can continue using the AppBuilder .add_startup_system() shorthand.

    Discussion Topics

    • General API thoughts: What do you like? What do you dislike?
    • Do States behave as expected? Are they missing anything?
    • Does FixedTimestep behave as expected?
    • I added support for "owned builders" and "borrowed builders" for most schedule/stage building:
      // borrowed (add_x and in some cases set_x)
      app
          .add_stage("my_stage", SystemStage::parallel())
          .stage("my_stage", |my_stage: &mut Schedule| {
              my_stage
                  .add_stage("a", SystemStage::parallel())
                  .add_system_to_stage("a", some_system)
          });
      // owned (with_x)
      app
          .add_stage("my_stage", Schedule::default()
              .with_stage("a", SystemStage::parallel())
              .with_system_in_stage("a", some_system));
      • Does this make sense? We could remove with_x in favor of borrowed add_x in most cases. This would reduce the api surface, but it would mean slightly more cumbersome initialization. We also definitely want with_x in some cases (such as stage.with_run_criteria())

    Next steps:

    • (Maybe) Support queuing up App changes (which will be applied right before the next update):
      commands.app_change(|app: &mut App| {app.schedule = Schedule::default();})
    • (Maybe) Event driven stages
      app.add_stage_after(stage::UPDATE, EventStage::<SomeEvent>::default().with_system(a))
      • These could easily build on top of the existing schedule features. It might be worth letting people experiment with their own implementations for a bit.
      • We could also somehow try to work in "system inputs" to this. Aka when an event comes in, pass it in to each system in the schedule as input.
    C-Enhancement A-ECS 
    opened by cart 64
  • System-order-independent ECS change tracking

    System-order-independent ECS change tracking

    The current change tracking system is very lightweight, both for event generation and consumption (which is why we can enable it by default), but it has the following problem:

      System1: Change<T> query
      System2: mutates T
      System1: Change<T> query ... misses System2's changes

    In most cases, this can be worked around via system registration order and stages, but I anticipate some cases that cannot work around this behavior, as well as general confusion from users, like: "why isn't my system running ... I'm definitely mutating this component?"
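The hazard can be reproduced with a minimal std-only model of per-frame change flags (illustrative only; names are hypothetical, not Bevy's API):

```rust
// Illustrative only: a per-frame "mutated" flag, cleared at the end of
// each frame, reproduces the ordering hazard described above.
struct Tracked<T> {
    value: T,
    mutated: bool,
}

impl<T> Tracked<T> {
    fn set(&mut self, value: T) {
        self.value = value;
        self.mutated = true;
    }
    // what a Changed<T>-style query would observe
    fn changed(&self) -> bool {
        self.mutated
    }
    // run by the scheduler at the end of the frame
    fn clear(&mut self) {
        self.mutated = false;
    }
}

fn main() {
    let mut t = Tracked { value: 0, mutated: false };

    // Frame: System1 (reader) runs BEFORE System2 (writer)
    let seen_by_system1 = t.changed(); // false: change hasn't happened yet
    t.set(42);                         // System2 mutates T
    t.clear();                         // end of frame: flags wiped

    // System1 never observes the change, even though it happened this frame.
    assert!(!seen_by_system1);
    assert!(!t.changed());
}
```

Stateful "order independent" tracking replaces the single per-frame flag with per-reader bookkeeping (e.g. a change counter each reader compares against), so a change made after a reader runs is still visible to it on a later run.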

    I think it would be useful to (optionally) allow stateful "order independent" change tracking. I posted some ideas here: #54

    opened by cart 57
  • [Merged by Bors] - Add a method `iter_combinations` on query to iterate over combinations of query results

    [Merged by Bors] - Add a method `iter_combinations` on query to iterate over combinations of query results

    Related to discussion on discord

    With const generics, it is now possible to write generic iterator over multiple entities at once.

    This enables patterns of query iterations like

     for [e1, e2, e3] in query.iter_combinations() {
         // do something with the relation of all three entities
     }

    The compiler is able to infer the correct iterator for a given size of array, so either of these works:

    for [e1, e2] in query.iter_combinations()  { ... }
    for [e1, e2, e3] in query.iter_combinations()  { ... }

    This feature can be very useful for systems like collision detection.

    When you ask for combinations of size K of N entities:

    • if K == N, you get one result containing all entities
    • if K < N, you get all possible subsets of N with size K, without repetition
    • if K > N, the result set is empty (no combinations of size K exist)
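These semantics can be sanity-checked with a small std-only helper that counts the size-K combinations of N items (illustrative only, not the actual Query implementation):

```rust
// Illustrative only: count the size-k combinations of n items without
// repetition, matching the K-of-N semantics listed above.
fn combination_count(n: u64, k: u64) -> u64 {
    if k > n {
        return 0; // K > N: empty result set
    }
    // n! / (k! * (n - k)!), computed incrementally; each intermediate
    // value is a binomial coefficient, so the division is always exact
    let mut result = 1u64;
    for i in 0..k {
        result = result * (n - i) / (i + 1);
    }
    result
}

fn main() {
    assert_eq!(combination_count(3, 3), 1); // K == N: one result
    assert_eq!(combination_count(4, 2), 6); // K < N: all subsets of size K
    assert_eq!(combination_count(2, 3), 0); // K > N: empty
}
```

For collision detection over N entities this is why `iter_combinations::<2>()` visits N·(N−1)/2 pairs rather than N² ordered pairs.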
    C-Enhancement A-ECS C-Usability S-Ready-For-Final-Review 
    opened by Frizi 51
  • [Merged by Bors] - log system info on startup

    [Merged by Bors] - log system info on startup


    • We already log the adapter info on startup when bevy_render is present. It would be nice to have more info about the system to be able to ask users to submit it in bug reports


    • Use the sysinfo crate to get all the information
      • I made sure it only gets the required information to avoid unnecessary system requests
    • Add a system that logs this on startup
      • This system is currently in bevy_diagnostics because I didn't really know where to put it.

    Here's an example log from my system:

    INFO bevy_diagnostic: SystemInformation { os: "Windows 10 Pro", kernel: "19044", cpu: "AMD Ryzen 7 5800X 8-Core Processor", core_count: "8", memory: "34282242 KB" }


    • Added a new default log when starting a bevy app that logs the system information
    C-Usability A-Diagnostics 
    opened by IceSentry 50
  • Dynamic Systems and Components

    Dynamic Systems and Components

    Related to: #32 Resolves: #142

    Note: this is related to #32 (the StableTypeId discussion) because it changes the TypeId struct used to identify components, but it does not directly address the problem of dynamic loading of Rust plugins. See https://github.com/bevyengine/bevy/issues/32#issuecomment-703303754.

    This PR will attempt to establish the ability to create systems and components that have been determined at runtime instead of compile time. I'm going to try to implement this in one PR because I won't be sure about the sufficiency of the design until I actually get a working example of a dynamically registered system and component ( which I will include with this PR ).

    Note: If we want to merge pieces of this PR one at a time, I am perfectly fine with that. I am making sure that each step is cleanly separated into its own commit and can easily be ported to an individual PR.

    I'm going to try to attack this problem one step at a time:


    Non-Rust Component Ids

    Status: Completed in commit: "Implement Custom Component Ids"

    Currently bevy_hecs uses std::any::TypeId to uniquely identify the component IDs, but this does not allow us to define components that have a non-Rust origin. The first commit in the PR has migrated all of the internals of the bevy ECS to use a new ComponentId instead that is defined as:

     /// Uniquely identifies a type of component. This is conceptually similar to
     /// Rust's [`TypeId`], but allows for external type IDs to be defined.
     #[derive(Eq, PartialEq, Hash, Debug, Clone, Copy)]
     pub enum ComponentId {
         /// A Rust-native [`TypeId`]
         RustTypeId(TypeId),
         /// An arbitrary ID that allows you to identify types defined outside of
         /// this Rust compilation
         ExternalId(u64),
     }

    Establish Dynamic Queries

    Status: Completed in commit: "Implement Dynamic Systems"

    This adds a new state type parameter to the Fetch trait. This allows compile-time constructed queries to be constructed with State = () and runtime constructed queries to set the state parameter to a DynamicComponentQuery.

    Establish Dynamic Component Insertion

    Status: Completed in commit: "Add Dynamic Component Support"

    This adds a new implementation of the Bundle trait, RuntimeBundle which can be used to insert untyped components at runtime by providing a buffer of bytes and the required type information.

    Create Examples

    Status: Completed in respective commits

    We also have new examples for dynamic_systems and dynamic_components.

    Remaining Work

    The biggest thing necessary at this point is a solid review. There's a lot of code changed, but thankfully not a ton of new logic. The bulk of the changes are repetitive changes required to add the necessary type parameters and such for the new Fetch `State` type parameter.

    Otherwise, there are a couple of todo!()'s in bevy_scene because I don't know enough about the Bevy scene architecture to integrate scenes with external components yet. I think that could reasonably be left to a separate PR, but I'm fine either way.

    C-Enhancement A-ECS 
    opened by zicklag 48
  • OpenGL support

    OpenGL support

    wgpu now has a working OpenGL backend. I know Bevy intends to support it, but it does not work yet.

    Creating this issue to make it easier to track progress on that front.

    C-Enhancement A-Rendering 
    opened by inodentry 46
  • [Merged by Bors] - Reliable change detection

    [Merged by Bors] - Reliable change detection

    Problem Definition

    The current change tracking (via flags for both components and resources) fails to detect changes made by systems that are scheduled to run later in the frame than the system trying to detect them.

    This issue is discussed at length in #68 and #54.

    This is very much a draft PR, and contributions are welcome and needed.


    Criteria

    1. Each change is detected at least once, no matter the ordering.
    2. Each change is detected at most once, no matter the ordering.
    3. Changes should be detected the same frame that they are made.
    4. Competitive ergonomics. Ideally does not require opting-in.
    5. Low CPU overhead of computation.
    6. Memory efficient. This must not increase over time, except where the number of entities / resources does.
    7. Changes should not be lost for systems that don't run.
    8. A frame needs to act as a pure function. Given the same set of entities / components it needs to produce the same end state without side-effects.

    Exact change-tracking proposals satisfy criteria 1 and 2. Conservative change-tracking proposals satisfy criterion 1 but not 2. Flaky change-tracking proposals satisfy criterion 2 but not 1.

    Code Base Navigation

    There are three types of flags:

    • Added: A piece of data was added to an entity / Resources.
    • Mutated: A piece of data was able to be modified, because its DerefMut was accessed
    • Changed: The bitwise OR of Added and Mutated
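    The flag semantics above can be sketched like this (the bit layout is illustrative, not bevy's actual representation):

```rust
// Minimal sketch of the three flag kinds; the bit layout is illustrative.
#[derive(Clone, Copy)]
struct ComponentFlags(u8);

impl ComponentFlags {
    const ADDED: u8 = 0b01;
    const MUTATED: u8 = 0b10;

    fn added(self) -> bool {
        self.0 & Self::ADDED != 0
    }
    fn mutated(self) -> bool {
        self.0 & Self::MUTATED != 0
    }
    // "Changed" is the bitwise OR of Added and Mutated.
    fn changed(self) -> bool {
        self.0 & (Self::ADDED | Self::MUTATED) != 0
    }
}
```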

    The special behavior of ChangedRes with respect to the scheduler is being removed in #1313 and does not need to be reproduced.

    ChangedRes and friends can be found in "bevy_ecs/core/resources/resource_query.rs".

    The Flags trait for Components can be found in "bevy_ecs/core/query.rs".

    ComponentFlags are stored in "bevy_ecs/core/archetypes.rs", defined on line 446.


    Proposals

    Proposal 5 was selected for implementation.

    Proposal 0: No Change Detection

    The baseline, where computations are performed on everything regardless of whether it changed.

    Type: Conservative


    Advantages:
    • already implemented
    • will never miss events
    • no overhead


    Disadvantages:
    • tons of repeated work
    • doesn't allow users to avoid repeating work (or monitoring for other changes)

    Proposal 1: Earlier-This-Tick Change Detection

    The current approach as of Bevy 0.4. Flags are set, and then flushed at the end of each frame.

    Type: Flaky


    Advantages:
    • already implemented
    • simple to understand
    • low memory overhead (2 bits per component)
    • low time overhead (clear every flag once per frame)


    Disadvantages:
    • misses systems based on ordering
    • systems that don't run every frame miss changes
    • duplicates detection when looping
    • can lead to unresolvable circular dependencies

    Proposal 2: Two-Tick Change Detection

    Flags persist for two frames, using a double-buffer system identical to that used in events.

    A change is observed if it is found in either the current frame's list of changes or the previous frame's.

    Type: Conservative


    Advantages:
    • easy to understand
    • easy to implement
    • low memory overhead (4 bits per component)
    • low time overhead (bit mask and shift every flag once per frame)


    Disadvantages:
    • can result in a great deal of duplicated work
    • systems that don't run every frame miss changes
    • duplicates detection when looping
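    A minimal sketch of the double-buffering described above (names illustrative):

```rust
// Sketch of double-buffered change flags: a change is observed if it is in
// either the current or the previous frame's set of changes.
struct DoubleBufferedFlag {
    current: bool,
    previous: bool,
}

impl DoubleBufferedFlag {
    fn mark_changed(&mut self) {
        self.current = true;
    }
    // A change is observed if it happened this frame or last frame.
    fn observed(&self) -> bool {
        self.current || self.previous
    }
    // At the end of each frame, flags are shifted rather than cleared outright.
    fn end_of_frame(&mut self) {
        self.previous = self.current;
        self.current = false;
    }
}
```

A change made in frame N thus remains observable during frame N + 1, which is what makes the scheme conservative: it is detected at least once, possibly twice.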

    Proposal 3: Last-Tick Change Detection

    Flags persist for two frames, using a double-buffer system identical to that used in events.

    A change is observed if it is found in the previous frame's list of changes.

    Type: Exact


    Advantages:
    • exact
    • easy to understand
    • easy to implement
    • low memory overhead (4 bits per component)
    • low time overhead (bit mask and shift every flag once per frame)


    Disadvantages:
    • change detection is always delayed, possibly causing painful chained delays
    • systems that don't run every frame miss changes
    • duplicates detection when looping

    Proposal 4: Flag-Doubling Change Detection

    Combine Proposal 2 and Proposal 3. Differentiate between JustChanged (current behavior) and Changed (Proposal 3).

    Pack this data into the flags according to this implementation proposal.

    Type: Flaky + Exact


    Advantages:
    • allows users to choose the kind of change detection they need
    • easy to implement
    • low memory overhead (4 bits per component)
    • low time overhead (bit mask and shift every flag once per frame)


    Disadvantages:
    • users must specify the type of change detection required
    • still quite fragile to system ordering effects when using the flaky JustChanged form
    • cannot get immediate + exact results
    • systems that don't run every frame miss changes
    • duplicates detection when looping

    [SELECTED] Proposal 5: Generation-Counter Change Detection

    A global counter is increased after each system is run. Each component saves the time of last mutation, and each system saves the time of last execution. Mutation is detected when the component's counter is greater than the system's counter. Discussed here. How to handle addition detection is unsolved; the current proposal is to use the highest bit of the counter as in proposal 1.

    Type: Exact (for mutations), flaky (for additions)


    Advantages:
    • low time overhead (set component counter on access, set system counter after execution)
    • robust to systems that don't run every frame
    • robust to systems that loop


    Disadvantages:
    • moderately complex implementation
    • must be modified as systems are inserted dynamically
    • medium memory overhead (4 bytes per component + system)
    • unsolved addition detection
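    The counter comparison at the heart of Proposal 5 can be sketched as follows (names illustrative, not the actual implementation):

```rust
// Sketch of generation-counter change detection (Proposal 5); names illustrative.
struct ComponentTicks {
    last_mutated: u32, // global counter value when this component was last written
}

struct SystemTicks {
    last_run: u32, // global counter value when this system last executed
}

// A mutation is detected when the component's counter is greater than the
// counter saved at the system's last execution.
fn is_changed(component: &ComponentTicks, system: &SystemTicks) -> bool {
    component.last_mutated > system.last_run
}
```

Because each component remembers the tick of its last mutation, a system that skips several frames still observes the change the next time it runs, which is what makes this proposal robust to systems that don't run every frame.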

    Proposal 6: System-Data Change Detection

    For each system, track which system's changes it has seen. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.

    Type: Exact


    Advantages:
    • exact
    • conceptually simple


    Disadvantages:
    • requires storing data on each system
    • implementation is complex
    • must be modified as systems are inserted dynamically

    Proposal 7: Total-Order Change Detection

    Discussed here. This proposal is somewhat complicated by the new scheduler, but I believe it should still be conceptually feasible. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.

    Type: Exact


    Advantages:
    • exact
    • efficient data storage relative to other exact proposals


    Disadvantages:
    • requires access to the scheduler
    • complex implementation and difficulty grokking
    • must be modified as systems are inserted dynamically


    Tests

    • We will need to verify properties 1, 2, 3, 7 and 8. Priority: 1 > 2 = 3 > 8 > 7
    • Ideally we can use identical user-facing syntax for all proposals, allowing us to re-use the same tests for each.
    • When writing tests, we need to carefully specify order using explicit dependencies.
    • These tests will need to be duplicated for both components and resources.
    • We need to be sure to handle cases where ambiguous system orders exist.

    changing_system is always the system that makes the changes, and detecting_system always detects the changes.

    The component / resource changed will be simple boolean wrapper structs.

    Basic Added / Mutated / Changed

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs before detecting_system
    • verify at the end of tick 2

    At Least Once

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs after detecting_system
    • verify at the end of tick 2

    At Most Once

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs once before detecting_system
    • increment a counter based on the number of changes detected
    • verify at the end of tick 2

    Fast Detection

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs before detecting_system
    • verify at the end of tick 1

    Ambiguous System Ordering Robustness

    2 x 3 x 2 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs [before/after] detecting_system in tick 1
    • changing_system runs [after/before] detecting_system in tick 2

    System Pausing

    2 x 3 design:

    • Resources vs. Components
    • Added vs. Changed vs. Mutated
    • changing_system runs in tick 1, then is disabled by run criteria
    • detecting_system is disabled by run criteria until it is run once during tick 3
    • verify at the end of tick 3

    Addition Causes Mutation

    2 design:

    • Resources vs. Components
    • adding_system_1 adds a component / resource
    • adding_system_2 adds the same component / resource
    • verify the Mutated flag at the end of the tick
    • verify the Added flag at the end of the tick

    The first check tests for: https://github.com/bevyengine/bevy/issues/333
    The second check tests for: https://github.com/bevyengine/bevy/issues/1443

    Changes Made By Commands

    • adding_system runs in Update in tick 1, and sends a command to add a component
    • detecting_system runs in Update in tick 1 and 2, after adding_system
    • We can't detect the changes in tick 1, since they haven't been processed yet
    • If we were to track these changes as being emitted by adding_system, we can't detect the changes in tick 2 either, since detecting_system has already run once after adding_system :(


    Benchmarks

    See: general advice, Criterion crate

    There are several critical parameters to vary:

    1. entity count (1 to 10^9)
    2. fraction of entities that are changed (0% to 100%)
    3. cost to perform work on changed entities, i.e. workload (1 ns to 1s)

    1 and 2 should be varied between benchmark runs. 3 can be added on computationally.

    We want to measure:

    • memory cost
    • run time

    We should collect these measurements across several frames (100?) to reduce bootup effects and accurately measure the mean, variance and drift.

    Entity-component change detection is much more important to benchmark than resource change detection, due to the orders of magnitude higher number of pieces of data.

    No change detection at all should be included in benchmarks as a second control for cases where missing changes is unacceptable.


    Graphs

    1. y: performance, x: log_10(entity count), color: proposal, facet: performance metric. Set cost to perform work to 0.
    2. y: run time, x: cost to perform work, color: proposal, facet: fraction changed. Set number of entities to 10^6
    3. y: memory, x: frames, color: proposal


    Questions

    1. Is the theoretical categorization of the proposals correct according to our tests?
    2. How does the performance of the proposals compare without any load?
    3. How does the performance of the proposals compare with realistic loads?
    4. At what workload does more exact change tracking become worth the (presumably) higher overhead?
    5. When does adding change-detection to save on work become worthwhile?
    6. Is there enough divergence in performance between the best solutions in each class to ship more than one change-tracking solution?

    Implementation Plan

    1. Write a test suite.
    2. Verify that tests fail for existing approach.
    3. Write a benchmark suite.
    4. Get performance numbers for existing approach.
    5. Implement, test and benchmark various solutions using a Git branch per proposal.
    6. Create a draft PR with all solutions and present results to team.
    7. Select a solution and replace existing change detection.
    A-ECS P-High A-Core 
    opened by alice-i-cecile 45
  • System sets and parallel executor v2

    System sets and parallel executor v2

    It's a wall of text, I know - you don't have to read all of it, its purpose is to clarify the details should questions arise.


    Prior discussion: this comment and on.

    This PR builds on top of the incomplete SystemSet implementation branch that didn't make it into 0.4; it was split from schedule-v2 branch before the merge.

    The branch introduced system sets: SystemStage is now made up of one or more SystemSet, each has its own run criterion and contains the systems whose execution said criterion specifies.

    It also moved ShouldRun to schedule's level in the file hierarchy, and implemented run criteria and related methods as a reusable struct.

    The rebase that was required to make this branch mergeable was messy beyond belief, so I opted to merge in the master instead. It's still rather messy, but at least the commits are coherent.


    Definitions:
    • exclusive system - system that requires exclusive (mutable) access to entirety of world and/or resources. The new implementation automatically executes these sequentially, either at the start or end (not fully implemented) of the stage.
    • parallel/parallelizable system - any system that doesn't require exclusive access. These are executed in the middle of the stage, as many as possible at once.
    • thread-local system - parallelizable system that accesses one or several thread-local resources (the ones in !Send storage).
    • thread-agnostic system - parallelizable system that doesn't access any thread-local resources.

    Collateral changes

    2021-01-23: this is not exhaustive, and outdated - the list does not cover commits made since original post.

    Necessary (and not so much) changes that aren't directly related to the implementation:

    • Renamed access::Access to ArchetypeAccess, made it private. No particular reason.
    • Removed ThreadLocalExecution, System::thread_local_execution() - this information is now encoded in ::archetype_component_access(), ::resource_access(), and new ::is_thread_local().
    • Extended TypeAccess with support for "reads/writes everything" to facilitate the above.
    • Added CondensedTypeAccess.
    • Moved behavior of thread_local_func (command buffer merging) and similar fields from parallelizable System implementors into their run_exclusive() implementation.
    • Renamed ThreadLocalSystemFn and the module into_thread_local to ExclusiveSystemFn and into_exclusive respectively, to differentiate systems with this access pattern from actually thread-local systems.
    • Implemented IntoSystem for FnMut(&mut World) and FnMut(&mut Resources), both result in ExclusiveSystemFn. No particular reason.
    • Implemented ThreadLocal system parameter.
    • Implemented system insertion methods with labels and dependencies (with string labels for now) for SystemSet and SystemStage.
    • Added ShouldRun::NoAndLoop.
    • Renamed System::update() to System::update_access(), for clarity.
    • Changed system sets to store systems in NonNull rather than Box. Requires auditing, and a sanity check.


    2021-01-23: this is slightly outdated now, due to changes to exclusive systems and availability of local-to-thread tasks.

    Reading is not required. It's here because people were curious and picking apart someone else's code is unpleasant, annotated or not.

    The idea is similar to those I used in yaks and this prototype. I also wrote an entire long thing about the topic.


    In short:
    1. Evaluate run criteria of system sets.
    2. If any of the sets have changed, rebuild the cached scheduling data.
    3. Run exclusive systems that want to be at the start of stage.
    4. If the world's archetypes were changed since the last time parallel systems were run, update their access and the relevant part of scheduling data.
    5. Enter outer compute pool scope.
    6. Prepare parallel systems - spawn tasks, reset counters, queue systems with no dependencies, etc.
    7. Try running a queued thread-local system. Queue its dependants if this was the last system they were waiting on.
    8. Enter inner compute pool scope.
    9. Try starting some queued thread-agnostic systems.
    10. See if any thread-agnostic systems have finished, queue their dependants if those were the last systems they were waiting on.
    11. Exit inner scope.
    12. If there are any queued or running systems, continue from step 7.
    13. Exit outer scope.
    14. Merge in command buffers.
    15. Run exclusive systems that want to be at the end of stage.
    16. Re-evaluate run criteria, continue from step 3 if needed.


    In detail:
    1. As needed, stage calls <ParallelSystemStageExecutor as SystemStageExecutor>::execute_stage() with &mut [SystemSet] that contains the systems with their uncondensed access sets, &HashMap<SystemIndex, Vec<SystemIndex>> that encodes the dependency tree, &mut World, &mut Resources.
    2. Run criteria of the sets are evaluated. If no sets should be run, the algorithm returns early; if no sets should be run yet any number of them request their run criteria to be reevaluated, it panics (to avoid looping infinitely).
    3. If any of the system sets have had new systems inserted, pre-existing scheduling data is discarded, and fresh data is constructed:
      1. Any cached data is cleared.
      2. Systems of sets are iterated, collecting all distinct (not "access everything") accessed types of parallel systems into a pair of hash sets (one for archetype-components, another for resources). If the world's archetypes were also changed, each system's access is updated before collecting its types.
      3. Hash sets of types are converted into vectors. Bitsets are resized to be able to fit as many bits as there are parallel systems.
      4. Systems of sets are iterated again, sorting indices of systems with exclusive access into list of exclusive systems, generating partial parallel scheduling data by "condensing" access sets to bitsets (the bitset is a mask that produces the systems' access set when applied to the vector of all types accessed by the stage). At the same time, a map of SystemIndex to the respective system's usize index in the parallel systems vector is populated.
      5. Dependencies map is iterated, inserting dependants' indices into their dependencies' parallel scheduling data, using the map from the previous step.
    4. All exclusive systems that want to run at the start of stage (currently all exclusive systems) are executed, if their parent system sets' criteria call for it.
    5. If the archetypes were changed since the last time parallel systems were run, said systems' accesses are updated, and their archetype-component bitsets are recondensed (types are collected and sets are converted into bitset masks).
    6. Compute pool scope is created and entered.
    7. Each parallel system is prepared for execution:
      1. Safety bit is reset.
      2. Whether the system should be run this iteration (the result of its parent system set's run criterion evaluation) is cached into a bitset.
      3. If the system should be run, its task is spawned (if it's not a thread-local system) into the scope, and it's queued to run if it has no dependencies, or has its dependency counter reset otherwise.
    8. Running a thread-local system on the main thread is attempted:
      1. Queued systems are filtered by thread-local systems (bitset intersection).
      2. The first system out of those that passes the parallelization check (see below) is executed on the main thread.
      3. Its index is removed from the queued systems, and any of its dependants that should run this iteration have their dependency counters decremented by one.
      4. Dependants that have their counter reach zero are queued.
    9. A new inner compute pool scope is created and entered, and thread-agnostic system execution task is spawned into it:
      1. Queued systems are filtered by not thread-local systems (bitset difference).
      2. Any systems that pass the parallelization check (see below) are signalled to start via a channel and are marked as running.
      3. "Active access" sets representing the union of all running systems' access sets are extended with the newly running systems' sets.
      4. Newly running systems are removed from the queue.
      5. If there are any systems running, wait for at least one of them to finish, unmark it as running.
      6. Any of now finished systems' dependants that should run this iteration have their dependency counters decremented by one.
      7. Dependants that have their counter reach zero are queued.
      8. Active access sets are rebuilt from the access sets of all still-running systems.
    10. When the inner scope is done, if there are any running or queued systems, continue from step 8. Otherwise, exit outer scope.
    11. Command buffers of parallel systems (that should have run this iteration) are merged in by running their exclusive part.
    12. All exclusive systems that want to run at the end of stage (currently none of exclusive systems) are executed, if their parent system sets' criteria call for it.
    13. System sets' run criteria that requested to be checked again are reevaluated. If no sets should be run, the algorithm returns; if no sets should be run yet any number of them request their run criteria to be reevaluated, it panics (to avoid looping infinitely). Otherwise, continue from step 4.
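    The "condensing" of access sets described in step 3 can be sketched like this (using string type names and a u64 as a stand-in for the real bitsets):

```rust
use std::collections::HashMap;

// Sketch of "condensing" access sets into bitsets: each distinct type accessed
// by the stage gets an index, and a system's access set becomes a bitmask over
// those indices. A u64 stands in for the real bitset type.
fn condense(all_types: &[&str], accessed: &[&str]) -> u64 {
    // Map each type accessed anywhere in the stage to a bit position.
    let index: HashMap<&str, usize> =
        all_types.iter().enumerate().map(|(i, t)| (*t, i)).collect();
    // Set the bit for each type this particular system accesses.
    accessed.iter().fold(0u64, |mask, t| mask | (1 << index[t]))
}
```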

    Parallelization check

    System's condensed resource access is checked against active resource access, ditto for archetype-component access. "Checked against" here means "if one reads all, other can't write anything, otherwise check if bitsets of writes are disjoint with bitsets of reads and writes". That's it.
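    Ignoring the "reads all" special case, the bitset check can be sketched like this (a u64 stands in for the real bitsets; the same check is applied once for resources and once for archetype-components):

```rust
// Sketch of the parallelization check: a system may start only if its writes
// are disjoint from the active reads and writes, and its reads are disjoint
// from the active writes. u64 is a stand-in for the real bitset type.
fn can_run(sys_reads: u64, sys_writes: u64, active_reads: u64, active_writes: u64) -> bool {
    sys_writes & (active_reads | active_writes) == 0
        && sys_reads & active_writes == 0
}
```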


    • SystemIndex is a pair of usize indices: system's parent set and its position in it.
    • Labels used by the stage are not used by the executor, it instead relies solely on the SystemIndex mapping.
    • Hot-path machinery uses usize indices (corresponding to systems' position in the vector of parallel scheduling data) and bitsets.
    • Parallel systems can request immutable access to all of world and/or resources and are seamlessly scheduled along others.
    • Thread-agnostic systems are not required to finish in the same inner scope that starts them.
    • Systems belonging to sets whose criteria disable them are simply never queued, and their tasks (for thread-agnostic systems) are not spawned.

    Things to do

    Numbered for discussion convenience:

    1. Audit unsafe code: ThreadLocal system parameter implementation, NonNull use in SystemSet.
    2. Targeted tests of new behaviors and invariants. Smoke tests for future development convenience.
    3. ~~Dependency tree verification. I think the best place to do it is in SystemStage::run_once(), where all the necessary information is first seen in one place.~~ mostly done, still refactoring
    4. ~~Exclusive systems dependencies and sorting. Should be done at the same time as point 3. There are stubs in the implementation that allow executing exclusive systems at either start of stage or end of stage, but no way to specify when such a system would like to be run. I think this could be handled by dependencies: exclusive systems that depend on parallel systems are automatically shoved to the end of stage, and vice versa; conflicts result in a panic during tree verification.~~ partially irrelevant due to full exclusive/parallel schism, sorting is topological now but will
    5. ~~Serial stage executor. As of now, it's barely functional(?) and does no dependency-based sorting whatsoever; this should probably be handled after point 4.~~ mostly done, still refactoring
    6. ~~Decide if we want the safety bit. I've never seen that check fail, but it's about as lightweight as it can be, so I say we keep it.~~ removed, it became completely trivial with introduction of local-to-thread tasks
    7. Decide how to handle the run criteria infinite loop (when all criteria evaluate to a combination of ShouldRun::No and ShouldRun::NoAndLoop). Current strategy is to panic, could use a more meaningful panic message.
    8. ~~Decide if we want to sort parallel systems' command buffer merging in any way. Currently, it's not defined, but should be ordered by set then insertion order into set.~~ it's topologically sorted now, exploiting a side-effect of validating the dependency graph
    9. ~~Decide if we want to merge in parallel systems' command buffers before or after end-of-stage exclusives are run. Currently, it's before.~~ we have both options now, thanks to point 12
    10. ~~Consider "optional dependencies". Right now, if system A depends on system B and B gets disabled by a run criterion of its parent system set, A will not run and there is no way to make it run without running B. This should not be the case: execution order dependencies should only specify execution order and never execution fact.~~ all dependencies are now "soft", disabling a system does not disable its dependants; considering supporting "hard" dependencies
    11. ~~Consider simplifying the TypeAccess implementation by merging AccessSet into it. See CondensedTypeAccess for comparison.~~ done in c4e826166810a66102904024ea45d41e1b8e1073
    12. ~~Consider better API for system insertion into stage/set. Related: #1060.~~ adopted the "system descriptor" builder pattern
    13. Consider system labels that aren't &'static str.
    14. ~~Plumb the labels/dependencies API throughout the hierarchy; best done after point 12.~~
    15. Actually make use of system sets in StateStage, etc.
    16. Tighten API in general: currently some of the things that have no business being public are public.
    17. Minor refactors and internal tweaks, mostly for my own peace of mind regarding maintainability.
    18. Documentation, examples. I haven't done any public documentation, but I think there's plenty of internal documentation of the executor itself.
    19. Propagate the newly available/required patterns to the rest of the engine and examples, benchmark. This should be done after the API-related concerns are addressed, of course. For what it's worth, the examples I've tried running so far all have Just Worked.

    This should cover the vast majority of things, but I'm bound to have missed something. I will slowly start on some of the items in the todo list; however, most of them will benefit from discussion at this point, and some can be tackled by someone else and/or in a different PR.

    Happy holidays!

    C-Enhancement A-ECS 
    opened by Ratysz 45
  • The documentation for `ReflectComponent` is inconsistent with how `Reflect` is implemented for lists

    The documentation for `ReflectComponent` is inconsistent with how `Reflect` is implemented for lists

    Bevy version


    What you did

    The documentation for ReflectComponent currently reads:

        /// Uses reflection to set the value of this [`Component`] type in the entity to the given value.

    However, internally, the implementation of apply for lists:

    /// Applies the elements of `b` to the corresponding elements of `a`.
    /// If the length of `b` is greater than that of `a`, the excess elements of `b`
    /// are cloned and appended to `a`.
    /// # Panics
    /// This function panics if `b` is not a list.
    pub fn list_apply<L: List>(a: &mut L, b: &dyn Reflect) {
        if let ReflectRef::List(list_value) = b.reflect_ref() {
            for (i, value) in list_value.iter().enumerate() {
                if i < a.len() {
                    if let Some(v) = a.get_mut(i) {
                        v.apply(value);
                    }
                } else {
                    List::push(a, value.clone_value());
                }
            }
        } else {
            panic!("Attempted to apply a non-list type to a list type.");
        }
    }
    This means List[1, 2, 3].apply(List[4, 5]) == List[4, 5, 3].

    So the result of calling ReflectComponent::apply() on a component with a list in it would not actually result in the components being equal afterwards.

    In particular, this was causing problems in bevy_ggrs, where the Children component is being rolled back. We used ReflectComponent::apply since that sounded like the right thing to do, but since list_apply didn't remove elements if self.len() > value.len(), we were getting leftover children that shouldn't be there.

    To work around this, we used ReflectComponent::remove followed by ReflectComponent::insert. However, doing this causes unnecessary copies of the entire children hierarchy on every snapshot restore, and also needlessly triggers Added queries, causing further performance regressions.

    What went wrong

    I think the least confusing thing for users, would be if Reflect::apply for lists actually removed excess elements, so List[1, 2, 3].apply(List[4, 5]) == List[4, 5]
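    A sketch of those proposed semantics, using plain Vecs rather than bevy's Reflect machinery:

```rust
// Sketch of the proposed behavior: like list_apply, but excess elements of `a`
// are removed so the result matches `b`'s contents. Plain Vec stand-in, not
// bevy code.
fn list_apply_truncating<T: Clone>(a: &mut Vec<T>, b: &[T]) {
    for (i, value) in b.iter().enumerate() {
        if i < a.len() {
            a[i] = value.clone();
        } else {
            a.push(value.clone());
        }
    }
    // The key difference from the current implementation: drop leftovers.
    a.truncate(b.len());
}
```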

    Alternatively, the documentation for ReflectComponent::apply should specifically warn about this gotcha, and we need some alternative way of performing in-place updates of components.

    Additional information

    C-Bug S-Needs-Triage 
    opened by johanhelsing 2
  • bevy_render: Run calculate_bounds in the end-of-update exclusive systems

    bevy_render: Run calculate_bounds in the end-of-update exclusive systems


    Objective
    • Avoid slower than necessary first frame after spawning many entities due to them not having Aabbs and so being marked visible
      • Avoids unnecessarily large system and VRAM allocations as a consequence


    Solution
    • I noticed when debugging the many_cubes stress test in Xcode that the MeshUniform binding was much larger than it needed to be. This is because all mesh entities are initially marked visible: they have no Aabbs yet, since calculate_bounds runs in PostUpdate and no system commands are applied before the visibility-check systems that need the Aabbs execute. The solution is to run the calculate_bounds system just before the previous system commands are applied, which is at the end of the Update stage.
    A-Rendering C-Performance S-Ready-For-Final-Review 
    opened by superdump 0
  • "Border" around circle in 2d_shapes example

    Bevy version


    [Optional] Relevant system information

    $ cargo --version
    cargo 1.66.0-nightly (071eeaf21 2022-10-22)
    $ uname -a
    Linux church 5.15.77 #1-NixOS SMP Thu Nov 3 14:59:20 UTC 2022 x86_64 GNU/Linux
    AdapterInfo { name: "Intel(R) UHD Graphics 620 (WHL GT2)", vendor: 32902, device: 16032, device_type: IntegratedGpu, driver: "Intel open-source Mesa driver", driver_info: "Mesa 22.2.2", backend: Vulkan }

    What you did

    $ cargo run --features wayland --example 2d_shapes

    What went wrong

    There's a "border" around the circle:


    I don't think that's supposed to be there.

    C-Bug S-Needs-Triage 
    opened by Munksgaard 0
  • Make WorldId Hash and SparseSetIndex

    Make WorldId Hash and SparseSetIndex

    What problem does this solve or what need does it fill?

    Currently WorldId can't be used as a key in a HashMap or a SparseSet within bevy. This makes it challenging to branch behavior based on specific worlds (e.g. doing things differently for different SubApps). WorldId just wraps a usize, so this should be safe to do.

    What solution would you like?

    Implement/derive Hash and SparseSetIndex for WorldId.
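    A minimal sketch of why this works, using a simplified stand-in for WorldId (the real type and any per-world bookkeeping are bevy's, not shown here):

```rust
use std::collections::HashMap;

// Simplified stand-in for bevy's WorldId: it just wraps a usize, so Hash and
// Eq can simply be derived.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct WorldId(usize);

// With Hash derived, WorldId works directly as a HashMap key, e.g. for
// branching behavior per sub-app world.
fn label_for(worlds: &HashMap<WorldId, &'static str>, id: WorldId) -> Option<&'static str> {
    worlds.get(&id).copied()
}
```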

    What alternative(s) have you considered?

    Worlds could be semi-uniquely identified through other means, like having a unique resource, but adding utility to the world's ID seems like the most straightforward choice.

    Additional context

    This would be particularly useful for generalizing cross-app extract functions. This came up while I was trying to make an extract-function manager between a parent and multiple SubApp worlds, like so:

    SparseSet<WorldId, Vec<Box<dyn Fn(&mut World, &mut World)>>>
    A-ECS C-Usability 
    opened by recatek 0
  • Merge SpriteBundle and TextureAtlasBundle

    Merge SpriteBundle and TextureAtlasBundle

    What problem does this solve or what need does it fill?

    My game is composed of entities that contain either a SpriteBundle or a SpriteSheetBundle. When I want to update the color of an entity, I have to check each time whether it contains a Sprite or a TextureAtlasSprite component before I can set that component's inner color field. I don't want to have to make this check: functionally it doesn't matter to me whether an entity is rendered via a Sprite or a SpriteSheet, I just want to change the color of the texture.

    Basically I believe the Sprite and TextureAtlasSprite components contain data that is too coupled.

    Ideally I would like to be able to set Color only once for the entity, and then have the color be propagated to any sprite bundle of the entity. Same for 'anchor' or other sprite-related fields like flip_x, etc.

    What solution would you like?

    I think I would like to see most fields common to 'Sprite' and 'TextureAtlasSprite' grouped into a separate component. Something like:

    pub struct SpriteRenderComponent {
        pub color: Color,
        pub flip_x: bool,
        pub flip_y: bool,
        pub custom_size: Option<Vec2>,
        pub rect: Option<Rect>,
        pub anchor: Anchor,
    }

    And then change the bundles into:

    pub struct SpriteBundle {
        pub render: SpriteRenderComponent,
        pub texture: SpriteTextureComponent,
    }

    pub enum SpriteTextureComponent {
        SpriteTexture { texture: Handle<Image> },
        TextureAtlasSpriteTexture { texture_atlas: Handle<TextureAtlas>, index: usize },
    }
    and then all sprite-based entities can just use the SpriteBundle, instead of having separate SpriteBundle and TextureAtlasBundle types. This way I can change the color of all sprite-based entities without having to worry about whether they were created using a texture atlas (which is just an inner implementation detail).
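    The shape of that proposal can be sketched in plain Rust (the types below are illustrative stand-ins, not bevy API): with the shared data in one component and the texture source behind an enum, a single code path recolors both kinds of sprite entity.

    ```rust
    #![allow(dead_code)]

    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Color(f32, f32, f32, f32);

    struct SpriteRenderComponent { color: Color, flip_x: bool }

    // Stand-in for the proposed texture enum; paths stand in for asset handles.
    enum SpriteTexture {
        Plain { path: &'static str },
        Atlas { path: &'static str, index: usize },
    }

    struct SpriteEntity { render: SpriteRenderComponent, texture: SpriteTexture }

    // One code path updates color regardless of texture kind.
    fn set_color(e: &mut SpriteEntity, c: Color) { e.render.color = c; }

    fn main() {
        let white = Color(1.0, 1.0, 1.0, 1.0);
        let mut plain = SpriteEntity {
            render: SpriteRenderComponent { color: white, flip_x: false },
            texture: SpriteTexture::Plain { path: "icon.png" },
        };
        let mut atlas = SpriteEntity {
            render: SpriteRenderComponent { color: white, flip_x: false },
            texture: SpriteTexture::Atlas { path: "sheet.png", index: 3 },
        };
        let red = Color(1.0, 0.0, 0.0, 1.0);
        set_color(&mut plain, red);
        set_color(&mut atlas, red);
        assert_eq!(plain.render.color, red);
        assert_eq!(atlas.render.color, red);
        println!("both sprites recolored");
    }
    ```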

    What alternative(s) have you considered?

    • I was thinking of having a global 'Color' component per entity, which could be propagated to sprite.color or textureAtlasSprite.color, but that seems overly complicated (some entities do not need a color). Besides, the actual problem is that the Sprite and TextureAtlasSprite components are doing too much; some information should be shared between them.
    C-Enhancement A-Rendering C-Usability 
    opened by cBournhonesque 2
Bevy Engine
A modular game engine built in Rust, with a focus on developer productivity and performance

Neptune A brand new free, open-source minimal and compact game engine built in rust. Design Goals We plan to make Neptune a small and minimal engine,

Levitate 17 Jan 25, 2023
My first attempt at game programming. This is a simple target shooting game built in macroquad.

sergio My first attempt at game programming. This is a simple target shooting game built in macroquad. Rules Hit a target to increase score by 1 Score

Laz 1 Jan 11, 2022
game engine built in rust, using wgpu and probably other stuff too

horizon game engine engine for devpty games, made in 99.9% rust and 0.1% shell. this is our main project currently. the engine will be used for most i

DEVPTY 2 Apr 12, 2022
Minecraft-esque voxel engine prototype made with the bevy game engine. Pending bevy 0.6 release to undergo a full rewrite.

vx_bevy A voxel engine prototype made using the Bevy game engine. Goals and features Very basic worldgen Animated chunk loading (ala cube world) Optim

Lucas Arriesse 125 Dec 31, 2022
2-player game made with Rust and "ggez" engine, based on "Conway's Game of Life"

fight-for-your-life A 2-player game based on the "Conway's Game of Life", made with Rust and the game engine "ggez". Create shapes on the grid that wi

Petros 3 Oct 25, 2021
A game of snake written in Rust using the Bevy game engine, targeting WebGL2

Snake using the Bevy Game Engine Prerequisites cargo install cargo-make Build and serve WASM version Set your local ip address in Makefile.toml (loca

Michael Dorst 0 Dec 26, 2021
2d Endless Runner Game made with Bevy Game Engine

Cute-runner A 2d Endless Runner Game made with Bevy Game Engine. Table of contents Project Infos Usage Screenshots Disclaimer Project Infos Date: Sept

JoaoMarinho 2 Jul 15, 2022
A game made in one week for the Bevy engine's first game jam

¿Quien es el MechaBurro? An entry for the first Bevy game jam following the theme of "Unfair Advantage." It was made in one week using the wonderful B

mike 20 Dec 23, 2022
A Client/Server game networking plugin using QUIC, for the Bevy game engine.

Bevy Quinnet A Client/Server game networking plugin using QUIC, for the Bevy game engine. Bevy Quinnet QUIC as a game networking protocol Features Roa

Gilles Henaux 65 Feb 20, 2023
Solana Game Server is a decentralized game server running on Solana, designed for game developers

Solana Game Server* is the first decentralized Game Server (aka web3 game server) designed for game devs. (Think web3 SDK for game developers as a ser

Tardigrade Life Sciences, Inc 16 Dec 1, 2022
Simple RUST game with the Bevy Engine

Simple RUST Game using the Bevy Engine YouTube videos for this code base: Episode 1 - Rust Game Development tutorial from Scratch with Bevy Engine Epi

null 150 Jan 7, 2023
A Simple Rust Game Engine For 2D

VeeBee VeeBee Is A Nice And Simple Game Engine For 2D. Features Sprites & Images! Btw There Is A Built In 2D Pack (But The Textures Are REALY BAD, Sor

null 2 Feb 2, 2022
A simple Minecraft written in Rust with the Piston game engine

hematite A simple Minecraft written in Rust with the Piston game engine How To Open a World This method is only for personal use. Never distribute cop

PistonDevelopers 1.7k Dec 22, 2022
Bevy Simple Portals is a Bevy game engine plugin aimed to create portals.

Portals for Bevy Bevy Simple Portals is a Bevy game engine plugin aimed to create portals. Those portals are (for now) purely visual and can be used t

Sélène Amanita 11 May 28, 2023
Victorem - easy UDP game server and client framework for creating simple 2D and 3D online game prototype in Rust.

Victorem Easy UDP game server and client framework for creating simple 2D and 3D online game prototype in Rust. Example Cargo.toml [dependencies] vict

Victor Winbringer 27 Jan 7, 2023
A dependency-free chess engine library built to run anywhere.

♔chess-engine♚ A dependency-free chess engine library built to run anywhere. Demo | Docs | Contact Me Written in Rust Why write a Chess engine?

adam mcdaniel 355 Dec 26, 2022
🎮 A Realtime Multiplayer Server/Client Game example built entirely with Rust 🦀

Example of a 🎮 Realtime Multiplayer Web Game Server/Client built entirely using Rust 🦀

Nick Baker 5 Dec 17, 2022
Scion is a tiny 2D game library built on top of wgpu, winit and legion.

Scion is a 2D game library made in Rust. Please note that this project is in its first milestones and is subject to change according to convenience need

Jérémy Thulliez 143 Dec 25, 2022
Work-in-Progress, opinionated game framework built on Bevy.

Bones A work-in-progress, opinionated game meta-engine built on Bevy. Under development for future use in the Jumpy game, and possibly other FishFolk

Fish Folk 9 Jan 3, 2023