Provides a mechanism to lay out data into GPU buffers according to WGSL's memory layout rules

Overview

Provides a mechanism to lay out data into GPU buffers ensuring WGSL's memory layout requirements are met.

Features

  • supports all WGSL host-shareable types + wrapper types (&T, &mut T, Box<T>, ...)
  • supports data types from a multitude of crates as features
  • covers a wide area of use cases (see examples)

Motivation

Having to manually lay out data into GPU buffers can become very tedious and error prone. How do you make sure the data in the buffer is laid out correctly? How do you enforce that layout so future changes don't break this delicate balance?

encase gives you the ability to make sure at compile time that your types will be laid out correctly.

Design

The core trait is WgslType, which mainly contains metadata about the given type.

The WriteInto, ReadFrom and CreateFrom traits represent the ability of a type to be written into the buffer, read from the buffer and created from the buffer respectively.

Most data types can implement the above traits via their respective macros.

The UniformBuffer, StorageBuffer, DynamicUniformBuffer and DynamicStorageBuffer structs are wrappers around an underlying raw buffer (a type implementing BufferRef and/or BufferMut depending on required capability). They facilitate the read/write/create operations.
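
For example, a struct that derives WgslType can be written through one wrapper and created back through another, since the derive also provides the write/read/create capabilities. A minimal sketch (the Light struct is illustrative only; it reuses the mint types shown in the examples below):

use encase::{WgslType, UniformBuffer, StorageBuffer};

#[derive(WgslType)]
struct Light {
    position: mint::Vector2<f32>,
    intensity: f32,
}

let light = Light {
    position: mint::Vector2 { x: 1.0, y: 2.0 },
    intensity: 0.5,
};

// writing needs WriteInto, which the derive provides
let mut uniform = UniformBuffer::new(Vec::new());
uniform.write(&light).unwrap();
let bytes = uniform.into_inner();

// creating a value needs CreateFrom (ReadFrom would read into an existing value);
// for this simple struct the uniform and storage layouts coincide
let storage = StorageBuffer::new(bytes);
let round_trip: Light = storage.create().unwrap();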

Examples

Write affine transform to uniform buffer

use encase::{WgslType, UniformBuffer};

#[derive(WgslType)]
struct AffineTransform2D {
    matrix: glam::Mat2,
    translate: glam::Vec2
}

let transform = AffineTransform2D {
    matrix: glam::Mat2::IDENTITY,
    translate: glam::Vec2::ZERO,
};

let mut buffer = UniformBuffer::new(Vec::new());
buffer.write(&transform).unwrap();
let byte_buffer = buffer.into_inner();

// write byte_buffer to GPU

assert_eq!(&byte_buffer, &[0, 0, 128, 63, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 128, 63, 0, 0, 0, 0, 0, 0, 0, 0]);
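
(Here 0, 0, 128, 63 is 1.0f32 in little-endian byte order: the two identity-matrix columns come first, followed by the zero translation vector, 24 bytes in total with no trailing padding required.)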

Create vector instance by reading from dynamic uniform buffer at specific offset

use encase::DynamicUniformBuffer;

// read byte_buffer from GPU
let byte_buffer = [1u8; 256 + 8];

let mut buffer = DynamicUniformBuffer::new(&byte_buffer);
buffer.set_offset(256);
let vector: mint::Vector2<i32> = buffer.create().unwrap();

assert_eq!(vector, mint::Vector2 { x: 16843009, y: 16843009 });
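
The vector reads back as 16843009 because every byte in the buffer is 1 and 0x01010101 interpreted as a little-endian i32 is 16843009; set_offset(256) simply skips the first 256 bytes before the 8-byte vector is created.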

Write and read back data from storage buffer

use encase::{WgslType, ArrayLength, StorageBuffer};

#[derive(WgslType)]
struct Positions {
    length: ArrayLength,
    #[size(runtime)]
    positions: Vec<mint::Point2<f32>>
}

let mut positions = Positions {
    length: ArrayLength,
    positions: Vec::from([
        mint::Point2 { x: 4.5, y: 3.4 },
        mint::Point2 { x: 1.5, y: 7.4 },
        mint::Point2 { x: 4.3, y: 1.9 },
    ])
};

let mut byte_buffer = Vec::new();

let mut buffer = StorageBuffer::new(&mut byte_buffer);
buffer.write(&positions).unwrap();

// write byte_buffer to GPU

// change length on GPU side
byte_buffer[0] = 2;

// read byte_buffer from GPU

let mut buffer = StorageBuffer::new(&mut byte_buffer);
buffer.read(&mut positions).unwrap();

assert_eq!(positions.positions.len(), 2);
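
Overwriting byte 0 works because the ArrayLength field is laid out first, as the runtime-sized array's u32 element count; when the buffer is read back, the positions Vec is truncated to that count, hence the assertion of 2.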

Write different data types to dynamic storage buffer

use encase::{WgslType, DynamicStorageBuffer};

let mut byte_buffer = Vec::new();

let mut buffer = DynamicStorageBuffer::new_with_alignment(&mut byte_buffer, 64);
let offsets = [
    buffer.write(&[5.; 10]).unwrap(),
    buffer.write(&vec![3u32; 20]).unwrap(),
    buffer.write(&glam::Vec3::ONE).unwrap(),
];

// write byte_buffer to GPU

assert_eq!(offsets, [0, 64, 192]);
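
The offsets follow from the 64-byte alignment passed to new_with_alignment: the first value occupies 40 bytes, so the next write is rounded up to offset 64; the second occupies 80 bytes and ends at 144, so the third is rounded up to 192.
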
Comments
  • Avoid `clippy::missing_const_for_fn`

    As in the title. This is actually not a library's responsibility; I would count that as a false positive on Clippy's part.

    See https://github.com/rust-lang/rust-clippy/issues/8854.

    opened by daxpedda 4
  • How can you pad array of structures properly?

    Hello,

    Is there a way to add proper padding for dynamically sized arrays containing structures? Something like Vec<MyStruct>, so not structures containing dynamically sized arrays as can be seen in the example code. Here's my wgpu 13.1 compute shader program, which compiles and runs, but the resulting calculations are wrong. Only the first few instances are calculated properly and the rest are initialized to zero, although they should not be zero, since that's not how the buffer is initialized to begin with.

    Ideally I would like to directly pad an array of cgmath Vector3 values for my shader program, with a type signature like Vec<Vector3<f32>>, but even having a wrapper struct that derives ShaderType would be good enough for now.

    WGSL file:

    struct Vec3 {
        x: f32,
        y: f32,
        z: f32
    };
    
    @group(0)
    @binding(0)
    var<storage, read> test_arr: array<Vec3>;
    
    @group(0)
    @binding(1)
    var<storage, read_write> output_buf: array<Vec3>;
    
    @compute
    @workgroup_size(1)
    fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
        var gg = test_arr[global_id.x];
        gg.x = 7.2;
        gg.y = 2.2;
        gg.z = 1.2;
        output_buf[global_id.x] = gg;
    }
    
    

    Rust program:

    
             #[derive(ShaderType)]
             pub struct Vec3Wrap {
                 pub x: f32,
                 pub y: f32,
                 pub z: f32
             }
    
    
              let shader = include_wgsl!("./compute_calc_vis.wgsl");
              let shader = engine.device.create_shader_module(shader);
    
              let data = (0..28).map(|_| Vec3Wrap {
                  x: 0.0,
                  y: 5.0,
                  z: 0.0,
              }).collect::<Vec<_>>();
    
              let mut buf = encase::StorageBuffer::new(Vec::new());
              buf.write(&data).unwrap();
              let byte_buffer = buf.into_inner();
    
              let input_buffer = engine.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
                  label: Some("Input Buffer"),
                  contents: bytemuck::cast_slice(byte_buffer.as_slice()),
                  usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_DST
              });
    
              let output_gpu_buffer = engine.device.create_buffer(&wgpu::BufferDescriptor {
                  label: Some("Output Buffer"),
                  size: byte_buffer.len() as _,
                  usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
                  mapped_at_creation: false,
              });
    
              let mapping_buffer = engine.device.create_buffer(&wgpu::BufferDescriptor {
                  label: Some("Mapping Buffer"),
                  size: byte_buffer.len() as _,
                  usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
                  mapped_at_creation: false,
              });
    
              let compute_pipeline = engine.device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
                  label: None,
                  // layout: Some(&pipeline_layout),
                  layout: None,
                  module: &shader,
                  entry_point: "main",
              });
    
              let bind_group_layout = compute_pipeline.get_bind_group_layout(0);
              let pipeline_layout = engine.device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
                  label: None,
                  bind_group_layouts: &[&bind_group_layout],
                  push_constant_ranges: &[],
              });
    
              let bind_group = engine.device.create_bind_group(&wgpu::BindGroupDescriptor {
                  label: None,
                  layout: &bind_group_layout,
                  entries: &[
                      wgpu::BindGroupEntry {
                          binding: 0,
                          resource: input_buffer.as_entire_binding(),
                      },
                      wgpu::BindGroupEntry {
                          binding: 1,
                          resource: output_gpu_buffer.as_entire_binding(),
                      },
                  ],
              });
    
              let mut encoder = engine.device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    
              {
                  let mut cpass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
                  cpass.set_pipeline(&compute_pipeline);
                  cpass.set_bind_group(0, &bind_group, &[]);
                  cpass.dispatch_workgroups(data.len() as u32, 1, 1);
              }
    
              encoder.copy_buffer_to_buffer(&output_gpu_buffer, 0, &mapping_buffer, 0, data.len() as _);
    
              engine.queue.submit(core::iter::once(encoder.finish()));
    
              let output_slice = mapping_buffer.slice(..);
              output_slice.map_async(wgpu::MapMode::Read, |_| {});
    
              engine.device.poll(wgpu::Maintain::Wait);
    
              let output = output_slice.get_mapped_range().to_vec();
              mapping_buffer.unmap();
    
              let ob = StorageBuffer::new(output);
              let out_val: Vec<Vec3Wrap> = ob.create().unwrap();
    
              info!("compute values:");
              for x in out_val.iter() {
                  info!("x: {}, y: {}, z: {}", x.x,x.y, x.z);
              }
    
    
    
    INFO [..] x: 7.2, y: 2.2, z: 1.2
    INFO [..] x: 7.2, y: 2.2, z: 1.2
    INFO [..] x: 7.2, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    INFO [..] x: 0, y: 0, z: 0
    
    
    opened by kaphula 2
  • Switch to nightly_coverage to fix stable coverage runs

    Closes #18

    I also tried to make some of the tests pass when no feature flags are passed to cargo test on stable, but wasn't sure what to do about wrappers.rs, which fails with:

    test tests/pass/wrappers.rs [should pass] ... error
    ┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
    error[E0554]: `#![feature]` may not be used on the stable release channel
     --> tests/pass/wrappers.rs:1:1
      |
    1 | #![feature(trivial_bounds)]
      | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    

    Probably could address the mint usage in a more targeted fashion instead of completely killing the file. Happy to tweak this (or revert it entirely).

    The basic test plan I've been using is cargo llvm-cov both on this repo and in a bevy project I have patched to point to my PR.

    opened by emarcotte 2
  • Switch to `coverage_nightly` ?

    I was trying to run coverage tests on something that depends on this crate and noticed that it fails with:

    error[E0554]: `#![feature]` may not be used on the stable release channel
     --> C:\Users\Eugene\.cargo\registry\src\github.com-1ecc6299db9ec823\encase-0.4.0\src\lib.rs:2:23
      |
    2 | #![cfg_attr(coverage, feature(no_coverage))]
      |   
    

    According to https://github.com/taiki-e/cargo-llvm-cov#exclude-function-from-coverage this could be solved by a command line flag or a change to checking for coverage_nightly instead of coverage. Would a PR be acceptable to switch to coverage_nightly or is it preferable to use the command line flag for llvm-cov?

    opened by emarcotte 1
  • Properly deal with over-aligned T's in dynamic buffers

    This PR enables two different use cases (the first arguably fixes a bug).

    1. It is theoretically possible that the default alignment for https://docs.rs/wgpu/latest/wgpu/struct.Limits.html#structfield.min_storage_buffer_offset_alignment is lower than the alignment of the most-aligned structure, DVec4.
    2. This lets you use the dynamic storage buffer API to build dynamically defined structures. You can set the desired alignment to 1, then use each "element" in the dynamic storage buffer as a member of a different type and it will automatically lay things out correctly (see the sketch below).
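
    A minimal sketch of the second use case (illustrative only: the member names are made up, and it assumes an alignment of 1 is accepted and that each write still respects the written type's own alignment):

    use encase::DynamicStorageBuffer;

    let mut byte_buffer = Vec::new();

    // with an alignment of 1, only each type's own layout rules constrain the offsets
    let mut buffer = DynamicStorageBuffer::new_with_alignment(&mut byte_buffer, 1);

    // treat each write as one "member" of a dynamically defined struct
    let count_offset = buffer.write(&2u32).unwrap();
    let color_offset = buffer.write(&glam::Vec3::ONE).unwrap();
    let weights_offset = buffer.write(&[0.5f32; 4]).unwrap();

    // the returned offsets describe where each member ended up in byte_buffer
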
    opened by cwfitzgerald 1
  • Update `glam` to 0.21

    Currently, glam is at 0.21. I tried updating encase, and there are no compile errors when upgrading directly, except that "std" has to be added to the "features".

    opened by CGMossa 0
  • Use Encase's Layouting Independent of Uploading

    In rend3's vertex pulling rewrite I need to deal with sparse updates of buffers a lot. Because of that, I'm not actually writing out the entire buffer all at once; I end up uploading the individual items that changed at the byte offsets in the buffer where they are supposed to be.

    I would love to have a definition like this:

    #[derive(ShaderType)]
    pub struct DirectionalLightShader {
        header: DirectionalLightHeader,
        #[runtime]
        lights: Vec<DirectionalLight>,
    }
    

    But because I can't upload the whole thing at once, I can't actually just use WriteInto here. What I would love is an API that would give me an offset into the structure for a given member. Maybe something like

    // Raw value because it's just a plain member
    let offset: u64 = DirectionalLightShader::Offsets::HEADER;
    // Vectors require you to give the index to get the full offset
    let offset: u64 = DirectionalLightShader::Offsets::LIGHTS::index(2);
    

    This would let me use encase's layout even though I need my own custom weird layout system. Not at all attached to the syntax, just what it would let me accomplish.

    opened by cwfitzgerald 2
  • Add support for Rust's unstable allocator-api on Vec

    This PR adds feature-gated support for rust's new(ish) allocator api on Vec (see https://github.com/rust-lang/rust/issues/32838 & https://doc.rust-lang.org/unstable-book/library-features/allocator-api.html).

    This enables support for custom allocators, which are often needed in video games & high-performance projects, and makes it possible for libraries supporting the API to use encase with user-provided Vecs. I haven't added support for Box<T>, which can now also take a custom allocator, as that feature still sometimes causes ICEs.

    As for the feature name, I chose allocator_api, as that's also the name of Rust's feature (and it's disabled by default).

    This takes the form of an additional bound on Vecs, which now look like Vec<T, A> where A: std::alloc::Allocator.

    opened by aabizri 3
  • OpenGL/WebGl std140 support?

    Hey hi!

    I am investigating a lib that helps me set up my buffers using std140 easily. I am currently using crevice, but I am having some trouble with shaders that require sized arrays or dynamically sized types, like:

    struct PointLight {
       a: Vec3,
       b: Vec3,
       c: Vec3,
       ...
    }
    
    struct Material {
       attr: f32,
       attr2: Vec3,
       lights: Vec<PointLight>, // or [PointLight; 4]
    }
    

    So my question is: is OpenGL/std140 supported by encase? I see that the readme talks about WGPU only. If so, are these kinds of structs supported? This is exactly the code that I am trying to make work with crevice: https://github.com/Nazariglez/LearnOpenGL-Notan/blob/main/src/_2_lighting/_6_1_multiple_lights.rs#L264

    Here #[uniform] just adds AsStd140 and Uniform to control the input type when setting the buffer's data.

    If encase supports this, it would be great.

    opened by Nazariglez 2
  • Support variable-sized arrays in uniform buffers via `arrayvec::ArrayVec`/`tinyvec::ArrayVec`

    Since the arrayvec::ArrayVec/tinyvec::ArrayVec types have a hard cap on the items they contain, the capacity can be used on the shader side as the fixed-size array length.

    Besides being able to use these Vec-like data structures, this will also allow this trick to work (we should however note somewhere that reading beyond the actual length of items that were written will effectively return garbage data).

    enhancement 
    opened by teoxoy 0
  • Vertex buffers

    I am currently trying to use encase with wgpu, but got a bit confused when I tried to use it for vertex buffers.

    As far as I could tell, the only difference between StorageBuffer and UniformBuffer is that UniformBuffer calls assert_uniform_compat.

    So if I need to make a vertex buffer, am I supposed to just use StorageBuffer? (worked fine so far by the way)
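
    A minimal sketch of that approach (the Vertex struct is purely illustrative; whether encase's WGSL layout matches the vertex attribute layout declared on the wgpu side still has to be checked per type, though here both are 16 tightly packed bytes per vertex):

    use encase::{WgslType, StorageBuffer};

    #[derive(WgslType)]
    struct Vertex {
        position: mint::Point2<f32>,
        uv: mint::Vector2<f32>,
    }

    let vertices = vec![
        Vertex {
            position: mint::Point2 { x: 0.0, y: 0.0 },
            uv: mint::Vector2 { x: 0.0, y: 1.0 },
        },
        Vertex {
            position: mint::Point2 { x: 1.0, y: 0.0 },
            uv: mint::Vector2 { x: 1.0, y: 1.0 },
        },
    ];

    let mut buffer = StorageBuffer::new(Vec::new());
    buffer.write(&vertices).unwrap();
    let byte_buffer = buffer.into_inner();

    // byte_buffer can now be uploaded with wgpu::BufferUsages::VERTEX and described
    // with a matching wgpu::VertexBufferLayout (two float32x2 attributes here)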

    enhancement 
    opened by daxpedda 3