The write-once-run-anywhere GPGPU library for Rust

The old version of Emu (which used macros) is here.

Overview

Emu is a GPGPU library for Rust with a focus on portability, modularity, and performance.

It's a CUDA-esque, compute-focused abstraction over WebGPU that provides functionality to make WebGPU feel more like CUDA. Here's a quick run-down of the highlight features...

  • Emu can run anywhere - Emu uses WebGPU to support DirectX, Metal, and Vulkan (and, eventually, OpenGL and the browser) as compile targets. This allows Emu to run on pretty much any platform, including desktop, mobile, and browser. By moving heavy computations to the user's device, you can reduce system latency and improve privacy.

  • Emu makes compute easier - Emu makes WebGPU feel like CUDA. It does this by providing...

    • DeviceBox<T> as a wrapper for data that lives on the GPU (thereby ensuring type-safe data movement)
    • DevicePool as a no-config auto-managed pool of devices (similar to CUDA)
    • trait Cache - a no-setup-required LRU cache of JITed compute kernels.
  • Emu is transparent - Emu is a fully transparent abstraction. This means that, at any point, you can decide to remove the abstraction and work directly with WebGPU constructs, with zero overhead. For example, you can mix Emu with WebGPU-based graphics at no extra cost. You can also swap out the JIT compiler artifact cache with your own cache, manage the device pool yourself, and define your own compile-to-SPIR-V compiler that interops with Emu.

  • Emu is asynchronous - Emu is fully asynchronous. Most API calls are non-blocking and can be synchronized by a call to DeviceBox::get when data is read back from the device.

An example

Here's a quick example of Emu. You can find more in emu_core/examples and most recent documentation here.

First, we just import a bunch of stuff

use emu_glsl::*;
use emu_core::prelude::*;
use zerocopy::*;

We can define structs so that they can be safely serialized to and deserialized from the GPU.

#[repr(C)]
#[derive(AsBytes, FromBytes, Copy, Clone, Default, Debug)]
struct Rectangle {
    x: u32,
    y: u32,
    w: i32,
    h: i32,
}

For this example, we make the entire function async. In reality, you will only want small blocks of code to be async (such as a batch of asynchronous memory transfers and computation), and those blocks will be sent off to an executor to execute. You definitely don't want to block inside async code the way we do here (by running an entire compilation step).

async fn do_some_stuff() -> Result<(), Box<dyn std::error::Error>> {
    assert_device_pool_initialized().await;

    // first, we move a bunch of rectangles to the GPU
    let mut x: DeviceBox<[Rectangle]> = vec![Default::default(); 128].as_device_boxed()?;
    
    // then we compile some GLSL code using the GlslCompile compiler and
    // the GlobalCache for caching compiler artifacts
    let c = compile::<String, GlslCompile, _, GlobalCache>(
        GlslBuilder::new()
            .set_entry_point_name("main")
            .add_param_mut()
            .set_code_with_glsl(
            r#"
#version 450
layout(local_size_x = 1) in; // our thread block size is 1, that is we only have 1 thread per block

struct Rectangle {
    uint x;
    uint y;
    int w;
    int h;
};

// make sure to use only a single set and keep all your n parameters in n storage buffers in bindings 0 to n-1
// you shouldn't use push constants or anything OTHER than storage buffers for passing stuff into the kernel
// just use buffers with one buffer per binding
layout(set = 0, binding = 0) buffer Rectangles {
    Rectangle[] rectangles;
}; // this is used as both input and output for convenience

Rectangle flip(Rectangle r) {
    r.x = r.x + r.w;
    r.y = r.y + r.h;
    r.w *= -1;
    r.h *= -1;
    return r;
}

// there should be only one entry point and it should be named "main"
// ultimately, Emu has to kind of restrict how you use GLSL because it is compute focused
void main() {
    uint index = gl_GlobalInvocationID.x; // this gives us the index in the x dimension of the thread space
    rectangles[index] = flip(rectangles[index]);
}
            "#,
        )
    )?.finish()?;
    
    // we spawn 128 threads (really 128 thread blocks)
    unsafe {
        spawn(128).launch(call!(c, &mut x))?;
    }

    // this is the Future we need to await to get stuff to happen
    // everything else is non-blocking in the API (except stuff like compilation)
    println!("{:?}", x.get().await?);

    Ok(())
}

And last but certainly not least, we use an executor to execute.

fn main() {
    futures::executor::block_on(do_some_stuff()).expect("failed to do stuff on GPU");
}

Built with Emu

Emu is relatively new but has already been used for GPU acceleration in a variety of projects.

  • Used in toil for GPU-accelerated linear algebra
  • Used in ipl3hasher for hash collision finding
  • Used in bigbang for simulating gravitational acceleration (used older version of Emu)

Getting started

The latest stable version is on Crates.io. To start using Emu, simply add the following lines to your Cargo.toml.

[dependencies]
emu_core = "0.1.1"

To understand how to start using Emu, check out the docs. If you have any questions, please ask in the Discord.

Contributing

Feedback, discussion, and PRs would all be very much appreciated. Some relatively high-priority, non-API-breaking things that have yet to be implemented are listed below, in rough order of priority.

  • Ensure that WebGPU polling is done correctly in `DeviceBox::get`
  • Add support for WGSL as input, use Naga for shader compilation
  • Add WASM support in Cargo.toml
  • Add benchmarks
  • Reuse staging buffers between different DeviceBoxes
  • Maybe use uniforms for DeviceBox<T> when T is small

If you are interested in any of these or anything else, please don't hesitate to open an issue on GitHub or discuss more on Discord.

Comments
  • OS X example build failed

    Hello, I am trying the example:

    #[macro_use]
    extern crate em;
    use em::*;
    
    #[gpu_use]
    fn main() {
        let mut x = vec![0.0; 1000];
    
        gpu_do!(load(x)); // move data to the GPU
        
        gpu_do!(launch()); // off-load to run on the GPU
    
        for i in 0..1000 {
            x[i] = x[i] * 10.0;
        }
    
        gpu_do!(read(x)); // move data back from the GPU
        
        println!("{:?}", x);
    }
    

    Here's the error.

       ...
       Compiling emu_macro v0.1.0
    error[E0277]: the trait bound `syn::Expr: std::convert::From<quote::__rt::TokenStream>` is not satisfied
       --> /Users/mrrobb/.cargo/registry/src/github.com-1ecc6299db9ec823/emu_macro-0.1.0/src/passing.rs:337:50
        |
    337 |                     let gpu_ident = quote! {gpu}.into();
        |                                                  ^^^^ the trait `std::convert::From<quote::__rt::TokenStream>` is not implemented for `syn::Expr`
        |
        = help: the following implementations were found:
                  <syn::Expr as std::convert::From<syn::ExprArray>>
                  <syn::Expr as std::convert::From<syn::ExprAssign>>
                  <syn::Expr as std::convert::From<syn::ExprAssignOp>>
                  <syn::Expr as std::convert::From<syn::ExprAsync>>
                and 35 others
        = note: required because of the requirements on the impl of `std::convert::Into<syn::Expr>` for `quote::__rt::TokenStream`
    
    error: aborting due to previous error
    
    For more information about this error, try `rustc --explain E0277`.
    error: Could not compile `emu_macro`.
    warning: build failed, waiting for other jobs to finish...
    error: build failed
    

    Am I doing something wrong? Thank you.

    opened by MrRobb 12
  • Compiling error

    Hi everyone! This project is awesome. I was trying to run the example code, but I got this error in the passing.rs file: `let gpu_ident = quote! {gpu}.into();` -> the trait `std::convert::From<quote::__private::TokenStream>` is not implemented for `syn::Expr`. Can someone help me understand how this works? P.S. My OS is Windows 10 64-bit, my Rust version is 1.42.0, and I have an OpenCL driver installed (my video card is an Nvidia GeForce 1070).

    opened by PerfectStepCoder 8
  • How a user runs an Emu function

    A function in Emu operates on a "work-item" (work-item is a term OpenCL uses; I loosely use it here but we can refer to it differently if we come up with a better name).

    multiply(global_buffer [f32], scalar f32) {
    	global_buffer[get_global_id(0)] *= scalar;
    }
    

    With the above function, a work-item corresponds to a particular index in global_buffer. So the work can be thought of as a 1d grid with dimensions equal to the length of global_buffer. Let's consider another function.

    multiply_matrices(m i32, n i32, k i32, global_a [f32], global_b [f32], global_c [f32]) {
    	let row: i32 = get_global_id(0);
    	let col: i32 = get_global_id(1);
    
    	let acc: f32 = 0.0;
     
    	for i in 0..k {
    		acc += global_a[i*m + row] * global_b[col*k + i];
    	}
         
    	global_c[col * m + row] = acc;
    }
    

    When this function is run, a work-item corresponds to a pair of indices - one in global_a and one in global_b. So the work in this case is a 2d grid whose total size is the product of the lengths of global_a and global_b.

    Now here's the thing - both of these functions can be ultimately run with a binding to OpenCL. But only the first function can be run with the build! macro. This is because functions you intend to run with the build! macro operate on 1d grids of work where the dimension is by default the length of the first parameter to the function.

    This is an important thing to note and I think it can help us answer the following key questions.

    • How should Emu functions be ultimately called by a user?
    • How should a user be using get_global_id()?
    • A user has a bunch of data - how do we support mapping and filtering and reducing?
    enhancement 
    opened by calebwin 8
  • Structs as input to functions

    The only things Emu functions can really accept right now are vectors (technically arrays/pointers) and primitive data (f32 or i32). Simple structures could be accepted with 2 changes.

    • A change to the language, so you can declare what kind of structs you accept and how to unpackage primitive data from them in the declaration of the Emu function
    • A change to the build! macro, to generate a function that can accept structs of a certain type and unpackage them into primitive data to send to the Emu function

    Before these changes are implemented, we should think about how the general interface an Emu user sees should change. How should they pass structs to functions in a way that is most seamless?

    enhancement 
    opened by calebwin 8
  • Example fails to compile, because of the quote crate

    Hello,

    Your project looks very interesting and I wanted to give it a go. I copied your example code from the README and I couldn't compile it. After solving the issue in ticket #27 by using: em = { git = "https://github.com/calebwin/emu", branch = "dev" }

    I get the below output when doing cargo check:

    error[E0433]: failed to resolve: could not find `__rt` in `quote`
       --> C:\Users\Pedro\.cargo\git\checkouts\emu-7973979264d9dc07\095942b\emu_macro\src\accelerating.rs:123:66
        |
    123 | ...                   .is_ident(&Ident::new("load", quote::__rt::Span::call_site()))
        |                                                            ^^^^ could not find `__rt` in `quote`
    
    error[E0433]: failed to resolve: could not find `__rt` in `quote`
       --> C:\Users\Pedro\.cargo\git\checkouts\emu-7973979264d9dc07\095942b\emu_macro\src\accelerating.rs:169:66
        |
    169 | ...                   .is_ident(&Ident::new("read", quote::__rt::Span::call_site()))
        |                                                            ^^^^ could not find `__rt` in `quote`
    
    error[E0433]: failed to resolve: could not find `__rt` in `quote`
       --> C:\Users\Pedro\.cargo\git\checkouts\emu-7973979264d9dc07\095942b\emu_macro\src\accelerating.rs:193:68
        |
    193 | ...                   .is_ident(&Ident::new("launch", quote::__rt::Span::call_site()))
        |                                                              ^^^^ could not find `__rt` in `quote`
    
    error[E0433]: failed to resolve: could not find `__rt` in `quote`
       --> C:\Users\Pedro\.cargo\git\checkouts\emu-7973979264d9dc07\095942b\emu_macro\src\accelerating.rs:259:64
        |
    259 |                     let ident = Ident::new(&param.name, quote::__rt::Span::call_site());
        |                                                                ^^^^ could not find `__rt` in `quote`
    

    I tried hard-setting the 'quote' crate to a specific version (1.0.1) and got the following message:

    error: failed to select a version for `quote`.
        ... required by package `emu_macro v0.1.0 (https://github.com/calebwin/emu?branch=dev#095942ba)`
        ... which is depended on by `em v0.3.0 (https://github.com/calebwin/emu?branch=dev#095942ba)`
        ... which is depended on by `emu-test v0.1.0 (D:\Code\Rust\emu-test)`
    versions that meet the requirements `^1.0.2` are: 1.0.3
    
    all possible versions conflict with previously selected packages.
    
      previously selected package `quote v1.0.1`
        ... which is depended on by `emu-test v0.1.0 (D:\Code\Rust\emu-test)`
    
    failed to select a version for `quote` which could resolve this conflict
    

    Setting quote to 1.0.3 doesn't solve the issue, but it's interesting that 1.0.2 seems to have been yanked from crates.io. Is it possible emu_macro depends on code that is no longer present in 1.0.3?

    opened by palmada 7
  • Unable to get platform id list after 10 seconds of waiting

    This code waits 10 seconds and prints the error:

    thread 'main' panicked at 'Platform::default(): Unable to get platform id list after 10 seconds of waiting.', src/libcore/result.rs:999:5
    note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
    

    Code:

    use em::{build, emu};
    
    extern crate ocl;
    use ocl::{flags, Buffer, Context, Device, Kernel, Platform, Program, Queue};
    
    emu! {
        multiply(global_vector [f32], scalar f32) {
            global_vector[get_global_id(0)] *= scalar;
        }
    }
    
    build! { multiply [f32] f32 }
    
    fn main() {
        let args = std::env::args().collect::<Vec<String>>();
        if args.len() < 3 {
            panic!("cargo run -- <SCALAR> <NUMBERS>...");
        }
    
        let scalar = args[1].parse::<f32>().unwrap();
    
        let vector = args[2..]
            .into_iter()
            .map(|string| string.parse::<f32>().unwrap())
            .collect();
    
        let result = multiply(vector, scalar).unwrap();
        dbg!(result);
    }
    

    My operating system is Ubuntu and I installed OpenCL by the following commands:

    $ sudo apt update
    $ sudo apt install ocl-icd-opencl-dev
    
    bug 
    opened by Hirrolot 5
  • README instructions have various issues

    As mentioned in #31 , I'm migrating my project to use the new emu_core GLSL abstraction layer and I am encountering a few documentation issues getting started.

    In the dependencies section, it is said that this is how you add emu to a project:

    emu_core = {
        git = "https://github.com/calebwin/emu/tree/master/emu_core.git",
        rev = "265d2a5fb9292e2644ae4431f2982523a8d27a0f"
    }
    

    Newlines inside of a dep key in Cargo are invalid, and also this isn't a valid Git URL. Currently, the only way I see to use emu_core is to clone the whole emu repository and then use a path dependency. In workspace setups, typically the individual crates are published to crates.io individually, avoiding this issue.

    Additionally, there's a discord link at the bottom of the README that I was going to use to address the above question, but the Discord invite is invalid.

    The only person who can fix these issues would be @calebwin so that's why I'm filing an issue instead of a PR. Thanks again for the crate -- I'll be using it with the cloned dependency for now.

    opened by sezna 4
  • Support for structs and typechecking?

    I wanted to ping you and make you aware of this project having raised quite some interest on reddit. There are two interesting questions, one about potential struct support, and the other about type checking/inference before translation.

    Maybe they are of interest to you? See the reddit thread here: https://www.reddit.com/r/rust/comments/bvwvpd/emu_gpu_programming_language_for_rust/

    opened by SuperFluffy 4
  • How to pass a 2D array of floats?

    How can I pass a 2D array of floats?

    Preparing any sort of DeviceBox from a Vec<Vec<f32>> is a no-go it seems.

    The dimensions of the vector are compile-time constants from the perspective of GLSL (they get formatted in). The dimensions are determined at runtime on the rust side.

    Can I just flatten into a single buffer and the GLSL code won't notice?

    opened by wbrickner 2
  • coreaudio-sys AudioUnit compile error on Windows & Linux

    I get this in WSL2 Debian on Windows

    error[E0455]: native frameworks are only available on macOS targets
        --> /home/walther/.cargo/registry/src/github.com-1ecc6299db9ec823/coreaudio-sys-0.1.2/src/audio_unit.rs:6380:1
         |
    6380 | #[link(name = "AudioUnit", kind = "framework")]
         | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    error[E0455]: native frameworks are only available on macOS targets
        --> /home/walther/.cargo/registry/src/github.com-1ecc6299db9ec823/coreaudio-sys-0.1.2/src/audio_unit.rs:6739:1
         |
    6739 | #[link(name = "AudioUnit", kind = "framework")]
         | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    error: aborting due to 2 previous errors
    
    For more information about this error, try `rustc --explain E0455`.
    error: could not compile `coreaudio-sys`.
    

    As well as on Windows itself.

    opened by Walther 2
  • Initial overhead

    Hello !

    First, thanks for this crate and your contribution to the Rust community. This is amazingly simple to use, even for me, who has never touched OpenCL.

    I tried running a simple benchmark program and I found the result unsatisfying. I suppose this example is so trivial that the initial cost of initializing the OpenCL environment on each call dominates and slows down the entire function call.

    The program (based on your example) is:

    #![feature(test)]
    
    extern crate em;
    extern crate ocl;
    
    extern crate test;
    
    use em::emu;
    
    emu! {
        function logistic(x [f32]) {
            x[..] = 1 / (1 + pow(E, -x[..]));
        }
    
        pub fn logistic(x: &mut Vec<f32>);
    }
    
    pub fn logistic_cpu(x: &mut Vec<f32>) {
        let mut result = Vec::new();
    
        for value in x {
            result.push(1.0 / (1.0 + 2.71828182846_f32.powf(-*value)))
        }
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
        use test::Bencher;
    
        #[bench]
        fn logistic_opencl(b: &mut Bencher) {
            let mut test_data = vec![0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16];
            b.iter(|| logistic(&mut test_data));
            println!("OpenCL : {:?}", test_data);
        }
    
        #[bench]
        fn logistic_non_opencl(c: &mut Bencher) {
            let mut test_data = vec![0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16, 81.20, -16.0, 0.9, 4.9, 4.8, 3.9, 1.3, 4.8, 9.13, -0.16];
            c.iter(|| logistic_cpu(&mut test_data));
            println!("non OpenCL : {:?}", test_data);
        }
    }
    

    And the result is:

    test tests::logistic_non_opencl ... bench:         561 ns/iter (+/- 66)
    test tests::logistic_opencl     ... bench:  72,081,552 ns/iter (+/- 4,863,815)
    
    

    My initial intention was to write a recurrent network as efficiently as possible. Do you think using Emu is a good choice?

    opened by hugues31 2
  • Example panics at runtime (`COPY_DST` flag)

    Hello, running the compute example:

    use emu_core::prelude::*;
    use emu_glsl::*;
    use zerocopy::*;
    
    #[repr(C)]
    #[derive(AsBytes, FromBytes, Copy, Clone, Default, Debug, GlslStruct)]
    struct Shape {
      x: u32,
      y: u32,
      w: i32,
      h: i32,
      r: [i32; 2],
    }
    
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // ensure that a device pool has been initialized
        // this should be called before every time when you assume you have devices to use
        // that goes for both library users and application users
        futures::executor::block_on(assert_device_pool_initialized());
    
        println!("{:?}", take()?.lock().unwrap().info.as_ref().unwrap());
    
        // create some data on GPU
        // even mutate it once loaded to GPU
        let mut shapes: DeviceBox<[Shape]> = vec![Default::default(); 1024].as_device_boxed_mut()?;
        let mut x: DeviceBox<[i32]> = vec![0; 1024].as_device_boxed_mut()?;
        shapes.set(vec![
            Shape {
                x: 0,
                y: 0,
                w: 100,
                h: 100,
                r: [2, 9]
            };
            1024
        ])?;
    
        // compile GslKernel to SPIR-V
        // then, we can either inspect the SPIR-V or finish the compilation by generating a DeviceFnMut
        // then, run the DeviceFnMut
        let c = compile::<GlslKernel, GlslKernelCompile, Vec<u32>, GlobalCache>(
            GlslKernel::new()
                .spawn(64)
                .share("float stuff[64]")
                .param_mut::<[Shape], _>("Shape[] shapes")
                .param_mut::<[i32], _>("int[] x")
                .param::<i32, _>("int scalar")
                .with_struct::<Shape>()
                .with_const("int c", "7")
                .with_helper_code(
                    r#"
    Shape flip(Shape s) {
        s.x = s.x + s.w;
        s.y = s.y + s.h;
        s.w *= -1;
        s.h *= -1;
        s.r = ivec2(5, 3);
        return s;
    }
    "#,
        )
        .with_kernel_code(
            "shapes[gl_GlobalInvocationID.x] = flip(shapes[gl_GlobalInvocationID.x]); x[gl_GlobalInvocationID.x] = scalar + c + int(gl_WorkGroupID.x);",
        ),
    )?.finish()?;
        unsafe {
            spawn(16).launch(call!(c, &mut shapes, &mut x, &DeviceBox::new(10)?))?;
        }
    
        // download from GPU and print out
        println!("{:?}", futures::executor::block_on(shapes.get())?);
        println!("{:?}", futures::executor::block_on(x.get())?);
        Ok(())
    }
    
    $ cargo run
    

    yields

        Finished dev [unoptimized + debuginfo] target(s) in 0.44s
         Running `target/debug/emu_test`
    Limits {
        max_bind_groups: 4,
        max_dynamic_uniform_buffers_per_pipeline_layout: 8,
        max_dynamic_storage_buffers_per_pipeline_layout: 4,
        max_sampled_textures_per_shader_stage: 16,
        max_samplers_per_shader_stage: 16,
        max_storage_buffers_per_shader_stage: 4,
        max_storage_textures_per_shader_stage: 4,
        max_uniform_buffers_per_shader_stage: 12,
        max_uniform_buffer_binding_size: 16384,
        max_push_constant_size: 0,
    }
    { name: "Intel(R) Iris(TM) Plus Graphics 655", vendor_id: 0, device_id: 0, device_type: IntegratedGpu }
    wgpu error: Validation Error
    
    Caused by:
        In CommandEncoder::copy_buffer_to_buffer
        Copy error
        destination buffer/texture is missing the `COPY_DST` usage flag
          note: destination = `<Buffer-(4, 1, Metal)>`
    
    
    thread 'main' panicked at 'Handling wgpu errors as fatal by default', /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.7.0/src/backend/direct.rs:1896:5
    stack backtrace:
       0: std::panicking::begin_panic
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/std/src/panicking.rs:616:12
       1: wgpu::backend::direct::default_error_handler
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.7.0/src/backend/direct.rs:1896:5
       2: core::ops::function::Fn::call
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/ops/function.rs:70:5
       3: <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/alloc/src/boxed.rs:1875:9
       4: wgpu::backend::direct::ErrorSinkRaw::handle_error
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.7.0/src/backend/direct.rs:1883:9
       5: wgpu::backend::direct::Context::handle_error
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.7.0/src/backend/direct.rs:109:9
       6: wgpu::backend::direct::Context::handle_error_nolabel
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.7.0/src/backend/direct.rs:121:9
       7: <wgpu::backend::direct::Context as wgpu::Context>::command_encoder_copy_buffer_to_buffer
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.7.0/src/backend/direct.rs:1542:13
       8: wgpu::CommandEncoder::copy_buffer_to_buffer
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.7.0/src/lib.rs:1954:9
       9: emu_core::device::Device::get::{{closure}}
                 at /Users/wbrickner/.cargo/git/checkouts/emu-7973979264d9dc07/9fe3db3/emu_core/src/device.rs:391:9
      10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/future/mod.rs:91:19
      11: emu_core::boxed::<impl emu_core::device::DeviceBox<[T]>>::get::{{closure}}
                 at /Users/wbrickner/.cargo/git/checkouts/emu-7973979264d9dc07/9fe3db3/emu_core/src/boxed.rs:298:23
      12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/future/mod.rs:91:19
      13: futures_executor::local_pool::block_on::{{closure}}
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/local_pool.rs:315:23
      14: futures_executor::local_pool::run_executor::{{closure}}
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/local_pool.rs:90:37
      15: std::thread::local::LocalKey<T>::try_with
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/std/src/thread/local.rs:442:16
      16: std::thread::local::LocalKey<T>::with
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/std/src/thread/local.rs:418:9
      17: futures_executor::local_pool::run_executor
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/local_pool.rs:86:5
      18: futures_executor::local_pool::block_on
                 at /Users/wbrickner/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/local_pool.rs:315:5
      19: emu_test::main
                 at ./src/main.rs:71:22
      20: core::ops::function::FnOnce::call_once
                 at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/ops/function.rs:227:5
    note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
    

    My understanding is that buffers must have their usage declared correctly (with some amount of detail) at construction time through wgpu.

    opened by wbrickner 3
  • `.finish()` stage of the shader compilation segfaults on NVIDIA Vulkan driver

    I am running on Ubuntu 22.04 with emu_core 0.1.1, info()?.name is "NVIDIA GeForce RTX 3050 Ti Laptop GPU", the driver is version 515 of the official NVIDIA Linux driver, installed through APT.

    The problem seems to be related to the presence of a storage buffer, the one called prec_mat: if I remove it in both in the shader and in the SpirvBuilder, the issue does not manifest. I am using rust-gpu to write my shader. Note that if my integrated AMD GPU is selected, the code runs fine.

    Below is a comprehensive stack trace:

    ___lldb_unnamed_symbol462 (@___lldb_unnamed_symbol462:301)
    ___lldb_unnamed_symbol11106 (@___lldb_unnamed_symbol11106:2200)
    ___lldb_unnamed_symbol11107 (@___lldb_unnamed_symbol11107:19)
    ___lldb_unnamed_symbol16036 (@___lldb_unnamed_symbol16036:120)
    ___lldb_unnamed_symbol11528 (@___lldb_unnamed_symbol11528:60)
    ___lldb_unnamed_symbol11308 (@___lldb_unnamed_symbol11308:258)
    _nv002nvvm (@_nv002nvvm:11)
    ___lldb_unnamed_symbol58166 (@___lldb_unnamed_symbol58166:66)
    ___lldb_unnamed_symbol58168 (@___lldb_unnamed_symbol58168:583)
    ___lldb_unnamed_symbol58169 (@___lldb_unnamed_symbol58169:146)
    ___lldb_unnamed_symbol58181 (@___lldb_unnamed_symbol58181:164)
    ___lldb_unnamed_symbol58182 (@___lldb_unnamed_symbol58182:8)
    ___lldb_unnamed_symbol58172 (@___lldb_unnamed_symbol58172:148)
    ___lldb_unnamed_symbol58204 (@___lldb_unnamed_symbol58204:91)
    ___lldb_unnamed_symbol57964 (@___lldb_unnamed_symbol57964:70)
    ___lldb_unnamed_symbol57965 (@___lldb_unnamed_symbol57965:28)
    ash::vk::features::DeviceFnV1_0::create_compute_pipelines (/home/mikidep/.cargo/registry/src/github.com-1ecc6299db9ec823/ash-0.31.0/src/vk/features.rs:5094)
    gfx_backend_vulkan::device::<impl gfx_hal::device::Device<gfx_backend_vulkan::Backend> for gfx_backend_vulkan::Device>::create_compute_pipeline (/home/mikidep/.cargo/registry/src/github.com-1ecc6299db9ec823/gfx-backend-vulkan-0.5.11/src/device.rs:1044)
    wgpu_core::device::<impl wgpu_core::hub::Global<G>>::device_create_compute_pipeline (/home/mikidep/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-core-0.5.6/src/device/mod.rs:1932)
    wgpu_device_create_compute_pipeline (/home/mikidep/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-native-0.5.1/src/device.rs:347)
    wgpu::Device::create_compute_pipeline (/home/mikidep/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.5.2/src/lib.rs:906)
    emu_core::device::Device::compile (/home/mikidep/.cargo/registry/src/github.com-1ecc6299db9ec823/emu_core-0.1.1/src/device.rs:611)
    emu_core::compile::SpirvOrFinished<P,C>::finish (/home/mikidep/.cargo/registry/src/github.com-1ecc6299db9ec823/emu_core-0.1.1/src/compile.rs:305)
    scene_emu::main (/home/mikidep/Documenti/Codice/scene-emu/src/main.rs:104)
    core::ops::function::FnOnce::call_once (@core::ops::function::FnOnce::call_once:6)
    std::sys_common::backtrace::__rust_begin_short_backtrace (@std::sys_common::backtrace::__rust_begin_short_backtrace:6)
    std::rt::lang_start::{{closure}} (@std::rt::lang_start::{{closure}}:7)
    core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once (@std::rt::lang_start_internal:184)
    std::panicking::try::do_call (@std::rt::lang_start_internal:183)
    std::panicking::try (@std::rt::lang_start_internal:183)
    std::panic::catch_unwind (@std::rt::lang_start_internal:183)
    std::rt::lang_start_internal::{{closure}} (@std::rt::lang_start_internal:183)
    std::panicking::try::do_call (@std::rt::lang_start_internal:183)
    std::panicking::try (@std::rt::lang_start_internal:183)
    std::panic::catch_unwind (@std::rt::lang_start_internal:183)
    std::rt::lang_start_internal (@std::rt::lang_start_internal:183)
    std::rt::lang_start (@std::rt::lang_start:13)
    main (@main:10)
    __libc_start_call_main (@__libc_start_call_main:29)
    __libc_start_main_impl (@__libc_start_main@@GLIBC_2.34:43)
    _start (@_start:15)
    

    I am also attaching relevant Rust code and disassembled shader SPIR-V code:

    • main.rs.txt - This is the entrypoint of my main crate, which uses Emu;
    • stack.rs.txt - This is a module shared between my shader and my main crate, which is used in shader parameter and DeviceBox definitions;
    • lib.rs.txt - This is the entrypoint of my shader, where the accepted storage buffers are listed;
    • main_shader_disass.txt - This is the disassembled version of the SPIR-V shader compiled with rust-gpu.

    Below are extracts from the above source files in which the offending parameters are declared:

    (in main.rs)

        let spirv = SpirvBuilder::new()
            .set_entry_point_name("main")
            .add_param_mut::<[u32]>() // alpha
            .add_param_mut::<[StackSym]>() // stack
            .add_param_mut::<[usize]>() // gives_stack
            .add_param_mut::<[u32]>() // prec_mat
            .add_param::<usize>() // length
            .add_param::<usize>() // chunk_size
            .add_param::<u32>() // term_thresh
            .set_code_with_u8(std::io::Cursor::new(code))?
            .build();
        let c = compile::<Spirv<_>, SpirvCompile, _, GlobalCache>(spirv)?.finish()?;
    

    The segfault happens on the last line.

    (in lib.rs)

    #[spirv(compute(threads(4)))]
    pub fn main(
        #[spirv(global_invocation_id)] id: UVec3,
        #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] alpha: &mut [u32],
        #[spirv(storage_buffer, descriptor_set = 0, binding = 1)] stack: &mut [StackSym],
        #[spirv(storage_buffer, descriptor_set = 0, binding = 2)] gives_stack: &mut [usize],
        #[spirv(storage_buffer, descriptor_set = 0, binding = 3)] prec_mat: &mut [u32],
        #[spirv(storage_buffer, descriptor_set = 0, binding = 4)] length: &mut usize,
        #[spirv(storage_buffer, descriptor_set = 0, binding = 5)] chunk_size: &mut usize,
        #[spirv(storage_buffer, descriptor_set = 0, binding = 6)] term_thresh: &mut u32,
    ) { // ...
    

    I understand that the issue is likely related to NVIDIA's Vulkan implementation, but perhaps you have seen this kind of issue before. Thank you in advance.
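    Edit: one thing worth ruling out (purely an assumption on my part, not a confirmed cause) is a size mismatch for the `usize` parameters passed via `add_param`: on a 64-bit host `usize` is 8 bytes, while integer scalars on the shader side are commonly 32-bit in SPIR-V. A minimal, dependency-free sketch of the host-side sizes:

```rust
use std::mem::size_of;

fn main() {
    // Host-side sizes of the scalar types used in the add_param calls above.
    // On a 64-bit target, usize is 8 bytes; a 32-bit index type avoids any
    // ambiguity between the host buffer layout and the compiled shader.
    println!("usize: {} bytes", size_of::<usize>());
    println!("u32:   {} bytes", size_of::<u32>()); // always 4 bytes
}
```

    If the sizes disagree, switching the scalar parameters to `u32` on both the host and the shader side removes the ambiguity.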

    opened by mikidep 0
  • Bump smallvec from 1.2.0 to 1.8.0

    Bumps smallvec from 1.2.0 to 1.8.0.

    Release notes

    Sourced from smallvec's releases.

    v1.8.0

    • Add optional support for the arbitrary crate (#275).

    v1.7.0

    • new_const and from_const constructors for creating a SmallVec in const contexts. Requires Rust 1.51 and the optional const_new feature. (#265)

    v1.6.1

    • Fix a possible buffer overflow in insert_many (#252, #254).

    v1.6.0

    • The "union" feature is now compatible with stable Rust 1.49 (#248, #247).
    • Fixed warnings when compiling with Rust 1.51 nightly (#242, #246).

    v1.5.1

    • Improve performance of push (#241).

    v1.5.0

    • Add the append method (#237).
    • Add support for more array sizes between 17 and 31 (#234).
    • Don't panic on deserialization errors (#238).

    v1.4.2

    • insert_many no longer leaks elements if the provided iterator panics (#213).
    • The unstable const_generics and specialization features are updated to work with the most recent nightly Rust toolchain (#232).
    • Internal code cleanup (#229, #231).

    v1.4.1

    • Don't allocate when the size of the element type is zero. Allocating zero bytes is undefined behavior. (#228)

    v1.4.0

    • Add try_reserve, try_reserve_exact, and try_grow methods (#214).

    v1.3.0

    • Add a new unstable const_generics feature (#204).
    • Improve inlining of constructor functions (#206).
    • Add a slice.to_smallvec() convenience method (#203).
    • Documentation and testing improvements.
    Commits
    • 0a4fdff Version 1.8.0
    • 6d0dea5 Auto merge of #275 - as-com:arbitrary-support, r=mbrubeck
    • 9bcd950 Add support for arbitrary
    • 7cbb3b1 Auto merge of #271 - saethlin:drain-aliasing-test, r=jdm
    • 0fced9d Test for drains that shift the tail, when inline
    • 218e0bb Merge pull request #270 from servo/github-actions
    • 52c50af Replace TravisCI with Github Actions.
    • 5ae217a Include the cost of shifts in insert/remove benchmarks (#268)
    • 58edc0e Version 1.7.0
    • 1e4b151 Added feature const_new which enables SmallVec::new_const() (#265)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump futures-task from 0.3.4 to 0.3.21

    Bumps futures-task from 0.3.4 to 0.3.21.

    Release notes

    Sourced from futures-task's releases.

    0.3.21

    • Fix potential data race in FlattenUnordered that introduced in 0.3.20 (#2566)

    0.3.20

    • Fix stacked borrows violations when -Zmiri-tag-raw-pointers is enabled. This raises MSRV of futures-task to 1.45. (#2548, #2550)
    • Change FuturesUnordered to respect yielding from future (#2551)
    • Add StreamExt::{flatten_unordered, flat_map_unordered} (#2083)

    0.3.19

    • Remove unstable read-initializer feature (#2534)
    • Fix panic in FuturesUnordered (#2535)
    • Fix compatibility issue with FuturesUnordered and tokio's cooperative scheduling (#2527)
    • Add StreamExt::count (#2495)

    0.3.18

    • Fix unusable Sink implementation on stream::Scan (#2499)
    • Make task::noop_waker_ref available without std feature (#2505)
    • Add async LineWriter (#2477)
    • Remove dependency on proc-macro-hack. This raises MSRV of utility crates to 1.45. (#2520)

    0.3.17

    • Use FuturesOrdered in join_all (#2412)
    • Add {future, stream}::poll_immediate (#2452)
    • Add stream_select! macro (#2262)
    • Implement Default for OptionFuture (#2471)
    • Add Peekable::{peek_mut, poll_peek_mut} (#2488)
    • Add BufReader::seek_relative (#2489)

    0.3.16

    • Add TryStreamExt::try_chunks (#2438)
    • Add StreamExt::{all, any} (#2460)
    • Add stream::select_with_strategy (#2450)
    • Update to new io_slice_advance interface (#2454)

    0.3.15

    • Use #[proc_macro] at Rust 1.45+ to fix an issue where proc macros don't work with rust-analyzer (#2407)
    • Support targets that do not have atomic CAS on stable Rust (#2400)
    • futures-test: Add async #[test] function attribute (#2409)
    • Add stream::abortable (#2410)
    • Add FuturesUnordered::clear (#2415)
    • Implement IntoIterator for FuturesUnordered (#2423)
    • Implement Send and Sync for FuturesUnordered iterators (#2416)
    • Make FuturesUnordered::iter_pin_ref public (#2423)
    • Add SelectAll::clear (#2430)
    • Add SelectAll::{iter, iter_mut} (#2428)
    • Implement IntoIterator for SelectAll (#2428)
    • Implement Clone for WeakShared (#2396)

    0.3.14

    • Add future::SelectAll::into_inner (#2363)

    ... (truncated)

    Changelog

    Sourced from futures-task's changelog.

    0.3.21 - 2022-02-06

    • Fix potential data race in FlattenUnordered that introduced in 0.3.20 (#2566)

    0.3.20 - 2022-02-06

    NOTE: This release has been yanked due to a bug fixed in 0.3.21.

    • Fix stacked borrows violations when -Zmiri-tag-raw-pointers is enabled. This raises MSRV of futures-task to 1.45. (#2548, #2550)
    • Change FuturesUnordered to respect yielding from future (#2551)
    • Add StreamExt::{flatten_unordered, flat_map_unordered} (#2083)

    0.3.19 - 2021-12-18

    • Remove unstable read-initializer feature (#2534)
    • Fix panic in FuturesUnordered (#2535)
    • Fix compatibility issue with FuturesUnordered and tokio's cooperative scheduling (#2527)
    • Add StreamExt::count (#2495)

    0.3.18 - 2021-11-23

    NOTE: This release has been yanked. See #2529 for details.

    • Fix unusable Sink implementation on stream::Scan (#2499)
    • Make task::noop_waker_ref available without std feature (#2505)
    • Add async LineWriter (#2477)
    • Remove dependency on proc-macro-hack. This raises MSRV of utility crates to 1.45. (#2520)

    0.3.17 - 2021-08-30

    • Use FuturesOrdered in join_all (#2412)
    • Add {future, stream}::poll_immediate (#2452)
    • Add stream_select! macro (#2262)
    • Implement Default for OptionFuture (#2471)
    • Add Peekable::{peek_mut, poll_peek_mut} (#2488)
    • Add BufReader::seek_relative (#2489)

    0.3.16 - 2021-07-23

    • Add TryStreamExt::try_chunks (#2438)
    • Add StreamExt::{all, any} (#2460)
    • Add stream::select_with_strategy (#2450)
    • Update to new io_slice_advance interface (#2454)

    0.3.15 - 2021-05-11

    • Use #[proc_macro] at Rust 1.45+ to fix an issue where proc macros don't work with rust-analyzer (#2407)
    • Support targets that do not have atomic CAS on stable Rust (#2400)
    • futures-test: Add async #[test] function attribute (#2409)
    • Add stream::abortable (#2410)

    ... (truncated)

    Commits
    • fc1e325 Release 0.3.21
    • 20279eb FlattenUnordered: improve wakers behavior (#2566)
    • 75dca5a Fix MSRV in futures-task readme
    • 55281c8 Release 0.3.20
    • 591b982 Redefine executor and compat modules in futures crate (#2564)
    • 94b508b Basic StreamExt::{flatten_unordered, flat_map_unordered} impls (#2083)
    • dca16fa Do not auto-create PR on fork
    • a9795a9 Automatically creates PR when no_atomic_cas.rs needs to be updated
    • 4841888 Update comments in build scripts
    • 85706b6 Clean up ci/no_atomic_cas.sh
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump futures-util from 0.3.4 to 0.3.21

    Bumps futures-util from 0.3.4 to 0.3.21.


    dependencies 
    opened by dependabot[bot] 0
  • Bump crossbeam-queue from 0.2.1 to 0.2.3

    Bumps crossbeam-queue from 0.2.1 to 0.2.3.


    dependencies 
    opened by dependabot[bot] 0
Owner: Caleb Winston
I'm currently working on computing at scale, computing at the edge, neuroscience, lab automation...