Zero-cost asynchronous programming in Rust

Last update: Aug 7, 2022



Documentation | Website

futures-rs is a library providing the foundations for asynchronous programming in Rust. It includes key trait definitions like Stream, as well as utilities like join!, select!, and various futures combinator methods which enable expressive asynchronous control flow.


Add this to your Cargo.toml:

[dependencies]
futures = "0.3"

Now, you can use futures-rs:

use futures::future::Future;

The current futures-rs requires Rust 1.39 or later.

Feature std

futures-rs can be used without the standard library, such as in bare-metal environments, although with a significantly reduced API surface. To use futures-rs in a #[no_std] environment, disable the default features:

futures = { version = "0.3", default-features = false }


This project is licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.


Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in futures-rs by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

  • 1. Consider having polling an error represent the final Stream value

    Consider having polling an error represent the final value. In other words, a poll that returns an error means that poll should never be called again. In the case of a Stream, this means that an error indicates the stream has terminated.

    This issue is a placeholder for the associated discussion.

    cc @aturon

    Reviewed by carllerche at 2016-10-12 16:13
  • 2. Consider passing the Task that is driving the future to Future::poll and renaming the function

    Currently Future::poll seems to be expected to call task::park which then fetches the current task from TLS and panics if there is no task in TLS.

    This results in an unintuitive API (it's not clear at first glance that poll()'s expected interface/implementation is related to tasks) and a potential run-time failure that could be checked at compile time.

    So my suggestion is to instead pass the task driving the Future explicitly to Future::poll as an additional function argument, either as a Task reference, a closure calling task::park() (if that's enough), or a similar mechanism, instead of storing it in the TLS variable CURRENT_TASK.

    Also, "poll" is a confusing name, since it creates the expectation that it is a function that anyone can call to get the value of the future if it has already completed, but it is in fact an "internal" function that drives future execution instead and currently even panics if called outside a task.

    Something like "drive", "execute", "run", "run_next_step" or similar would be a better name.
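    For historical context, the design that eventually shipped in std (Rust 1.36) resolved this in essentially the suggested direction: the task context is passed to poll as an explicit argument, so no TLS lookup or runtime panic is involved. A minimal sketch against today's std API (the no-op waker here is hand-rolled purely for illustration):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that is immediately ready with a value.
struct Ready(i32);

impl Future for Ready {
    type Output = i32;
    // The task context arrives as an explicit argument, so calling `poll`
    // outside a task context is a compile-time impossibility, not a panic.
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        Poll::Ready(self.0)
    }
}

// Hand-rolled waker that does nothing, just enough to build a `Context`.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RawWaker::new(std::ptr::null(), &VTABLE), // clone
        |_| {},                                       // wake
        |_| {},                                       // wake_by_ref
        |_| {},                                       // drop
    );
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    assert_eq!(Pin::new(&mut Ready(7)).poll(&mut cx), Poll::Ready(7));
}
```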

    Reviewed by rjuse at 2016-09-06 19:43
  • 3. Task system overhaul

    Updated description

    The primary change made in this PR is to restructure memory management and notifications throughout the "task system" in the futures crate. It is intended that this will have very little impact, if any, on consumers of futures. The targets of this series of changes are implementations of futures runtimes (e.g. crates like tokio-core), for which it enables a suite of optimizations that were not previously feasible. One major use case that is now enabled is usage of the task and executor modules in the no_std ecosystem. This means that bare-metal applications of futures should be able to use the same task system that the std-based futures ecosystem uses.

    One of the largest changes being made to support this is an update to the memory management of objects behind Task handles. Previously, Arc<Unpark> instances had to be passed into the various Spawn::poll_* functions; new Spawn::poll_*_notify functions have been added which operate with a NotifyHandle instead. A NotifyHandle is conceptually very similar to an Arc<Unpark> instance, but it works through an unsafe trait, UnsafeNotify, to manage memory instead of requiring the use of Arc. You can still use Arc safely, however, if you'd like.

    In addition to supporting more forms of memory management, the poll_*_notify functions also take a new id parameter. This parameter is intended to be an opaque bag-of-bits to the futures crate itself but runtimes can use this to identify the future being notified. This is intended to enable situations where the same instance of a NotifyHandle can be used for all futures executed by using the id field to distinguish which future is ready when it gets notified.
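    To illustrate the shape of this design, here is a sketch of an executor-side notifier that uses the id to record which future became ready. The trait below is a local stand-in for the Notify-style interface described above, not the crate's actual definition, and the id type is assumed rather than settled:

```rust
use std::sync::Mutex;

// Local stand-in for the notify interface described above: one instance
// per executor, with `id` identifying which future was notified.
trait Notify: Send + Sync {
    fn notify(&self, id: usize);
}

// A toy executor-side structure: notified ids are queued for later polling.
struct ReadyQueue {
    ready: Mutex<Vec<usize>>,
}

impl Notify for ReadyQueue {
    fn notify(&self, id: usize) {
        // Record which task became ready; the executor drains this later.
        self.ready.lock().unwrap().push(id);
    }
}

fn main() {
    let q = ReadyQueue { ready: Mutex::new(Vec::new()) };
    q.notify(3);
    q.notify(7);
    assert_eq!(*q.ready.lock().unwrap(), vec![3, 7]);
}
```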

    API Additions

    • A FuturesUnordered::push method was added and the FuturesUnordered type itself was completely rewritten to efficiently track a large number of futures.
    • A Task::will_notify_current method was added with a slightly different implementation than Task::is_current but with stronger guarantees and documentation wording about its purpose.

    Compatibility Notes

    As with all 0.1.x releases this PR is intended to be 100% backwards compatible. All code that previously compiled should continue to do so with these changes. As with other changes, though, there are also some updates to be aware of:

    • The task::park function has been renamed to task::current.
    • The Task::unpark function has been renamed to Task::notify, and in general terminology around "unpark" has shifted to terminology around "notify"
    • The Unpark trait has been deprecated in favor of the Notify trait mentioned above.
    • The UnparkEvent structure has been deprecated. It currently should perform the same as it used to, but it's planned that in a future 0.1.x release the performance will regress for crates that have not transitioned away. The primary primitive to replace this is the addition of a push function on the FuturesUnordered type. If this does not help implement your use case though, please let us know!
    • The Task::is_current method is now deprecated, and you likely want to use Task::will_notify_current instead, but let us know if this doesn't suffice!

    Original description

    This PR is a work in progress

    I'm submitting this PR early to hopefully try to illustrate my thoughts and get some early feedback.


    • [x] Land #442
    • [x] Update tokio-core
    • [x] Run sccache tests using new task system
    • [x] Switch Arc<Unpark> to UnparkHandle #432
    • [x] Decide on #312 (leaning towards yes)
    • [x] Allow executors to customize wait behavior #360 (deferring this until later)
    • [x] Fix Task::is_current
    • [x] Remove Notify::is_current, I don't think this is needed anymore.
    • [x] Consider GetNotifyHandle
    • [x] Should ref_inc -> ref_dec be moved to UnsafeNotify (@alexcrichton says no).
    • [x] Consider getting rid of poll_*_notify on Stream and Sink. Also, maybe name it poll_notify if it is only for Future.
    • [x] Merge
    • [x] u64 vs. usize


    The previous implementation of the task system required a number of allocations per task instance. Each task required a dedicated Arc<Unpark> handle, which meant that executors needed at least two allocations per task.

    Things get worse when using with_unpark_event as nested calls to with_unpark_event result in Vec allocation and cloning during each call to task::park.

    This commit provides an overhaul to the task system to work around these problems. The Unpark trait is changed so that only one instance is required per executor. In order to identify which task is being unparked, Unpark::unpark takes an unpark_id: u64 argument.

    with_unpark_event is removed in favor of UnparkContext which satisfies a similar end goal, but requires only a single allocation per lifetime of the UnparkContext.

    The new Unpark trait

    In general, tasks are driven to completion by executors, and executors are able to handle large numbers of tasks. As such, the Unpark trait has been tweaked to require a single allocation per executor instead of one per task. The idea is that the executor creates one Arc<Unpark> handle and uses the unpark_id: u64 to identify which task is unparked.

    In the case of tokio-core, each task is stored in a slab and the unpark_id is the slab index. Now, given that an Arc is no longer used, a slab slot may be released and repurposed for a different task while there are still outstanding Task handles referencing the now-released task.

    There are two potential ways to deal with this.

    a) Not care. Futures need to be able to handle spurious wake-ups already. Spurious wake-ups can be reduced by splitting the u64 into 28 bits for the slab offset and using the rest of the u64 as a slot usage counter.

    b) Use Unpark::ref_inc and Unpark::ref_dec to allow the executor implementation to handle its own reference counting.

    Option b) would allow an executor implementation to store a pointer as the unpark_id and the ref_inc and ref_dec allow for atomic reference counting. This could be used in cases where using a slab is not an option.
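    Option a) can be sketched as follows (the 28-bit split is the one suggested above; pack and unpack are hypothetical helpers for illustration, not crate API):

```rust
// Pack a slab slot index and a per-slot usage counter into one u64,
// so a recycled slot produces a different id and stale notifications
// become detectable as spurious.
const SLOT_BITS: u32 = 28;
const SLOT_MASK: u64 = (1 << SLOT_BITS) - 1;

fn pack(slot: u64, counter: u64) -> u64 {
    (counter << SLOT_BITS) | (slot & SLOT_MASK)
}

fn unpack(id: u64) -> (u64, u64) {
    (id & SLOT_MASK, id >> SLOT_BITS)
}

fn main() {
    let id = pack(5, 2);
    assert_eq!(unpack(id), (5, 2));
    // The same slot reused later carries a bumped counter, so the id differs:
    assert_ne!(pack(5, 2), pack(5, 3));
}
```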


    This is quite similar in spirit to with_unpark_event except that it requires significantly fewer allocations when used in a nested situation.

    It does have some different behavior. Given the following:

    // Currently in task A
    let my_context = UnparkContext::new(my_unpark);
    my_context.with(0, || {
        let task = task::park();
        thread::spawn(move || {
            my_executor.spawn(move || {
                // Currently in task B
                my_context.with(0, || {
                    // some work here
                });
            });
        });
    });
    Task B will be the root task that is notified.

    Reviewed by carllerche at 2017-03-31 21:32
  • 4. Yank futures 0.2?

    Since futures 0.2 is considered just a snapshot that libraries shouldn't expose, most of the ecosystem is still using futures 0.1. It is not expected that 0.2 will see any more development, nor that the ecosystem will move to it. Instead, work is ongoing to try to get futures into libstd.

    However, the published version is 0.2, and the version that is shown in the documentation is also 0.2. This leads to a lot of confusion when new users try to get started in the ecosystem, since they don't understand why futures returned by libraries don't implement futures::Future (from v2) (example).

    Could it just be yanked/deprecated/etc., with a 0.1.22 published so that the listings suggest 0.1 until the new version is actually ready?

    Reviewed by seanmonstar at 2018-06-18 19:37
  • 5. Can we find a better name for select?

    When the crossbeam-channel RFC was introduced a few months ago, I found it very hard to understand what select meant. I have the same issue in futures-rs, where I find the name equally opaque as in crossbeam. Luckily I have since been pointed toward some Unix history which explains the design and naming behind select, but I feel we can still do better than requiring a history lesson for a function name to make sense.

    Therefore I would like to propose that select be renamed slightly to make it clearer what it is actually doing, e.g. to select_any or select_any_ready (or any other name that is deemed better). The presence of select_ok already makes the naming tough, and not having a pure select breaks somewhat with the precedent for this functionality (it is named select in Unix and Go, for example). But I think many Rust users are not familiar with these precedents and will be confused by the name (myself included). At the same time, by keeping the word "select" in the name, it remains easy to search for by those who do know the precedent (and if "select" is kept as the first part of the name, anyone looking to type select in their IDE will also be helped by autocomplete).

    I feel this would be a very rustic improvement to make; it should still be easy for veterans to use, but welcomes newcomers at the same time, by optimizing for readability over familiarity.

    Reviewed by KasMA1990 at 2018-02-13 17:02
  • 6. futures-macro (pulled in by futures-util) appears not to compile on musl

    I have a CI pipeline that tests my project on glibc and musl and it appears that when I pull in futures-util as a dependency, this compiles just fine on glibc but does not compile on musl.

    Here is the build log:

    This appears to be due to futures-macro exporting all of its procedural macros with #[proc_macro_hack] which is not allowed on musl.

    I know that rust and musl have had problems in the past related to proc macros and I'm wondering if this is the right place to report this or if this should go into the rust repo.

    Reviewed by jbaublitz at 2020-10-25 14:59
  • 7. Shared can interact badly with futures that don't always poll their subfutures

    (I'm opening this issue to continue an earlier discussion.)

    The problem is that Shared, as it is currently designed, can interact poorly with certain futures, such as the ModedFuture sketched below. We should either formalize some reason why ModedFuture is an invalid implementation of Future, or we should redesign Shared to accommodate this kind of situation.

    extern crate futures;
    extern crate tokio_core;

    use futures::{Future, Poll};
    use futures::sync::oneshot;
    use std::rc::Rc;
    use std::cell::RefCell;

    enum Mode { Left, Right }

    struct ModedFutureInner<F> where F: Future {
        mode: Mode,
        left: F,
        right: F,
        task: Option<::futures::task::Task>,
    }

    struct ModedFuture<F> where F: Future {
        inner: Rc<RefCell<ModedFutureInner<F>>>,
    }

    struct ModedFutureHandle<F> where F: Future {
        inner: Rc<RefCell<ModedFutureInner<F>>>,
    }

    impl<F> ModedFuture<F> where F: Future {
        pub fn new(left: F, right: F, mode: Mode) -> (ModedFutureHandle<F>, ModedFuture<F>) {
            let inner = Rc::new(RefCell::new(ModedFutureInner {
                left: left, right: right, mode: mode, task: None,
            }));
            (ModedFutureHandle { inner: inner.clone() }, ModedFuture { inner: inner })
        }
    }

    impl<F> ModedFutureHandle<F> where F: Future {
        pub fn switch_mode(&mut self, mode: Mode) {
            self.inner.borrow_mut().mode = mode;
            if let Some(t) = self.inner.borrow_mut().task.take() {
                // The other future may have become ready while we were ignoring it.
                t.unpark();
            }
        }
    }

    impl<F> Future for ModedFuture<F> where F: Future {
        type Item = F::Item;
        type Error = F::Error;
        fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
            let ModedFutureInner { ref mut mode, ref mut left, ref mut right, ref mut task } =
                *self.inner.borrow_mut();
            *task = Some(::futures::task::park());
            match *mode {
                Mode::Left => left.poll(),
                Mode::Right => right.poll(),
            }
        }
    }

    pub fn main() {
        let mut core = ::tokio_core::reactor::Core::new().unwrap();
        let handle = core.handle();

        let (tx, rx) = oneshot::channel::<u32>();
        let f1 = rx.shared();
        let f2 = f1.clone();

        let (mut mfh, mf) = ModedFuture::new(
            Box::new(f1.map_err(|_| ()).map(|v| *v)) as Box<Future<Item=u32, Error=()>>,
            Box::new(::futures::future::empty()) as Box<Future<Item=u32, Error=()>>,
            Mode::Left);

        let (tx3, rx3) = oneshot::channel::<u32>();
        handle.spawn(f2.map(move |x| tx3.complete(*x)).map_err(|_| ()));
        handle.spawn(mf.map(|_| ()).map_err(|_| ()));

        tx.complete(11); // It seems like this ought to cause f2 and then rx3 to get resolved.
        core.run(rx3).unwrap(); // This hangs forever.
    }
    Reviewed by dwrensha at 2017-01-05 20:47
  • 8. How can I wait for a future in multiple threads

    I can't seem to make this work, perhaps I'm being dumb! In any case, I think this should be in the documentation as an example.

    In my case, I have background processes loading data for a given ID. Obviously if a piece of data is already being loaded then I want to join the existing waiters rather than starting a new background process.

    I've implemented this using a Map<Id, Vec<Complete>>; the first loader then triggers the completion of the subsequent loaders when it has completed. This is a lot of boilerplate.

    I've tried all sorts of things to get this to work but somehow I can't get anything else to compile. Waiting for a future consumes it, so I can only do that once. I have tried to replace the future with a new one and wait on that, like I might do in JavaScript, but this also doesn't work.

    If anyone can show me an example then that would be great, if not then I'll probably create an abstraction around my current method and submit this as a pull request for the library.

    Reviewed by jamespharaoh at 2016-09-22 14:42
  • 9. Consider supporting Reactive Streams and Reactive Sockets

    I am one of the developers of one of the libraries involved, and I just stumbled upon this nice library (I am a Rust lurker, mostly). Futures are a very nice building block for asynchronous programs, but eventually one reaches the point where some kind of streaming abstraction is needed. The simplest streaming approach is the Rx-style of chained Observables; however, as we and others found out, with asynchronous call-chains backpressure becomes an issue, as blocking is no longer available to throttle a producer. To solve this issue, the reactive-streams (RS) standard was created, backed by various companies interested in the JVM landscape. This set of interoperability interfaces was designed by multiple teams together, and the standard is on its way to becoming part of JDK9. There is also an effort to expose its semantics as a wire-level protocol, which nicely complements the RS standard (which mainly focuses on in-JVM asynchronous, ordered, backpressured communication).

    Since I imagine that eventually the need for asynchronous streams will arise here, I think these standards can be interesting for Rust, too. While RS might not be perfect, it was a result of a long design process and now has mostly consensus about it in the JVM land, so it would be nice to see a Rust implementation that is similar enough to be easily connectable to JVMs, maybe via reactive-socket.

    Sorry for the shameless plug :)

    Reviewed by drewhk at 2016-08-29 13:17
  • 10. Having both "sync" and "sink" in the API makes verbal discussion v. confusing

    The other day I had a conversation in which we were a few mins in before we realized we were talking at cross purposes because of a sync/sink confusion.

    Maybe rename futures::sync? Or "sink" -> "drain" (or something)?

    Reviewed by jsgf at 2017-02-20 20:37
  • 11. Consider allowing futures to require specialized executors?

    Forgive me if this is not the appropriate place to be discussing things like this, but has the possibility of allowing futures to specify what sort of executors they may be spawned on been examined at all? This would allow libraries to support extra features like task thread-affinity or priority.

    The way I'm thinking of doing this would be to make the following changes to Future and Context:

    pub trait Future<S: Spawn + ?Sized = dyn Spawn> {
        type Output;
        fn poll(self: PinMut<Self>, cx: &mut Context<S>) -> Poll<Self::Output>;
    }

    pub struct Context<'a, S: Spawn + 'a + ?Sized = dyn Spawn> {
        local_waker: &'a LocalWaker,
        spawner: &'a mut S,
    }

    // And appropriate changes to method signatures in Context::new and Context::with_spawner

    Existing futures would not need to change because of the defaulted type parameter, but futures could require additional features by changing S to something else. For instance, if a future has a non-Send subfuture, one could create a new trait SpawnLocal: Spawn which takes LocalFutureObj instead of FutureObj, and implement Future<dyn SpawnLocal>.

    If anyone is interested, I have a sort of proof-of-concept for the changes in libcore here. (It's not an actual fork of libcore though, since I'm not quite sure how to do that.)

    Reviewed by AlphaModder at 2018-08-12 00:11
  • 12. WakerRef depends on libstd implementation detail for soundness

    It depends on the fact that Waker always stores the data of the waker behind a reference, never inline. If the data were stored inline, WakerRef could be used to cause a use-after-free: store a Mutex<Vec<u8>> in the waker, then through a WakerRef derived from that waker mutate the vec to force it to reallocate, and then access the vec stored in the original waker, which is now dangling because WakerRef stores a copy of the waker rather than a reference.

    Reviewed by bjorn3 at 2022-07-23 08:46
  • 13. Issue regarding `async` with multithreading

    Hello, I am having a bit of trouble with multithreading when I am calling async functions; a simplified example is as follows:

    use tokio;
    use futures::{stream, StreamExt};
    use std::time;

    async fn changer(s: Vec<String>) -> String {
        s[0].replace("a", "A").to_owned()
    }

    async fn caller(vec: Vec<Vec<String>>) -> Vec<String> {
        let threads = stream::iter(vec).map(|x| {
            tokio::spawn(async move { changer(x).await })
        });
        let mut res = Vec::new();
        threads.for_each(|t| async move {
            // res.push(t.unwrap());
            println!("{:?}", t)
        }).await;
        res
    }

    #[tokio::main]
    async fn main() {
        let now = time::Instant::now();
        let vec = vec![/* ... */];
        let res = caller(vec).await;
        println!("{:?}, time elapsed: {:?}", res, now.elapsed())
    }

    I want to be able to return the results from the threads in caller() instead of printing them. I am able to println! the results; however, Rust wouldn't compile when I tried to push the result of each individual thread, giving the following error:

    error: captured variable cannot escape `FnMut` closure body
      --> src/
    16 |       let mut res = Vec::new();
       |           ------- variable defined here
    17 |       threads.for_each(|t| async  {
       |  ________________________-_^
       | |                        |
       | |                        inferred to be a `FnMut` closure
    18 | |         res.push(t.unwrap());
       | |         --- variable captured here
    19 | |         // println!("{:?}", t)
    20 | |     }).await;
       | |_____^ returns an `async` block that contains a reference to a captured variable, which then escapes the closure body
       = note: `FnMut` closures only have access to their captured variables while they are executing...
       = note: ...therefore, they cannot allow references to captured variables to escape
    error: could not compile `playground` due to previous error

    Multiple attempts have been unsuccessful, and I am wondering what the proper way to do this is. I just need to be able to store and then return the results after the threads complete.

    Reviewed by jxuanli at 2022-07-10 20:20
  • 14. Proposal: Add a future that can be paused and restarted

    Sometimes you'd like to temporarily suspend a future until something else happens. I propose a Pausable<F> API that allows a Future to be paused and later restarted.

    Public API

    pub trait FutureExt {
        /* ... */
        fn pausable(self) -> Pausable<Self> where Self: Sized;
    }

    pub struct Pausable<F> { /* ... */ }

    pub struct PausableHandle { /* ... */ }

    impl<F: Future> Future for Pausable<F> {
        type Output = F::Output;
        /* ... */
    }

    impl<F> Pausable<F> {
        pub fn pause_handle(&self) -> PausableHandle { /* ... */ }
    }

    impl PausableHandle {
        pub fn pause(&self) { /* ... */ }
        pub fn unpause(&self) { /* ... */ }
    }

    Reference Level

    This would be implemented by having a SharedState struct between Pausable and PausableHandle through an Arc that looks like this:

    struct SharedState {
        paused: AtomicBool,
        waker: AtomicWaker,
    }

    PausableHandle would then be implemented like this (ignoring atomics):

    pub struct PausableHandle {
        state: Arc<SharedState>,
    }

    impl PausableHandle {
        pub fn pause(&self) {
            self.state.paused.store(true, Ordering::Release);
        }
        pub fn unpause(&self) {
            self.state.paused.store(false, Ordering::Release);
            // Wake the future so it can make progress again.
            self.state.waker.wake();
        }
    }

    Pausable<F> would then be implemented like this:

    pub struct Pausable<F> {
        fut: F,
        state: Arc<SharedState>,
    }

    impl<F> Pausable<F> {
        pub fn handle(&self) -> PausableHandle {
            PausableHandle { state: self.state.clone() }
        }
        // also have pause() and unpause() from the handle struct here
    }

    impl<F: Future> Future for Pausable<F> {
        type Output = F::Output;
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
            if self.state.paused.load(Ordering::Acquire) {
                // store waker so we're woken up on unpause
                self.state.waker.register(cx.waker());
                Poll::Pending
            } else {
                // otherwise, poll the inner future
                /* ... */
            }
        }
    }
    Reviewed by notgull at 2022-07-05 19:59
  • 15. Proposal: add `AsyncReadExt::by_ref`


    When reading a known number of bytes from a futures::io::AsyncRead, it is useful to use the idiom

    let mut a = vec![];
    let mut b = vec![];
    reader.by_ref().take(length0 as u64).read_to_end(&mut a).await?;
    reader.take(length1 as u64).read_to_end(&mut b).await?;

    however, this is currently not possible because a by_ref method does not exist like Read::by_ref does, which makes it impossible to implement the idiom above.

    The proposal is to add such a method. Alternatively, is there any other way of doing the above? ^^

    Reviewed by jorgecarleitao at 2022-06-29 13:02
  • 16. Concurrent stream combinators

    Currently we have Stream::for_each_concurrent, buffer_unordered, and buffered, which use FuturesUnordered to run streams concurrently. In my experience, calling the combinators "buffered" is not great for discoverability; it might be more obvious if they were named concurrent/concurrent_unordered (or similar) with a limit: Option<usize> parameter. Or perhaps just introduce concurrent and make the limit field in buffered non-optional. (for_each_concurrent is also trivially implemented as stream.concurrent().for_each(...), so it may not be worth having.)

    Reviewed by ibraheemdev at 2022-06-05 04:03