Overview

futures-rs

Zero-cost asynchronous programming in Rust

Documentation | Website

futures-rs is a library providing the foundations for asynchronous programming in Rust. It includes key trait definitions like Stream, as well as utilities like join!, select!, and various futures combinator methods which enable expressive asynchronous control flow.

Usage

Add this to your Cargo.toml:

[dependencies]
futures = "0.3"

Now, you can use futures-rs:

use futures::future::Future;

The current futures-rs requires Rust 1.39 or later.
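
For a quick end-to-end check, here is a minimal sketch (assuming the default features, which enable the bundled executor and the async-await macros) that runs two futures concurrently with join! and drives them to completion with block_on:

use futures::executor::block_on;
use futures::join;

async fn add_one(x: u32) -> u32 {
    x + 1
}

fn main() {
    // block_on drives a future to completion on the current thread;
    // join! waits for both futures and yields both outputs.
    let (a, b) = block_on(async { join!(add_one(1), add_one(2)) });
    assert_eq!((a, b), (2, 3));
}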

Feature std

futures-rs also works without the standard library, such as in bare-metal environments; however, it then has a significantly reduced API surface. To use futures-rs in a #![no_std] environment, use:

[dependencies]
futures = { version = "0.3", default-features = false }

License

This project is licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in futures-rs by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Comments
  • Consider having polling an error represent the final Stream value

    Consider having polling an error represent the final Stream value

    Consider having polling an error represent the final value. In other words, a poll that returns an error means that poll should never be called again. In the case of a Stream, this means that an error indicates the stream has terminated.

    This issue is a placeholder for the associated discussion.

    cc @aturon

    C-feature-request 
    opened by carllerche 57
  • Consider passing the Task that is driving the future to Future::poll and renaming the function

    Consider passing the Task that is driving the future to Future::poll and renaming the function

    Currently Future::poll seems to be expected to call task::park which then fetches the current task from TLS and panics if there is no task in TLS.

    This results in an unintuitive API (it's not clear at first glance that poll()'s expected interface/implementation is related to tasks) and a potential run-time failure that could be checked at compile time.

    So my suggestion is to instead pass the task driving the Future explicitly to Future::poll as an additional function argument, either as a Task reference, a closure calling task::park() (if that's enough), or a similar mechanism, instead of storing it in the TLS variable CURRENT_TASK.

    Also, "poll" is a confusing name, since it creates the expectation that it is a function that anyone can call to get the value of the future if it has already completed, but it is in fact an "internal" function that drives future execution instead and currently even panics if called outside a task.

    Something like "drive", "execute", "run", "run_next_step" or similar would be a better name.
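
    For historical context, this is essentially the direction the API eventually took: in futures 0.3 (and std), the task handle is passed to poll explicitly through a Context argument rather than being read from thread-local storage. A minimal sketch of the current signature, for comparison:

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // A trivially-ready future; poll receives the task context explicitly
    // instead of fetching the current task from thread-local storage.
    struct Ready(u32);

    impl Future for Ready {
        type Output = u32;

        fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
            Poll::Ready(self.0)
        }
    }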

    C-feature-request 
    opened by rjuse 49
  • Task system overhaul

    Task system overhaul

    Updated description

    The primary change made in this PR is to restructure memory management and notifications throughout the "task system" in the futures crate. It is intended that this will have very little impact, if any, on consumers of futures. Implementations of futures runtimes (e.g. crates like tokio-core) are the target of this series of changes, which enables a suite of optimizations that were not previously feasible. One major use case that is now enabled is usage of the task and executor modules in the no_std ecosystem. This means that bare-metal applications of futures should be able to use the same task system that the std-based futures ecosystem uses.

    One of the largest changes being made to support this is an update to the memory management of objects behind Task handles. Previously it was required that Arc<Unpark> instances were passed into the various Spawn::poll_* functions; new Spawn::poll_*_notify functions have been added which operate on a NotifyHandle instead. A NotifyHandle is conceptually very similar to an Arc<Unpark> instance, but it works through an unsafe trait, UnsafeNotify, to manage memory rather than requiring that an Arc be used. You can still use Arc safely, however, if you'd like.

    In addition to supporting more forms of memory management, the poll_*_notify functions also take a new id parameter. This parameter is intended to be an opaque bag-of-bits to the futures crate itself, but runtimes can use it to identify the future being notified. This enables situations where a single NotifyHandle instance serves all futures run by an executor, with the id field distinguishing which future is ready when a notification arrives.
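
    To make the id mechanism concrete, here is a rough sketch against the 0.1 API this PR introduced (names and the usize id are as they eventually landed; a real runtime would push the id onto a ready queue instead of printing it):

    extern crate futures;

    use std::sync::Arc;
    use futures::executor::{self, Notify};
    use futures::{future, Async};

    // One notifier instance serves every future the executor runs; the id
    // passed to poll_future_notify comes back through Notify::notify so the
    // runtime knows which future became ready.
    struct QueueNotify;

    impl Notify for QueueNotify {
        fn notify(&self, id: usize) {
            println!("future {} is ready to be polled again", id);
        }
    }

    fn main() {
        let notify = Arc::new(QueueNotify);
        let mut task = executor::spawn(future::ok::<u32, ()>(7));
        match task.poll_future_notify(&notify, 42) {
            Ok(Async::Ready(v)) => println!("done: {}", v),
            Ok(Async::NotReady) => println!("not ready yet"),
            Err(()) => println!("error"),
        }
    }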

    API Additions

    • A FuturesUnordered::push method was added and the FuturesUnordered type itself was completely rewritten to efficiently track a large number of futures.
    • A Task::will_notify_current method was added with a slightly different implementation than Task::is_current but with stronger guarantees and documentation wording about its purpose.

    Compatibility Notes

    As with all 0.1.x releases this PR is intended to be 100% backwards compatible. All code that previously compiled should continue to do so with these changes. As with other changes, though, there are also some updates to be aware of:

    • The task::park function has been renamed to task::current.
    • The Task::unpark function has been renamed to Task::notify, and in general terminology around "unpark" has shifted to terminology around "notify".
    • The Unpark trait has been deprecated in favor of the Notify trait mentioned above.
    • The UnparkEvent structure has been deprecated. It currently should perform the same as it used to, but it's planned that in a future 0.1.x release the performance will regress for crates that have not transitioned away. The primary primitive to replace this is the addition of a push function on the FuturesUnordered type. If this does not help implement your use case though, please let us know!
    • The Task::is_current method is now deprecated, and you likely want to use Task::will_notify_current instead, but let us know if this doesn't suffice!

    Original description

    This PR is a work in progress

    I'm submitting this PR early to hopefully try to illustrate my thoughts and get some early feedback.

    Checklist

    • [x] Land #442
    • [x] Update tokio-core
    • [x] Run sccache tests using the new task system
    • [x] Switch Arc<Unpark> to UnparkHandle #432
    • [x] Decide on #312 (leaning towards yes)
    • [x] Allow executors to customize wait behavior #360 (deferring this until later)
    • [x] Fix Task::is_current
    • [x] Remove Notify::is_current, I don't think this is needed anymore.
    • [x] Consider GetNotifyHandle https://github.com/alexcrichton/futures-rs/issues/129
    • [x] Should ref_inc -> ref_dec be moved to UnsafeNotify (@alexcrichton says no).
    • [x] Consider getting rid of poll_*_notify on Stream and Sink. Also, maybe name it poll_notify if it is only for Future.
    • [x] Merge https://github.com/carllerche/futures-rs/pull/4
    • [x] u64 vs. usize

    Overview

    The previous implementation of the task system required a number of allocations per task instance. Each task required a dedicated Arc<Unpark> handle, which meant that executors needed at least two allocations per task.

    Things get worse when using with_unpark_event as nested calls to with_unpark_event result in Vec allocation and cloning during each call to task::park.

    This commit provides an overhaul to the task system to work around these problems. The Unpark trait is changed so that only one instance is required per executor. In order to identify which task is being unparked, Unpark::unpark takes an unpark_id: u64 argument.

    with_unpark_event is removed in favor of UnparkContext which satisfies a similar end goal, but requires only a single allocation per lifetime of the UnparkContext.

    The new Unpark trait

    In general, tasks are driven to completion by executors, and executors are able to handle a large number of tasks. As such, the Unpark trait has been tweaked to require a single allocation per executor instead of one per task. The idea is that the executor creates one Arc<Unpark> handle and uses the unpark_id: u64 to identify which task is unparked.

    In the case of tokio-core, each task is stored in a slab and the unpark_id is the slab index. Now, given that an Arc is no longer used, a slab slot may be released and repurposed for a different task while there are still outstanding Task handles referencing the now-released task.

    There are two potential ways to deal with this.

    a) Not care. Futures need to be able to handle spurious wake ups already. Spurious wake ups can be reduced by splitting the u64 into 28 bits for the slab offset and using the rest of the u64 as a slot usage counter (a sketch of this packing follows below).

    b) Use Unpark::ref_inc and Unpark::ref_dec to allow the executor implementation to handle its own reference counting.

    Option b) would allow an executor implementation to store a pointer as the unpark_id and the ref_inc and ref_dec allow for atomic reference counting. This could be used in cases where using a slab is not an option.
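
    To make option a) concrete, a purely illustrative packing helper might look like the following (the 28-bit split is the one suggested above; the names are hypothetical):

    const INDEX_BITS: u32 = 28;
    const INDEX_MASK: u64 = (1 << INDEX_BITS) - 1;

    // Pack a slab index and a per-slot usage counter into one unpark_id.
    fn pack(slab_index: u64, usage_counter: u64) -> u64 {
        debug_assert!(slab_index <= INDEX_MASK);
        (usage_counter << INDEX_BITS) | slab_index
    }

    // Recover (slab_index, usage_counter) from an unpark_id.
    fn unpack(unpark_id: u64) -> (u64, u64) {
        (unpark_id & INDEX_MASK, unpark_id >> INDEX_BITS)
    }

    fn main() {
        let id = pack(17, 3);
        assert_eq!(unpack(id), (17, 3));
    }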

    UnparkContext

    This is quite similar in spirit to with_unpark_event except that it requires significantly fewer allocations when used in a nested situation.

    It does have some different behavior. Given the following:

    // Currently in task A
    let my_context = UnparkContext::new(my_unpark);
    
    my_context.with(0, || {
        let task = task::park();
    
        thread::spawn(move || {
            thread::sleep(a_few_secs);
            task.unpark();
        });
    });
    
    my_executor.spawn(move || {
        // Currently in task B
    
        my_context.with(0, || {
            // some work here
        });
    });
    

    Task B will be the root task that is notified.

    opened by carllerche 39
  • Yank futures 0.2?

    Yank futures 0.2?

    Since futures 0.2 is considered just a snapshot that libraries shouldn't expose, most of the ecosystem is still using futures 0.1. It is not expected that 0.2 will see any more development, nor that the ecosystem will move to it. Instead, work is ongoing to try to get futures into libstd.

    However, the version on crates.io is 0.2, and the version that is shown on docs.rs is also 0.2. This leads to a lot of confusion when new users try to get started in the ecosystem, since they don't understand why futures returned by libraries don't implement futures::Future (from v2) (example).

    Could it just be yanked/deprecated/etc, with a 0.1.22 published to get the docs.rs and crates.io listings to suggest 0.1 until the new version is actually ready?

    opened by seanmonstar 36
  • Can we find a better name for select?

    Can we find a better name for select?

    When the crossbeam-channel RFC was introduced a few months ago, I found it very hard to understand what select meant. I have the same issue in futures-rs, where I find the name just as opaque as in crossbeam. Luckily I have since been pointed in the direction of some Unix history which explains the design and naming behind select, but I feel we can still do better than requiring a history lesson for a function name to make sense.

    Therefore I would like to propose that select is slightly renamed to make it clearer what it is actually doing, e.g. to select_any or select_any_ready (or any other name that is deemed better). The presence of select_ok already makes the naming tough, and not having a pure select breaks somewhat with the precedent for this functionality (it is named select in Unix and Go, for example), but I think there are many Rust users who are not familiar with these precedents and will hence be confused by the name (myself included). At the same time, by keeping the word "select" in the name, it is still easy to search for, for those who do know the precedent (and if "select" is kept as the first part of the name, anyone just looking to type select in their IDE will also be helped by autocomplete).

    I feel this would be a very rustic improvement to make; it should still be easy for veterans to use, but welcomes newcomers at the same time, by optimizing for readability over familiarity.
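
    For readers without the Unix background, this is roughly what the combinator does in the current 0.3 API (a minimal sketch using future::select; the naming question applies equally to the select! macro):

    use futures::executor::block_on;
    use futures::future::{self, Either};

    fn main() {
        // select waits for whichever of the two futures completes first and
        // hands back the other, still-pending future alongside the result.
        let a = future::ready(1u32);
        let b = future::pending::<u32>();

        match block_on(future::select(a, b)) {
            Either::Left((value, _b)) => println!("first future finished first: {}", value),
            Either::Right((value, _a)) => println!("second future finished first: {}", value),
        }
    }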

    breaking change 
    opened by KasMA1990 36
  • futures-macro (pulled in by futures-util) appears not to compile on musl

    futures-macro (pulled in by futures-util) appears not to compile on musl

    I have a CI pipeline that tests my project on glibc and musl and it appears that when I pull in futures-util as a dependency, this compiles just fine on glibc but does not compile on musl.

    Here is the build log: https://travis-ci.org/github/jbaublitz/neli/jobs/738716295

    This appears to be due to futures-macro exporting all of its procedural macros with #[proc_macro_hack] which is not allowed on musl.

    I know that rust and musl have had problems in the past related to proc macros and I'm wondering if this is the right place to report this or if this should go into the rust repo.

    C-question 
    opened by jbaublitz 30
  • Shared can interact badly with futures that don't always poll their subfutures

    Shared can interact badly with futures that don't always poll their subfutures

    (I'm opening this issue to continue the discussion from https://github.com/alexcrichton/futures-rs/pull/305.)

    The problem is that Shared, as it is currently designed, can interact poorly with certain futures, such as the ModedFuture sketched below. We should either formalize some reason why ModedFuture is an invalid implementation of Future, or we should redesign Shared to accommodate this kind of situation.

    extern crate futures;
    extern crate tokio_core;
    
    use futures::{Future, Poll};
    use futures::sync::oneshot;
    use std::rc::Rc;
    use std::cell::RefCell;
    
    enum Mode { Left, Right }
    
    struct ModedFutureInner<F> where F: Future {
        mode: Mode,
        left: F,
        right: F,
        task: Option<::futures::task::Task>,
    }
    
    struct ModedFuture<F> where F: Future {
        inner: Rc<RefCell<ModedFutureInner<F>>>,
    }
    
    struct ModedFutureHandle<F> where F: Future {
        inner: Rc<RefCell<ModedFutureInner<F>>>,
    }
    
    impl <F> ModedFuture<F> where F: Future {
        pub fn new(left: F, right: F, mode: Mode) -> (ModedFutureHandle<F>, ModedFuture<F>) {
            let inner = Rc::new(RefCell::new(ModedFutureInner {
                left: left, right: right, mode: mode, task: None,
             }));
            (ModedFutureHandle { inner: inner.clone() }, ModedFuture { inner: inner })
        }
    }
    
    impl <F> ModedFutureHandle<F> where F: Future {
        pub fn switch_mode(&mut self, mode: Mode) {
            self.inner.borrow_mut().mode = mode;
            if let Some(t) = self.inner.borrow_mut().task.take() {
                // The other future may have become ready while we were ignoring it.
                t.unpark();
            }
        }
    }
    
    impl <F> Future for ModedFuture<F> where F: Future {
        type Item = F::Item;
        type Error = F::Error;
        fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
            let ModedFutureInner { ref mut mode, ref mut left, ref mut right, ref mut task } =
                *self.inner.borrow_mut();
            *task = Some(::futures::task::park());
            match *mode {
                Mode::Left => left.poll(),
                Mode::Right => right.poll(),
            }
        }
    }
    
    pub fn main() {
        let mut core = ::tokio_core::reactor::Core::new().unwrap();
        let handle = core.handle();
    
        let (tx, rx) = oneshot::channel::<u32>();
        let f1 = rx.shared();
        let f2 = f1.clone();
    
        let (mut mfh, mf) = ModedFuture::new(
            Box::new(f1.map_err(|_| ()).map(|v| *v)) as Box<Future<Item=u32, Error=()>>,
            Box::new(::futures::future::empty()) as Box<Future<Item=u32, Error=()>>,
            Mode::Left);
    
        let (tx3, rx3) = oneshot::channel::<u32>();
        handle.spawn(f2.map(|x| tx3.complete(*x)).map_err(|_| ()));
    
        core.turn(Some(::std::time::Duration::from_millis(1)));
    
        handle.spawn(mf.map(|_| ()));
    
        core.turn(Some(::std::time::Duration::from_millis(1)));
    
        mfh.switch_mode(Mode::Right);
    
        tx.complete(11); // It seems like this ought to cause f2 and then rx3 to get resolved.
    
        // This hangs forever.
        core.run(rx3).unwrap();
    }
    
    opened by dwrensha 27
  • How can I wait for a future in multiple threads

    How can I wait for a future in multiple threads

    I can't seem to make this work, perhaps I'm being dumb! In any case, I think this should be in the documentation as an example.

    In my case, I have background processes loading data for a given ID. Obviously if a piece of data is already being loaded then I want to join the existing waiters rather than starting a new background process.

    I've implemented this using a Map<Id, Vec<Complete>>; the first loader then triggers the completion of subsequent loaders when it has completed. This is a lot of boilerplate.

    I've tried all sorts of things to get this to work but somehow I can't get anything else to compile. Waiting for a future consumes it, so I can only do that once. I have tried to replace the future with a new one and wait on that, like I might do in JavaScript, but this also doesn't work.

    If anyone can show me an example then that would be great, if not then I'll probably create an abstraction around my current method and submit this as a pull request for the library.
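
    For reference, later releases of the crate answer this directly with FutureExt::shared, which turns a future into a cloneable handle that can be awaited from several places. A minimal sketch against futures 0.3:

    use futures::executor::block_on;
    use futures::future::FutureExt;

    fn main() {
        // shared() wraps the future so it can be cloned; every clone resolves
        // to (a clone of) the same output once the underlying future completes.
        let fut = async { 42u32 }.shared();
        let a = fut.clone();
        let b = fut;

        let (x, y) = block_on(async { (a.await, b.await) });
        assert_eq!((x, y), (42, 42));
    }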

    opened by jamespharaoh 27
  • Consider supporting Reactive Streams and Reactive Sockets

    Consider supporting Reactive Streams and Reactive Sockets

    I am one of the developers of https://github.com/akka/akka/ and I just stumbled upon this nice library (I am a Rust lurker mostly). Futures are a very nice building block for asynchronous programs, but eventually one reaches the point where some kind of streaming abstraction is needed. The simplest streaming approach is the RX-style chained Observables, however, as we and others found out, with asynchronous call-chains backpressure becomes an issue as blocking is no longer available to throttle a producer. To solve this issue, the reactive-streams (RS) standard has been created: http://www.reactive-streams.org backed by various companies interested in the JVM landscape. This set of interoperability interfaces was designed by multiple teams together. This standard is also on its way to become part of JDK9. There is also an effort to expose its semantics as a wire-level protocol http://reactivesocket.io which nicely complements the RS standard (which mainly focuses on in-JVM asynchronous, ordered, backpressured communications).

    Since I imagine that eventually the need for asynchronous streams will arise here, I think these standards can be interesting for Rust, too. While RS might not be perfect, it was a result of a long design process and now has mostly consensus about it in the JVM land, so it would be nice to see a Rust implementation that is similar enough to be easily connectable to JVMs, maybe via reactive-socket.

    Sorry for the shameless plug :)

    opened by drewhk 27
  • Having both "sync" and "sink" in the API makes verbal discussion v. confusing

    Having both "sync" and "sink" in the API makes verbal discussion v. confusing

    The other day I had a conversation in which we were a few mins in before we realized we were talking at cross purposes because of a sync/sink confusion.

    Maybe rename futures::sync? Or "sink" -> "drain" (or something)?

    opened by jsgf 26
  • Consider allowing futures to require specialized executors?

    Consider allowing futures to require specialized executors?

    Forgive me if this is not the appropriate place to be discussing things like this, but has the possibility of allowing futures to specify what sort of executors they may be spawned on been examined at all? This would allow libraries to support extra features like task thread-affinity or priority.

    The way I'm thinking of doing this would be to make the following changes to Future and Context:

    pub trait Future<S: Spawn + ?Sized = dyn Spawn> {
        type Output;
    
        fn poll(self: PinMut<Self>, cx: &mut Context<S>) -> Poll<Self::Output>;
    }
    
    pub struct Context<'a, S: Spawn + 'a + ?Sized = dyn Spawn> {
        local_waker: &'a LocalWaker,
        spawner: &'a mut S,
    }
    // And appropriate changes to method signatures in Context::new and Context::with_spawner
    

    Existing futures would not need to change because of the defaulted type parameter, but futures could require additional features by changing S to something else. For instance, if a future has a non-send subfuture one could create a new trait SpawnLocal: Spawn which takes LocalFutureObj instead of FutureObj, and implement Future<dyn SpawnLocal>.

    If anyone is interested, I have a sort of proof-of-concept for the changes in libcore here. (It's not an actual fork of libcore though, since I'm not quite sure how to do that.)

    opened by AlphaModder 25
  • Fuse, and therefore other adapters like Peekable, are incompatible with FuturesUnordered

    Fuse, and therefore other adapters like Peekable, are incompatible with FuturesUnordered

    Hi there! So this is a really fun issue I spent some time debugging. I had written an adapter that sits on top of a Peekable<FuturesUnordered<T>>. I had some code that did something like:

    // self.stream is a Peekable<FuturesUnordered<T>>
    
    match self.stream.poll_next(ctx) {
        // do some stuff here
    }
    self.stream.get_mut().push(new_future);
    self.stream.poll_peek(ctx);
    

    I found that if the inner FuturesUnordered dropped to having no futures from the first poll_next, then poll_peek always returns Poll::Ready(None) after that.

    This is because Peekable uses the Fuse adapter internally. Once the inner stream has returned Poll::Ready(None), Fuse never polls it again, even though a FuturesUnordered can yield more items after new futures are pushed onto it.

    I worked around it by writing my own version of Peekable that didn't use Fuse internally. Note that FuturesUnordered already implements FusedStream so I didn't have to change anything about Peekable.

    Potential solutions

    I discussed this issue with a couple of folks and here are some thoughts we had:

    1. Easiest, non-breaking change: add a reset_fuse method to the Fuse adapter, as well as any other adapters that use a Fuse internally. That would let me use get_mut() followed by reset_fuse() to inform the adapter that it should start polling for values again. (There would also be an advocacy component here -- if you write an adapter that uses one that has reset_fuse internally, you must also expose a reset_fuse method.)

    2. Remove the Fuse adapter from Peekable and all other adapters that use Fuse, and replace it with internal logic. This makes a lot of logical sense to me, but it is a breaking change because it won't be possible to maintain the FusedStream for Peekable<St> where St: Stream impl.

      However, it would still be possible to write impl FusedStream for Peekable<St> where St: FusedStream. Also, other people might build their own adapters that use Fuse anyway, so some other workaround like reset_fuse is necessary for ecosystem cohesion.

    3. Add a FusedStreamExt trait and a whole new series of methods peekable_fused etc that work on FusedStreams. This would solve the problem, since FuturesUnordered implements FusedStream, but this seems excessive.

    4. Revisit the definition of FusedStream as a breaking change. FusedStream really isn't like FusedIterator at all -- this was quite surprising to me and to other people I discussed the issue with. I'm not sure what the semantics would be but requiring workarounds like reset_fuse feels like a little bit of a code smell.

    A-stream 
    opened by sunshowers 1
  • impl Future for stream::{Buffered, BufferUnordered, FuturesOrdered}

    impl Future for stream::{Buffered, BufferUnordered, FuturesOrdered}

    This enables poll()ing to fill the buffer without producing or consuming the Stream's outputs.

    Buffered and FuturesOrdered will also poll their inner futures produced by the stream, since they have an output value buffer available. BufferUnordered however will only poll the underlying stream, as it cannot produce outputs without consuming them.

    ~~Given that the Output of the BufferUnordered impl isn't particularly useful (because it doesn't indicate anything about the futures produced by the stream), maybe it should be type Output = Infallible, and never complete? For the other two, the completion is more meaningful as it indicates that the stream is empty and that all captured futures have produced outputs (though in the case of Buffered, it will stall while the buffer remains full).~~

    ~~It may also make sense to expose this as a poll_stream() function rather than impl Future (or otherwise a IntoFuture for &Self or something), for the sake of extension trait method resolution.~~

    EDIT: moved to inherent poll_stream() instead, see latest commit. Final name tbd (buffered.poll_all()? poll_buffer()? poll_fill()?)

    opened by arcnmx 0
  • FuturesUnordered poll ordering guarantees

    FuturesUnordered poll ordering guarantees

    The tokio-postgres crate executes queries in the order in which they are first polled. If two queries depend on each other, then their initial polling order must be the correct order; subsequent polls are irrelevant. Currently, FuturesUnordered initially polls futures in the order they were pushed onto the queue. Can we add this guarantee to the FuturesUnordered documentation?

    opened by david-monroe 0
  • Async transformations on arrays

    Async transformations on arrays

    Would it be possible to support async transformations on arrays?

    async fn map_async<const N: usize, A, B, F, Fut>(
        array: [A; N],
        fun: F,
    ) -> [B; N]
    where
        F: Fn(A) -> Fut,
        Fut: Future<Output = B>;
    

    I ran into a case where I'd like to allocate an array of TCP connections.

    let sockets = map_async(["127.0.0.1:8000", "127.0.0.1:8001"], connect_socket);
    

    Currently I need to write this in the form of a macro.

    let sockets = map_async!(["127.0.0.1:8000", "127.0.0.1:8001"], connect_socket);
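
    Until something like this exists, the closest building block in the crate itself is future::join_all, which preserves input order but returns a Vec rather than a fixed-size array. A rough sketch (assuming a toolchain where arrays have map and IntoIterator):

    use futures::executor::block_on;
    use futures::future::join_all;

    async fn double(x: u32) -> u32 {
        x * 2
    }

    fn main() {
        // join_all drives every future and collects the outputs in order,
        // but the result is a Vec, not a [B; N].
        let outputs = block_on(join_all([1u32, 2, 3].map(double)));
        assert_eq!(outputs, vec![2, 4, 6]);
    }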
    
    opened by segeljakt 0
  • futures-channel fails under Miri

    futures-channel fails under Miri

    Discovered while discussing https://github.com/rust-lang/unsafe-code-guidelines/issues/380, probably related to https://github.com/rust-lang/unsafe-code-guidelines/issues/148.

    This code (playground):

    use std::thread;
    use futures::{channel::mpsc, executor::block_on, sink::SinkExt, stream::StreamExt};
    
    async fn send_sequence(n: u32, mut sender: mpsc::Sender<u32>) {
        for x in 0..n {
            sender.send(n - x).await.unwrap();
        }
    }
    
    fn main() {
        let (tx, rx) = mpsc::channel(1);
        let t = thread::spawn(move || block_on(send_sequence(20, tx)));
        let _list: Vec<_> = block_on(rx.collect());
        t.join().unwrap();
    }
    

    fails under Miri.

    opened by GoldsteinE 5
Releases(0.3.25)
  • 0.3.25(Oct 20, 2022)

  • 0.3.24(Aug 29, 2022)

  • 0.3.23(Aug 14, 2022)

  • 0.3.22(Aug 14, 2022)

    • Fix Sync impl of BiLockGuard (#2570)
    • Fix partial iteration in FuturesUnordered (#2574)
    • Fix false detection of inner panics in Shared (#2576)
    • Add Mutex::lock_owned and Mutex::try_lock_owned (#2571)
    • Add io::copy_buf_abortable (#2507)
    • Remove Unpin bound from TryStreamExt::into_async_read (#2599)
    • Make run_until_stalled handle self-waking futures (#2593)
    • Use FuturesOrdered in try_join_all (#2556)
    • Fix orderings in LocalPool waker (#2608)
    • Fix stream::Chunk adapters size hints (#2611)
    • Add push_front and push_back to FuturesOrdered (#2591)
    • Deprecate FuturesOrdered::push in favor of FuturesOrdered::push_back (#2591)
    • Performance improvements (#2583, #2626)
    • Documentation improvements (#2579, #2604, #2613)
  • 0.3.21(Feb 6, 2022)

  • 0.3.20(Feb 6, 2022)

    • Fix stacked borrows violations when -Zmiri-tag-raw-pointers is enabled. This raises MSRV of futures-task to 1.45. (#2548, #2550)
    • Change FuturesUnordered to respect yielding from future (#2551)
    • Add StreamExt::{flatten_unordered, flat_map_unordered} (#2083)
  • 0.3.19(Dec 18, 2021)

    • Remove unstable read-initializer feature (#2534)
    • Fix panic in FuturesUnordered (#2535)
    • Fix compatibility issue with FuturesUnordered and tokio's cooperative scheduling (#2527)
    • Add StreamExt::count (#2495)
  • 0.3.18(Nov 23, 2021)

    • Fix unusable Sink implementation on stream::Scan (#2499)
    • Make task::noop_waker_ref available without std feature (#2505)
    • Add async LineWriter (#2477)
    • Remove dependency on proc-macro-hack. This raises MSRV of utility crates to 1.45. (#2520)
  • 0.3.17(Aug 30, 2021)

    • Use FuturesOrdered in join_all (#2412)
    • Add {future, stream}::poll_immediate (#2452)
    • Add stream_select! macro (#2262)
    • Implement Default for OptionFuture (#2471)
    • Add Peekable::{peek_mut, poll_peek_mut} (#2488)
    • Add BufReader::seek_relative (#2489)
  • 0.3.16(Jul 23, 2021)

    • Add TryStreamExt::try_chunks (#2438)
    • Add StreamExt::{all, any} (#2460)
    • Add stream::select_with_strategy (#2450)
    • Update to new io_slice_advance interface (#2454)
  • 0.3.15(May 11, 2021)

    • Use #[proc_macro] at Rust 1.45+ to fix an issue where proc macros don't work with rust-analyzer (#2407)
    • Support targets that do not have atomic CAS on stable Rust (#2400)
    • futures-test: Add async #[test] function attribute (#2409)
    • Add stream::abortable (#2410)
    • Add FuturesUnordered::clear (#2415)
    • Implement IntoIterator for FuturesUnordered (#2423)
    • Implement Send and Sync for FuturesUnordered iterators (#2416)
    • Make FuturesUnordered::iter_pin_ref public (#2423)
    • Add SelectAll::clear (#2430)
    • Add SelectAll::{iter, iter_mut} (#2428)
    • Implement IntoIterator for SelectAll (#2428)
    • Implement Clone for WeakShared (#2396)
  • 0.3.14(Apr 10, 2021)

    • Add future::SelectAll::into_inner (#2363)
    • Allow calling UnboundedReceiver::try_next after None (#2369)
    • Reexport non-Ext traits from the root of futures_util (#2377)
    • Add AsyncSeekExt::stream_position (#2380)
    • Add stream::Peekable::{next_if, next_if_eq} (#2379)
  • 0.3.13(Feb 23, 2021)

    • Mitigated starvation issues in FuturesUnordered (#2333)
    • Fixed race with dropping mpsc::Receiver (#2304)
    • Added Shared::{strong_count, weak_count} (#2346)
    • Added no_std support for task::noop_waker_ref (#2332)
    • Implemented Stream::size_hint for Either (#2325)
  • 0.3.12(Jan 15, 2021)

  • 0.3.11(Jan 14, 2021)

  • 0.3.10(Jan 13, 2021)

  • 0.3.9(Jan 8, 2021)

    • Significantly improved compile time when async-await crate feature is disabled (#2273)
    • Added stream::repeat_with (#2279)
    • Added StreamExt::unzip (#2263)
    • Added sink::unfold (#2268)
    • Added SinkExt::feed (#2155)
    • Implemented FusedFuture for oneshot::Receiver (#2300)
    • Implemented Clone for sink::With (#2290)
    • Re-exported MapOkOrElse, MapInto, OkInto, TryFlatten, WriteAllVectored (#2275)
  • 0.3.8(Nov 9, 2020)

    • Switch proc-macros to use native #[proc_macro] at Rust 1.45+ (#2243)
    • Add WeakShared (#2169)
    • Add TryStreamExt::try_buffered (#2245)
    • Add StreamExt::cycle (#2252)
    • Implemented Clone for stream::{Empty, Pending, Repeat, Iter} (#2248, #2252)
    • Fix panic in some TryStreamExt combinators (#2250)
  • 0.3.7(Oct 23, 2020)

  • 0.3.6(Oct 5, 2020)

    • Fixed UB due to missing 'static on task::waker (#2206)
    • Added AsyncBufReadExt::fill_buf (#2225)
    • Added TryStreamExt::try_take_while (#2212)
    • Added is_connected_to method to mpsc::{Sender, UnboundedSender} (#2179)
    • Added is_connected_to method to oneshot::Sender (#2158)
    • Implement FusedStream for FuturesOrdered (#2205)
    • Fixed documentation links
    • Improved documentation
    • futures-test: Added track_closed method to AsyncWriteTestExt and SinkTestExt (#2159)
    • futures-test: Implemented more traits for InterleavePending (#2208)
    • futures-test: Implemented more traits for AssertUnmoved (#2208)
  • 0.3.5(May 8, 2020)

    • Added StreamExt::flat_map.
    • Added StreamExt::ready_chunks.
    • Added *_unpin methods to SinkExt.
    • Added a cancellation() future to oneshot::Sender.
    • Added reunite method to ReadHalf and WriteHalf.
    • Added Extend implementations for Futures(Un)Ordered and SelectAll.
    • Added support for reexporting the join! and select! macros.
    • Added no_std support for the pending! and poll! macros.
    • Added Send and Sync support for AssertUnmoved.
    • Fixed a bug where Shared wasn't relinquishing control to the executor.
    • Removed the Send bound on the output of RemoteHandle.
    • Relaxed bounds on FuturesUnordered.
    • Reorganized internal tests to work under different --features.
    • Reorganized the bounds on StreamExt::forward.
    • Removed and replaced a large amount of internal unsafe.
  • 0.3.4(Feb 7, 2020)

  • 0.3.3(Feb 5, 2020)

  • 0.3.2(Feb 4, 2020)

    • Improved buffering performance of SplitSink (#1969)
    • Added select_biased! macro (#1976)
    • Added hash_receiver method to mpsc channel (#1962)
    • Added stream::try_unfold (#1977)
    • Fixed bug with zero-size buffers in vectored IO (#1998)
    • AtomicWaker::new() is now const fn (#2007)
    • Fixed bug between threadpool and user park/unparking (#2010)
    • Added stream::Peekable::peek (#2021)
    • Added StreamExt::scan (#2044)
    • Added impl of AsyncRead/Write for BufReader/Writer (#2033)
    • Added impl of Spawn and LocalSpawn for Arc<impl Spawn> and Rc<impl Spawn> (#2039)
    • Fixed Sync issues with FuturesUnordered (#2054)
    • Added into_inner method for future::Ready (#2055)
    • Added MappedMutexGuard API (#2056)
    • Mitigated starvation issues in FuturesUnordered (#2049)
    • Added TryFutureExt::map_ok_or_else (#2058)
  • 0.3.1(Nov 7, 2019)

  • 0.3.0(Nov 6, 2019)

    • Stable release along with stable async/await!
    • Added async/await to default features (#1953)
    • Changed Spawn trait and FuturesUnordered::push to take &self (#1950)
    • Moved Spawn and FutureObj out of futures-core and into futures-task (#1925)
    • Changed case convention for feature names (#1937)
    • Added executor feature (#1949)
    • Moved copy_into/copy_buf_into (#1948)
    • Changed SinkExt::send_all to accept a TryStream (#1946)
    • Removed ThreadPool::run (#1944)
    • Changed to use our own definition of io::Cursor (#1943)
    • Removed BufReader::poll_seek_relative (#1938)
    • Changed skip to take a usize rather than u64 (#1931)
    • Removed Stream impl for VecDeque (#1930)
    • Renamed Peekable::peek to poll_peek (#1928)
    • Added immutable iterators for FuturesUnordered (#1922)
    • Made ThreadPool optional (#1910)
    • Renamed oneshot::Sender::poll_cancel to poll_canceled (#1908)
    • Added some missing Clone implementations
    • Documentation fixes
  • 0.3.0-alpha.19(Sep 26, 2019)

    • Stabilized the async-await feature (#1816)
    • Made async-await feature no longer require std feature (#1815)
    • Updated proc-macro2, syn, and quote to 1.0 (#1798)
    • Exposed unstable BiLock (#1827)
    • Renamed "nightly" feature to "unstable" (#1823)
    • Moved to our own io::{Empty, Repeat, Sink} (#1829)
    • Made AsyncRead::initializer API unstable (#1845)
    • Moved the Never type from futures-core to futures-util (#1836)
    • Fixed use-after-free on panic in ArcWake::wake_by_ref (#1797)
    • Added AsyncReadExt::chain (#1810)
    • Added Stream::size_hint (#1853)
    • Added some missing FusedFuture (#1868) and FusedStream implementations (#1831)
    • Added a From impl for Mutex (#1839)
    • Added Mutex::{get_mut, into_inner} (#1839)
    • Re-exported TryConcat and TryFilter (#1814)
    • Lifted Unpin bound and implemented AsyncBufRead for io::Take (#1821)
    • Lifted Unpin bounds on get_pin_mut (#1820)
    • Changed SendAll to flush the Sink when the source Stream is pending (#1877)
    • Set default threadpool size to one if num_cpus::get() returns zero (#1835)
    • Removed dependency on rand by using our own PRNG (#1837)
    • Removed futures-core dependency from futures-sink (#1832)
  • 0.3.0-alpha.18(Aug 9, 2019)

    • Rewrote join! and try_join! as procedural macros to allow passing expressions (#1783)
    • Banned manual implementation of TryFuture and TryStream for forward compatibility. See #1776 for more details. (#1777)
    • Changed AsyncReadExt::read_to_end to return the total number of bytes read (#1721)
    • Changed ArcWake::into_waker to a free function waker (#1676)
    • Supported trailing commas in macros (#1733)
    • Removed futures-channel dependency from futures-executor (#1735)
    • Supported channel::oneshot in no_std environment (#1749)
    • Added Future bounds to FusedFuture (#1779)
    • Added Stream bounds to FusedStream (#1779)
    • Changed StreamExt::boxed to return BoxStream (#1780)
    • Added StreamExt::boxed_local (#1780)
    • Added AsyncReadExt::read_to_string (#1721)
    • Implemented AsyncWrite for IntoAsyncRead (#1734)
    • Added get_ref, get_mut and into_inner methods to Compat01As03 and Compat01As03Sink (#1705)
    • Added ThreadPool::{spawn_ok, spawn_obj_ok} (#1750)
    • Added TryStreamExt::try_flatten (#1731)
    • Added FutureExt::now_or_never (#1747)
  • 0.3.0-alpha.17(Jul 3, 2019)

    • Removed try_ready! macro in favor of ready!(..)?. (#1602)
    • Removed io::Window::{set_start, set_end} in favor of io::Window::set. (#1667)
    • Re-exported pin_utils::pin_mut! macro. (#1686)
    • Made all extension traits unnamed in the prelude. (#1662)
    • Allowed ?Sized types in some methods and structs. (#1647)
    • Added Send + Sync bounds to ArcWake trait to fix unsoundness. (#1654)
    • Changed AsyncReadExt::copy_into to consume self. (#1674)
    • Renamed future::empty to pending. (#1689)
    • Added #[must_use] to some combinators. (#1600)
    • Added AsyncWriteExt::{write, write_vectored}. (#1612)
    • Added AsyncReadExt::read_vectored. (#1612)
    • Added TryFutureExt::try_poll_unpin. (#1613)
    • Added TryFutureExt::try_flatten_stream. (#1618)
    • Added io::BufWriter. (#1608)
    • Added Sender::same_receiver and UnboundedSender::same_receiver. (#1617)
    • Added future::try_select. (#1622)
    • Added TryFutureExt::{inspect_ok, inspect_err}. (#1630)
    • Added Compat::get_ref. (#1648)
    • Added io::Window::set. (#1667)
    • Added AsyncWriteExt::into_sink. (#1675)
    • Added AsyncBufReadExt::copy_buf_into. (#1674)
    • Added stream::pending. (#1689)
    • Implemented std::error::Error for SpawnError. (#1604)
    • Implemented Stream for FlattenSink. (#1651)
    • Implemented Sink for TryFlattenStream. (#1651)
    • Implemented AsyncRead, AsyncWrite, AsyncSeek, AsyncBufRead, FusedFuture and FusedStream for Either. (#1695)
    • Replaced empty enums with Never type, an alias for core::convert::Infallible.
    • Removed the futures-channel dependency from futures-sink and made futures-sink an optional dependency of futures-channel.
    • Renamed Sink::SinkError to Sink::Error.
    • Made a number of dependencies of futures-util optional.
Owner
The Rust Programming Language