futures-concurrency

Overview

Concurrency extensions for Future

Installation

$ cargo add futures-concurrency

Contributing

Want to join us? Check out our "Contributing" guide and take a look at some of the open issues.

License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
Comments
  • Remove `MaybeDone` from tuple::join

    Remove `MaybeDone` from tuple::join

    Ref #22

    Remove the need for MaybeDone on Join for tuples.

    This is a spiritual successor of #74, thank you @yoshuawuyts and @poliorcetics for your reviews and ideas!

    The perf results are really nice now:

    >> critcmp main maybegone -f tuple::join
    group             main                                   maybegone
    -----             ----                                   ---------
    tuple::join 10    1.00    246.2±0.79ns        ? ?/sec    2.86    703.8±1.49ns        ? ?/sec
    

    edit: removes reference to #21, as we're no longer implementing "perfect" waking

    opened by matheus-consoli 7
  • Implement "perfect" waking for `tuple::merge`

    Implement "perfect" waking for `tuple::merge`

    This mimics the implementation for array::merge.

    I used the hack I introduced in #74 in a more generalized form; we can rewrite the gen_conditions in terms of this new macro (basically the same way I do here).

    Ref #21

    opened by matheus-consoli 5
  • Implement perfect waking for array/vec Join

    Implement perfect waking for array/vec Join

    Tries to implement #21 for array and vec Join.

    I’m a bit surprised, as I see a ~400% regression in the benchmarks, though I had a hard time figuring out how the benchmarks work and what they are even measuring.

    Looks like the CountdownFuture used in benchmarks might not be ideal to test for this? To properly test this, we could have a list of futures that wake from back to front, and count the number of times they were polled. With "perfect" waking, they should only be polled twice (initially, and a second time once they are woken). By making the poll "expensive", the wins should be visible in the benchmarks as well.

    opened by Swatinem 2
  • Improve RaceOk error format for array impl

    Improve RaceOk error format for array impl

    Duplicate changes from https://github.com/yoshuawuyts/futures-concurrency/pull/94/files for RaceOk impl.

    Should resolve the second item on https://github.com/yoshuawuyts/futures-concurrency/issues/98

    opened by Alastair-smith2 2
  • Governance while yosh is out of office

    Governance while yosh is out of office

    Hey all,

    Just wanted to share that I'll be out of office starting tomorrow until December 19th. In the interim: please feel free to keep filing issues, make PRs, and merge them too. I trust folks here to exercise their judgement on which things to merge, and make sure changes are reviewed, etc. So far things have been going pretty steady, so please feel free to keep things going the way they are.

    Both @matheus-consoli and @eholk have commit rights and can merge PRs. And @eholk also has the ability to publish new releases in a pinch. I don't expect a new release to have to be made, but should there be some critical issue that needs a hot fix: it's at least possible if I'm not around.

    For anyone new to the project: welcome! I ask you to please be respectful of people's time. Other than me, folks here are really just volunteering their time to help out on this. While they may be kind enough to help in a pinch, they are under absolutely no obligation to do so. Should there be larger issues or design questions: I'll be back in a few weeks and I'll be happy to discuss them then.

    Anyway, I figured I'd let people know so nobody wonders why I'm suddenly radio-silent. See y'all in a few weeks!

    opened by yoshuawuyts 2
  • Remove need for `tuple_for_each!`, `tuple_len!` and simplify `gen_conditions!` a lot

    Remove need for `tuple_for_each!`, `tuple_len!` and simplify `gen_conditions!` a lot

    By using named fields instead of anonymous ones in the $mod_name::Streams structs, we can completely remove the need for the tuple_for_each! macros, which makes the code more maintainable and the crate faster to compile (though that wasn't a limitation at this point).

    I also took the opportunity to remove the tuple_len! macro because recursion is heavy on the compiler. The generated code for this part doesn't slow down at all because it's all const-computed anyway (but instead of producing an imbalanced binary tree, we produce a more "normal" one which is much easier to deal with for rustc).

    Finally, I greatly simplified the gen_conditions! macro by removing the recursion by using the Indexes enum again (that trick is awesome, I'm glad I finally found a crate to share it with 😄), which (again) removes a very unbalanced tree of 0 + 1 + 1 + 1 + ... in the generated code.

    opened by poliorcetics 2
  • Remove `MaybeDone` from `tuple::join`

    Remove `MaybeDone` from `tuple::join`

    Ref #22

    Remove the need for MaybeDone on tuple::join by creating three fields for each future: the future itself, its result, and its state.

    I had to add a dependency on paste to create the $F_fields.

    Bench:

    >> critcmp main patch -f tuple::join
    group             main                                   patch
    -----             ----                                   -----
    tuple::join 10    1.00    232.6±9.13ns        ? ?/sec    1.14    266.0±3.09ns        ? ?/sec
    
    opened by matheus-consoli 2
  • Implement "perfect" waking for `impl Merge for Vec` (2/2)

    Implement "perfect" waking for `impl Merge for Vec` (2/2)

    Continues the work started in https://github.com/yoshuawuyts/futures-concurrency/pull/50, removing the final set of allocations. The strategy used for this is to create an initial WakerList, which we populate with intermediate wakers on the first call to poll. In a future patch we may want to create a static variant of this as well for use in our fixed-length combinators.

    I also took the opportunity to rename StreamWaker to InlineWaker, since we may want to use this mechanism for the other combinators too.

    Benchmarks

    Performance seems to be about 1.5-2x as good as before, which is expected since we no longer allocate on each call to poll:

    ❯ critcmp main patch -f vec::merge
    group              main                                   patch
    -----              ----                                   -----
    vec::merge 10      1.89      5.3±0.15µs        ? ?/sec    1.00      2.8±0.08µs        ? ?/sec        
    vec::merge 100     1.68     75.3±0.56µs        ? ?/sec    1.00     44.8±1.11µs        ? ?/sec        
    vec::merge 1000    1.46      2.8±0.02ms        ? ?/sec    1.00  1946.8±40.07µs        ? ?/sec   
    
    opened by yoshuawuyts 2
  • The scope of the unsafe block can be appropriately reduced

    The scope of the unsafe block can be appropriately reduced

    In this function you use the unsafe keyword for almost the entire function body.

    We should mark unsafe operations more precisely with the unsafe keyword. Keeping unsafe blocks small brings many benefits: for example, when mistakes happen, we can locate any memory-safety errors within a small unsafe block. This is the balance between Safe and Unsafe Rust: the separation is designed to make using Safe Rust as ergonomic as possible, but requires extra effort and care when writing Unsafe Rust.

    Hope this PR can help you. Best regards. References: https://doc.rust-lang.org/nomicon/safe-unsafe-meaning.html and https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html

    opened by cactter 2
  • implement `(a, b, c).race()` in this crate, or somewhere else?

    implement `(a, b, c).race()` in this crate, or somewhere else?

    In https://blog.yoshuawuyts.com/futures-concurrency-3/ there is mention of Future::race extension method in async_std.

    There is also a suggestion that there could be an extension trait that's implemented for tuples, similar to how Merge works currently in this crate: https://docs.rs/futures-concurrency/latest/futures_concurrency/trait.Merge.html#impl-Merge-for-(S0%2C%20S1%2C%20S2%2C%20S3)

    The post also says:

    In this post we're going to take a look at how this mode of concurrency works, take a closer look at the issues select! {} has, discuss Stream::merge as an alternative, and finally we'll look at what the ergonomics might look like in the future. This post is part of the "Futures Concurrency" series. You can find all library code mentioned in this post as part of the futures-concurrency crate.

    I find the idea of a race extension method really compelling, and have told all of my workmates about the blog post, but there doesn't seem to be an implementation of it yet. If someone wanted to implement this trait, would this repo be a reasonable place to put it?

    opened by alsuren 2
  • Initial implementation of `RaceOk` for tuples

    Initial implementation of `RaceOk` for tuples

    Initial impl of RaceOk for tuples

    something to look at:

    I'm using an array as the internal representation of AggregateError, which can be surprising, as one might expect a tuple-like type.

    I opted to use the array because 1) arrays are much easier to work with compared to macro-based variadic tuples; 2) as of this implementation, we're restricting every Future to have the same Output, ergo every error has the same type.

    opened by matheus-consoli 1
  • Waker optimization + O(woken) polling for every combinator except chain

    Waker optimization + O(woken) polling for every combinator except chain

    Performance improvements

    WakerArray/WakerVec now keep track of which futures were woken in a list, so no O(n) search is needed when polling. This makes polling O(woken) instead of O(n). The impact is extremely significant for large combinators.

    Also, the new implementation avoids locking and unlocking the Readiness Mutex in a loop. It copies out the data necessary for iteration once at the beginning of each poll.

    I've also made WakerArray/WakerVec use a single Arc shared between all wakers (without giving up perfect waking) instead of needing a separate Arc for each. This change involves some unsafe work with RawWaker.

    API changes

    Race, TryJoin, and RaceOk for tuples now support futures with heterogeneous results. Racing futures with different output types returns an enum whose variants are the possible outputs. If all outputs have the same type, there is a function to convert the enum to that type.

    RaceOk error types are simplified to be just array/tuple/vec of the errors. I removed the wrapping AggregateError because it can be confusing for tuples (since error types are not necessarily homogeneous anymore).

    Organizational changes

    As part of rewriting the various combinators, I've merged the code for join/try_join/race/race_ok. There is now a crate-private futures::common module with a generic combinator whose behaviors can be controlled to match join/try_join/race/race_ok by a generic type parameter. For tuples, I basically implement try join and make every other combinator delegate to that.

    I've also upped the edition to Rust 2021. This is not really necessary but the disjoint closure capture saves a few lines of code.

    I renamed "Readiness" to "Awakeness" because the former gets confusing since Poll/PollState::Ready means the future is complete rather than awake.

    Benchmark

    Currently, CountdownFuture/Stream wake and complete in perfect order. This is unrealistic. I added shuffling (with a fixed seed) so that they better represent real workload.

    Below: after = this PR before = origin/main with countdown shuffling commit cherry-picked

    $ critcmp  after before
    group                after                                  before
    -----                -----                                  ------
    array::join 10       1.00  1854.5±12.26ns        ? ?/sec    1.20      2.2±0.02µs        ? ?/sec
    array::join 100      1.00     20.6±0.16µs        ? ?/sec    1.17     24.1±0.28µs        ? ?/sec
    array::join 1000     1.00    219.4±3.13µs        ? ?/sec    5.67  1242.9±19.53µs        ? ?/sec
    array::merge 10      1.00  1828.5±29.31ns        ? ?/sec    1.32      2.4±0.05µs        ? ?/sec
    array::merge 100     1.00     19.9±0.26µs        ? ?/sec    2.15     42.8±3.00µs        ? ?/sec
    array::merge 1000    1.00    225.9±3.09µs        ? ?/sec    8.98      2.0±0.04ms        ? ?/sec
    array::race 10       1.07  1138.1±18.48ns        ? ?/sec    1.00  1061.1±43.17ns        ? ?/sec
    array::race 100      1.00      7.9±0.09µs        ? ?/sec    1.41     11.1±0.17µs        ? ?/sec
    array::race 1000     1.00     82.5±1.13µs        ? ?/sec    1.58    130.6±2.91µs        ? ?/sec
    tuple::join 10       1.00  1912.5±29.00ns        ? ?/sec    1.14      2.2±0.03µs        ? ?/sec
    tuple::merge 10      1.00      2.2±0.03µs        ? ?/sec    1.23      2.7±0.06µs        ? ?/sec
    tuple::race 10       1.15  1134.5±20.99ns        ? ?/sec    1.00   987.5±13.09ns        ? ?/sec
    vec::join 10         1.00      2.3±0.06µs        ? ?/sec    1.08      2.5±0.05µs        ? ?/sec
    vec::join 100        1.00     18.0±0.19µs        ? ?/sec    2.05     36.9±0.47µs        ? ?/sec
    vec::join 1000       1.00    202.1±1.68µs        ? ?/sec    9.45  1909.9±161.66µs        ? ?/sec
    vec::merge 10        1.00      2.4±0.04µs        ? ?/sec    1.18      2.8±0.05µs        ? ?/sec
    vec::merge 100       1.00     20.8±0.25µs        ? ?/sec    2.57     53.3±1.27µs        ? ?/sec
    vec::merge 1000      1.00    222.8±3.95µs        ? ?/sec    12.86     2.9±0.05ms        ? ?/sec
    vec::race 10         1.33  1349.1±15.53ns        ? ?/sec    1.00  1011.6±13.01ns        ? ?/sec
    vec::race 100        1.00      7.3±0.10µs        ? ?/sec    1.45     10.6±0.14µs        ? ?/sec
    vec::race 1000       1.00     72.0±1.64µs        ? ?/sec    1.70    122.7±1.75µs        ? ?/sec
    
    
    opened by wishawa 1
  • Implement PinnedDrop for `impl RaceOk for Tuple`

    Implement PinnedDrop for `impl RaceOk for Tuple`

    https://github.com/yoshuawuyts/futures-concurrency/pull/109 implemented RaceOk for tuples, but is missing a PinnedDrop implementation. This is needed since we have a MaybeUninit structure which holds the errors. If we get an error once, store it, and then cancel the RaceOk call - we've now leaked data - since MaybeUninit types need to be manually de-initialized on drop.

    bug 
    opened by yoshuawuyts 2
  • Fair chaining APIs

    Fair chaining APIs

    Now that #104 exists to close https://github.com/yoshuawuyts/futures-concurrency/issues/85, there is a real question about fairness and chaining. I've made the case before that in order to guarantee fairness, the scheduling algorithm needs to know about all types it operates on. When we were still using permutations and even just rng-based starting points, I believed this to be true. But I'm slowly coming around to @eholk's idea that this may not be the case.

    Benefits

    If we resolve this, I think we may be able to improve our ergonomics. Take for example the following code, which I believe to be quite representative of futures-concurrency's ergonomics:

    let streams = (
        socket_stream.map(Either::Response),
        route_rx.stream().map(Either::Request),
        pinger.map(|_| Either::Ping),
    );
    let mut merged = streams.merge();
    while let Some(either) = merged.next().await { ... }
    

    The tuple instantiation imo looks quite foreign. In this repo's style, we'd probably instead choose to name the futures, flattening the operation somewhat:

    let a = socket_stream.map(Either::Response);
    let b = route_rx.stream().map(Either::Request);
    let c = pinger.map(|_| Either::Ping);
    
    let mut merged = (a, b, c).merge();
    while let Some(either) = merged.next().await { ... }
    

    But while I'd argue this is more pleasant to read, we can't expect people to always do this. The earlier example is often easier to write, and thus will be written as such. But a chaining API could probably be easier still to author:

    let mut merged = socket_stream
        .map(Either::Response)
        .merge(route_rx.stream().map(Either::Request))
        .merge(pinger.map(|_| Either::Ping));
    
    while let Some(either) = merged.next().await { ... }
    

    We used to have this API, but we ended up removing it. And I think there's definitely a case to be made to add this back. Just like we'd be expected to have both async_iter::AsyncIterator::chain and impl async_iter::Chain for tuple, so could we have both variants of merge.

    Implementation

    I'd love to hear more from @eholk here. My initial hunch is that something like ExactSizeIterator could help us: rather than returning how many items are contained in an iterator, it'd return the number of iterators contained within. That way outer iterators can track how often they should call inner iterators before moving on. I think this may need specialization to work, though.

    I think even if we can't make the API strictly fair, it might still be worth adding the chaining API - and we can possibly resolve the fairness issues in the stdlib? Or maybe we can add a nightly flag with the specializations on it as part of this lib? Do folks have thoughts on this?

    enhancement 
    opened by yoshuawuyts 2
  • Improve error handling `race_ok` variants

    Improve error handling `race_ok` variants

    We should duplicate the pretty printing logic of https://github.com/yoshuawuyts/futures-concurrency/pull/94 for the remainder of the race_ok types.

    Tasks

    • [x] impl RaceOk for Vec
    • [x] impl RaceOk for array
    • [ ] impl RaceOk for tuple
    enhancement good first issue 
    opened by yoshuawuyts 0
Releases (v7.0.0)
  • v7.0.0 (Nov 17, 2022)

    Highlights

    Futures-concurrency is a library prototyping async/.await-based concurrency operations intended for inclusion in the stdlib. This effort is led by Yosh, as part of the Rust Async WG.

    Library Priorities

    As part of the Rust Async WG we're working towards a proposal for a base set of "futures concurrency" operations we can add to the stdlib. This library serves as a testing ground for those APIs, enabling us to experiment separately from the stdlib, and letting users of stable Rust use the futures concurrency operators without relying on nightly.

    What's changed in this release over v6.0.0 is that we recognize that this library isn't just a prototype for the stdlib. It needs to be usable standalone as well. This means rather than using core::async_iter::AsyncIterator, we opt to use the ecosystem-standard futures_core::Stream instead. This enables this library to be used in production, enabling testing and benchmarking to take place, all while still working towards inclusion in the stdlib.

    Performance

    Another focus in this release has been on performance. We've added a modest suite of benchmarks to the repository, and used that to track our performance optimizations. The main performance optimization we've found is the implementation of "perfect waking". When a futures-concurrency future is woken, it now only wakes the futures which were scheduled to be woken - and no more. Together with some other optimizations this has led to drastic performance improvements across the APIs where we've implemented it:

    group                 v6.0.1 (prev release)    v7.0.0 (curr release)
    array::merge 10       1.46      4.2±0.28µs     1.00      2.9±0.05µs
    array::merge 100      7.60   365.5±13.01µs     1.00     48.1±1.44µs
    array::merge 1000     20.88    39.2±1.49ms     1.00  1877.5±57.35µs
    vec::merge 10         1.73      4.9±0.31µs     1.00      2.9±0.05µs
    vec::merge 100        8.33    360.6±3.84µs     1.00     43.3±0.80µs
    vec::merge 1000       17.67    38.0±0.41ms     1.00      2.2±0.06ms
    

    We've only implemented it for two APIs so far - but that already has shown a 30%-95% performance improvement. They're synthetic benchmarks, and probably not entirely representative. But the fact that we're making headway in removing unnecessary wakes has knock-on effects as well. If for whatever reason there are futures in a codebase where spurious waking may be expensive (not ideal, but we've seen it happen), this will ensure that doesn't end up negatively affecting performance. Meaning depending on the scenario, the benefits here may actually show up in application performance.

    Zip and Chain

    We've started work on two new traits: stream::Zip and stream::Chain. These now exist in addition to stream::Merge, providing control over order of execution. Unlike their stdlib Iterator counterparts, we've chosen to expose this functionality as separate traits. Not only does that enable seamless switching between merge, zip, and chain semantics; it also prevents tuple-nesting when combining more than two streams:

    let s1 = stream::repeat(0).take(5);
    let s2 = stream::repeat(1).take(5);
    let s3 = stream::repeat(2).take(5);
    
    // With the `StreamExt::zip` method
    let mut s = s1.zip(s2).zip(s3);
    while let Some(((n1, n2), n3)) = s.next().await {
        assert_eq!((n1, n2, n3), (0, 1, 2));
    }
    
    // With the `Zip` trait
    let mut s = (s1, s2, s3).zip();
    while let Some((n1, n2, n3)) = s.next().await {
        assert_eq!((n1, n2, n3), (0, 1, 2));
    }
    

    Support for arrays and vectors has landed, with support for tuples expected soon. This requires generating some macros like we have for merge already, and we just haven't quite gotten around to that yet. Soon!

    Changelog

    Added

    • Implement stream::Merge for tuples up to length 12 by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/19
    • Implement Join for 0 and 1 length tuples by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/40
    • Init zip and chain traits by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/73

    Changed

    • Remove IntoFuture definition by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/31
    • Revert away from async traits by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/24
    • Remove the custom Stream definition by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/35
    • Format debug output by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/36
    • Make implementation of Race for tuple fair by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/58

    Fixed

    • Make impl of Join for single element tuple return single elmnt tuple by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/86

    Internal

    • Add miri to ci by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/27
    • Remove MaybeDone from impl Join for Vec by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/29
    • Expose tuple futures from path by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/26
    • Re-export the structs by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/34
    • Init bench by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/46
    • Reuse inline wakers by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/51
    • Implement "perfect" waking for impl Merge for Vec (1/2) by @eholk in https://github.com/yoshuawuyts/futures-concurrency/pull/50
    • Add more benchmarks by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/52
    • Fix clippy warnings by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/54
    • Switch Readiness to use bitvec by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/53
    • Fix bench saving by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/56
    • Clippying ci by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/55
    • Remove duplication of gen_conditions by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/59
    • Use array::from_fn to safely create output array of MaybeUninit by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/60
    • Inline random number generator by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/61
    • Streamline futures benches by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/64
    • Implement "perfect" waking for impl Merge for Vec (2/2) by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/57
    • Avoid allocation for small length in PollState tracking by @michaelwoerister in https://github.com/yoshuawuyts/futures-concurrency/pull/78
    • Remove Fuse from impl Merge for array by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/70
    • Make {array,vec}::race fair by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/71
    • Remove MaybeDone from impl Join for array by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/72
    • Inline poll_states and remove Fuse for vec::merge by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/79
    • Author initial comparative benchmarks by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/80
    • Push fixes from #79 by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/89
    • Perfect waker for array::Merge by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/75
    • Provide const variants of the utils/ types by @yoshuawuyts in https://github.com/yoshuawuyts/futures-concurrency/pull/81
    • Hide PollState variants by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/92
    • Move test channel to utils by @matheus-consoli in https://github.com/yoshuawuyts/futures-concurrency/pull/91

    New Contributors

    • @eholk made their first contribution in https://github.com/yoshuawuyts/futures-concurrency/pull/50
    • @matheus-consoli made their first contribution in https://github.com/yoshuawuyts/futures-concurrency/pull/54
    • @michaelwoerister made their first contribution in https://github.com/yoshuawuyts/futures-concurrency/pull/78

    Full Changelog: https://github.com/yoshuawuyts/futures-concurrency/compare/v6.0.1...v7.0.0
    Read the Documentation: https://docs.rs/futures-concurrency/7.0.0/
