Tools for concurrent programming in Rust

Overview

Crossbeam


This crate provides a set of tools for concurrent programming:

Atomics

  • AtomicCell, a thread-safe mutable memory location. (no_std)
  • AtomicConsume, for reading from primitive atomic types with "consume" ordering. (no_std)

Data structures

  • deque, work-stealing deques for building task schedulers.
  • ArrayQueue, a bounded MPMC queue that allocates a fixed-capacity buffer on construction. (alloc)
  • SegQueue, an unbounded MPMC queue that allocates small buffers, segments, on demand. (alloc)

Memory management

  • epoch, an epoch-based garbage collector. (alloc)

Thread synchronization

  • channel, multi-producer multi-consumer channels for message passing.
  • Parker, a thread parking primitive.
  • ShardedLock, a sharded reader-writer lock with fast concurrent reads.
  • WaitGroup, for synchronizing the beginning or end of some computation.

Utilities

  • Backoff, for exponential backoff in spin loops. (no_std)
  • CachePadded, for padding and aligning a value to the length of a cache line. (no_std)
  • scope, for spawning threads that borrow local variables from the stack.

Features marked with (no_std) can be used in no_std environments.
Features marked with (alloc) can be used in no_std environments, but only if the alloc feature is enabled.
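
As a quick taste of how these pieces fit together, here is a minimal sketch (assuming crossbeam 0.8 with default features) that combines scope, an unbounded channel, and AtomicCell:

    use crossbeam::atomic::AtomicCell;
    use crossbeam::channel;
    use crossbeam::thread;

    fn main() {
        let counter = AtomicCell::new(0usize);
        let (tx, rx) = channel::unbounded();

        // Scoped threads may borrow `counter` and `tx` from the enclosing stack frame.
        thread::scope(|s| {
            for i in 0..4 {
                let tx = tx.clone();
                let counter = &counter;
                s.spawn(move |_| {
                    counter.fetch_add(1);
                    tx.send(i).unwrap();
                });
            }
            drop(tx); // drop the last sender so the receive loop below terminates

            for msg in rx.iter() {
                println!("received {}", msg);
            }
        })
        .unwrap();

        assert_eq!(counter.load(), 4);
    }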

Crates

The main crossbeam crate just re-exports tools from smaller subcrates:

  • crossbeam-channel provides multi-producer multi-consumer channels for message passing.
  • crossbeam-deque provides work-stealing deques, which are primarily intended for building task schedulers.
  • crossbeam-epoch provides epoch-based garbage collection for building concurrent data structures.
  • crossbeam-queue provides concurrent queues that can be shared among threads.
  • crossbeam-utils provides atomics, synchronization primitives, scoped threads, and other utilities.

There is one more experimental subcrate that is not yet included in crossbeam:

  • crossbeam-skiplist provides concurrent maps and sets based on lock-free skip lists.

Usage

Add this to your Cargo.toml:

[dependencies]
crossbeam = "0.8"
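
Then, to get a feel for the message-passing side, a minimal sketch (assuming the crate's re-exported select! macro and the channel module shown above):

    use crossbeam::channel::{after, unbounded};
    use crossbeam::select;
    use std::time::Duration;

    fn main() {
        let (tx, rx) = unbounded();
        tx.send("hello").unwrap();

        // select! waits on several channel operations and runs whichever is ready first.
        select! {
            recv(rx) -> msg => println!("got {:?}", msg),
            recv(after(Duration::from_millis(100))) -> _ => println!("timed out"),
        }
    }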

Compatibility

Crossbeam supports stable Rust releases going back at least six months, and every time the minimum supported Rust version is increased, a new minor version is released. Currently, the minimum supported Rust version is 1.36.

Contributing

Crossbeam welcomes contribution from everyone in the form of suggestions, bug reports, pull requests, and feedback. 💛

If you need ideas for contributing, the sections below are good places to start.

RFCs

We also have the RFCs repository for more high-level discussion, which is the place where we brainstorm ideas and propose substantial changes to Crossbeam.

You are welcome to participate in any open issues or pull requests.

Learning resources

If you'd like to learn more about concurrency and non-blocking data structures, there's a list of learning resources in our wiki, which includes relevant blog posts, papers, videos, and other similar projects.

Another good place to visit is merged RFCs. They contain elaborate descriptions and rationale for features we've introduced to Crossbeam, but keep in mind that some of the written information is now out of date.

Conduct

The Crossbeam project adheres to the Rust Code of Conduct. This describes the minimum behavior expected from all contributors.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Some Crossbeam subcrates have additional licensing notices. Take a look at other readme files in this repository for more information.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Comments
  • Implement PartialEq + Eq for channel

    As the title says, would it be possible/acceptable to implement PartialEq and Eq for the channel types?

    I'm wrapping a Sender in another type which I would like to implement Eq for. If approved I'll take a look at implementing it.
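
    For reference, a common workaround until then is to compare a wrapper by an explicit id rather than by the Sender itself; a hedged sketch (TaggedSender and its id field are made up for illustration):

    use crossbeam_channel::Sender;

    // Wrap the Sender together with an id chosen at construction time and
    // compare wrappers by that id, since Sender<T> itself is not Eq today.
    struct TaggedSender<T> {
        id: u64,
        inner: Sender<T>,
    }

    impl<T> PartialEq for TaggedSender<T> {
        fn eq(&self, other: &Self) -> bool {
            self.id == other.id
        }
    }

    impl<T> Eq for TaggedSender<T> {}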

    feature question crossbeam-channel 
    opened by Thomasdezeeuw 48
  • Add preliminary support for loom

    This patch only adds support to parts of utils and to epoch. Some parts of utils had to be left out, since they rely on AtomicUsize::new being const (which it is not in loom). Other parts had to be left out due to the lack of thread::Thread in loom. All the parts needed for epoch were successfully moved to loom.

    The one loom test currently passes! I've added a loom pass on crossbeam-epoch to CI.

    Also, note that the uses of UnsafeCell in utils have not been moved to loom::cell::UnsafeCell, as loom's UnsafeCell does not support T: ?Sized, which AtomicCell depends on.

    Fixes #486.
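
    For context, this kind of integration usually relies on conditional compilation behind a loom cfg flag, roughly like the following sketch (not the exact layout of this patch):

    // Pick the atomic type at compile time: the real one normally, loom's
    // instrumented one when building tests with `--cfg loom`.
    #[cfg(not(loom))]
    pub(crate) use core::sync::atomic::AtomicUsize;
    #[cfg(loom)]
    pub(crate) use loom::sync::atomic::AtomicUsize;

    // A loom test then systematically explores interleavings of a small program.
    #[cfg(loom)]
    #[test]
    fn loom_smoke_test() {
        loom::model(|| {
            // concurrent code under test goes here
        });
    }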

    opened by jonhoo 36
  • Use more relaxed orderings in chase_lev

    I wasn't sure what to do with (most of) the orderings in resize-related methods, so I just left them as SeqCst.

    It would be nice to have benchmarks here. I am also not sure that my merging of techniques from the two papers is completely accurate, although all tests pass on my machine.

    I additionally changed the operations on top and bottom to be wrapping, but this shouldn't cause any issues.
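
    On the wrapping point: since only the difference between the two indices matters, two's-complement wraparound is harmless as long as the length is computed with a wrapping subtraction. A minimal illustration, with the atomics elided:

    // top and bottom only ever grow; the deque length is their difference,
    // which stays correct even when the raw counters wrap around.
    fn len(bottom: isize, top: isize) -> isize {
        bottom.wrapping_sub(top)
    }

    fn main() {
        let top = isize::MAX;             // counter is about to wrap
        let bottom = top.wrapping_add(3); // three pushes past the wrap point
        assert_eq!(len(bottom, top), 3);
    }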

    opened by rphmeier 36
  • Improving the garbage collector

    Hi folks! :)

    I'd like to draw attention to this rayon issue: nikomatsakis/rayon#195. There we see rayon suffering from bad performance after switching from the deque crate to crossbeam.

    I've summarized in this comment what's wrong with crossbeam's epoch system and deque implementation. It suffers from multiple problems, but all of those are fixable. Some are easy, and some will require a lot of effort.

    While crossbeam has recently been going through a refactoring phase, I've designed a new epoch system from scratch. I could write a lot about its inner workings and the decisions that were made. The short story is that it's superior to crossbeam's in pretty much all areas:

    1. Pinning has much smaller overhead.
    2. The Atomic API is more ergonomic.
    3. Atomics can be tagged with several bits.
    4. Memory reclamation is incremental (no long pauses!).
    5. Data structure-local garbage queues are supported.
    6. Memory reclamation supports destructors and arbitrary destruction routines.
    7. Garbage can be eagerly collected by calling flush().
    8. Garbage is automatically flushed after 128 pinnings so that it doesn't get stuck for a long time.
    9. Rather than rotating garbage through 3 vectors where 2 are blocked, it is rotated through 2^63 vectors where two are blocked. This means more eager memory reclamation.
    10. Thread-local linked-list entries are properly deleted on thread exit.

    To solve all those problems, a complete rewrite was needed. Unfortunately, I don't know how to patch crossbeam's existing epoch system implementation to fix all of those - it really needs a fresh start.

    You can check out my new crate here: https://github.com/stjepang/coco There's a finished deque implementation that fixes all the problems rayon had with crossbeam. And it's much faster than crossbeam! :)

    Here's what I'm aiming to implement in the very near future:

    1. A Treiber stack (crossbeam's has slow CAS loops and leaks memory when dropped).
    2. A generic and easy-to-use lock-free skiplist: https://github.com/stjepang/skiplist (WIP)
    3. Fully featured bounded and unbounded MPMC channels with support for select macro: https://github.com/stjepang/channel (WIP)

    By the way, I've also developed a faster equivalent to MsQueue that also supports the pop operation with timeouts. In the end I deleted it because I want to write an even better queue. :)

    After that I'd like to explore several other types of maps, including hash maps. In order to implement all this, a solid and very performant epoch system is required - and now I have it! The design was evolving for some time but now it has finally settled on one that seems just right.

    I'm writing all this to ask the crossbeam team how they feel about it. @aturon has politely asked me not to split the crate ecosystem if possible, and I'd like to respect that. Ideally, it'd be best to merge the improvements into crossbeam. And for that to happen we need to reach a consensus as a team.

    In any case, I'll keep using coco to develop new data structures until crossbeam gets a better epoch system. Then we can pretty easily port its data structures to crossbeam, as far as I'm concerned.

    Please don't get me wrong, I don't want to be too harsh on crossbeam. This is a magnificent project, a huge inspiration for my work, and I believe @aturon did a fantastic job designing it! Thank you for that! :)

    That said, my position is that 90% of it has to be scrapped and reimplemented if we want to solve the existing problems and move on. If you're skeptical about a big rewrite, perhaps I could simply keep developing coco until we see whether my ideas turn out to be any good? At least it's something the crossbeam team should keep an eye on...

    What do you think?
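
    For concreteness, points 4 and 7 above map onto pinning, deferred destruction, and flush() in today's crossbeam-epoch API; a minimal sketch using the current crate (not the coco code discussed here):

    use crossbeam_epoch::{self as epoch, Atomic, Owned};
    use std::sync::atomic::Ordering;

    fn main() {
        let a = Atomic::new(42);

        // Pinning registers this thread as a potential holder of epoch-protected pointers.
        let guard = epoch::pin();

        // Swap in a new value and defer destruction of the old one until no
        // pinned thread can still be reading it.
        let old = a.swap(Owned::new(7), Ordering::AcqRel, &guard);
        unsafe { guard.defer_destroy(old) };

        // Eagerly hand the deferred garbage to the global collector (point 7 above).
        guard.flush();

        // (The value left inside `a` is leaked here for brevity.)
    }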

    opened by ghost 32
  • Future/Stream implementation for channel

    I suppose I'm reopening an issue on the archived dedicated channels repo (otherwise I'd just make a comment).

    @stjepang mentioned here that it would be easier to reimplement channels from scratch with Future support in mind ahead of time than to add Future support to the extant library.

    I have two questions:

    1. Is this on any public roadmap? I don't believe I know nearly enough about async in Rust at the moment to contribute valuably.
    2. If one were to implement a Stream naively using a crossbeam channel, would it not be efficient? I see an implementation exists as a benchmark, at least for some usages of the channel.

    The primary reason I ask this for my own needs is that there appear to be no stable, working MPMC channels that also implement Stream. My application essentially wants a stream of events that occur at a rate of approximately 5 events per millisecond, with a couple dozen listeners to that same stream of events. I've created a mini benchmark (not good; no need to share in my opinion, but if anyone is curious I can make it public) for using a bunch of futures::sync::mpsc::channels, one for each listener, and publishing on each of those Senders. It seems, given the benchmarks, that it might actually not be terribly inefficient (I can add 100 listeners, then publish 10 messages per millisecond for 10 milliseconds, in about 12 ms), but I'd rather do things the right way with a Receiver: Clone + Stream, if I can.
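
    On question 2: until there is native support, one stopgap is to bridge a crossbeam Receiver into a Stream by forwarding on a dedicated thread. A hedged sketch assuming the modern futures crate (it costs one thread plus an extra hop per message):

    use futures::channel::mpsc;
    use futures::stream::Stream;

    // Forward everything from a blocking crossbeam receiver into an async-aware
    // channel; the returned receiver implements Stream.
    fn into_stream<T: Send + 'static>(
        rx: crossbeam_channel::Receiver<T>,
    ) -> impl Stream<Item = T> {
        let (tx, stream) = mpsc::unbounded();
        std::thread::spawn(move || {
            for item in rx.iter() {
                if tx.unbounded_send(item).is_err() {
                    break; // the async side has gone away
                }
            }
        });
        stream
    }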

    opened by rivertam 26
  • Patch release for all crossbeam crates.

    This bumps the patch version for all sub-crates and the top-level crate and updates the changelogs with references to PRs where applicable. As far as I can tell, there are no breaking changes. The top-level crate has not changed at all, though I bumped rand to 0.7 in all the crates, which I guess technically requires a patch bump if we do another release.

    Fixes #473. Closes #472. Closes #468. Closes #409. Not sure if this also solves #347 by virtue of #458?

    opened by jonhoo 25
  • Major-release candidate

    Changes

    • [x] Remove scoped threads.
    • [x] Remove chase_lev as it is too complex to be in the crossbeam core crate (I plan on making a side crate for this).
    • [x] Get rid of Owned<T> in favor of Box<T>.
    • [x] Give every single item detailed documentation.
    • [x] Write nontrivial code in semi-literate style.
    • [x] Refactor all the code to go from decent code to well above average code.
    • [x] Fix several bugs.
    • [x] Allow creation of Shared<T> from Guard.
    • [x] Add owned guards through Pinned.
    • [x] Remove the mem module, moving its items to the crate root.
    • [x] Remove ArcCell (eventually moved to the secondary crate).
    • [x] Quad-double the GC threshold to improve performance.
    • [x] Audit all the code.
    • [x] Rename CAS and CAS prefixed methods to compare-and-set.
    • [x] Get rid of cas().
    • [x] Add real compare-and-swap.
    • [x] Remove AtomicOption, which basically mirrored epoch::Atomic.
    • [x] Remove all the deprecated methods.
    • [x] Follow the naming conventions of libstd, renaming try_pop to pop.
    • [x] Rename pop/push on queues to queue/dequeue to mark that queues are queues, not stacks.
    • [x] Remove 3 use-after-frees in Atomic<T>.
    • [x] Remove several memory leaks all around the code.
    • [x] Remove the Scala benchmark, which seems to have no relevance other than comparison.
    • [x] Import modules over direct items.
    • [x] Add Default to most of the types.
    • [x] Add map to Shared<T>, allowing internal referencing with owning_ref.
    • [x] Use lazy_static rather than manual implementation of it.
    • [ ] Get rid of CachePadded

    (was that all?)

    Right now, I'm going to get some sleep. I don't have much time tomorrow, so I'd ask someone to help me with this: the tests are broken, from what I suspect is a simple bug in CachePadded, but I'm too tired to figure it out.

    opened by ticki 25
  • Panic on `send` in channels by default?

    Hi! This is somewhat of a continuation of the "should dropping receiver close the sender" issue. I've migrated rust-analyzer to crossbeam-channel 0.3, and the thing I've noticed is that every .send is followed by .unwrap. Perhaps we should make this unwrapping behavior the default, and introduce a separate checked_send which returns a Result?

    Sending to a channel without receivers seems to indicate a programmer error in the majority of cases: there's either a bug in the communication structure such that receivers exit too early (direct bug), or a receiver is closed due to a panic (indirect bug). In Rust it is OK to panic on bugs by default, while providing checked alternatives:

    • indexing a vector with [] panics on out of bounds, and .get provides checked access
    • in debug builds, integer overflow panics, and checked_add provides the checked alternative

    Returning a result by default seems to be bad:

    • it is confusing and is not a "pit of success": with Result<T, SendError> the user has choices:
      • forward the error (boilerplate which just moves the problem to the wrong place)
      • handle the error, but correct handling is impossible in most cases
      • ignore the error, which is OK (~~0.2 semantics~~ EDIT: not exactly, 0.2 blocked), but not optimal: it's better to see a panic than a deadlock
      • unwrap: this I think is correct (it propagates bugs), but it's a hard choice to make, especially for a novice user, because "unwraps are bad"
    • it significantly devalues unwrap: unwrap is indeed bad in a lot of places, and it's much easier to spot one during code review if there are fewer of them.
    • unwrap happens in the wrong place. If we panic inside crossbeam, we can set a custom message of "sending to a closed channel" instead of a generic "unwrapped an Err".

    As a counterpoint, a similar API is exposed by std's mutexes. They return Results by default, and almost all uses of .lock() are followed by .unwrap/.expect. I would argue that this design is also less than ideal, and that std should have panicked by default, providing checked_lock methods for the rare cases when you do want to handle this somehow.
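
    To make the two designs concrete, a small sketch (checked_send here is the hypothetical method proposed above, not an existing API):

    use crossbeam_channel::unbounded;

    fn main() {
        let (tx, rx) = unbounded();

        // Today: send returns Result<(), SendError<T>>, so callers typically write:
        tx.send(1).unwrap();

        // Proposed: send panics on a closed channel with a dedicated message,
        // and a checked variant exists for the rare cases that want the Result:
        //
        //     tx.send(1);          // would panic if all receivers were gone
        //     tx.checked_send(1)?; // hypothetical fallible variant
        //
        assert_eq!(rx.recv(), Ok(1));
    }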

    question crossbeam-channel design 
    opened by matklad 24
  • Miri: enable preemption again

    Miri has a work-around for https://github.com/rust-lang/rust/issues/55005 now.

    Also, "check-number-validity" is on by default these days. And I am not sure why "compare-exchange-weak-failure-rate=0.0" was set so let's see what happens when we remove it. :)

    opened by RalfJung 20
  • channel: implementation for receiving multiple items at once

    This PR implements try_recv_many, which allows a caller to receive multiple items in a single call. It is provided generally for all channels, but has a specific implementation override for bounded channels, and a few caveats:

    • there are no timeout-based or blocking variants
    • this will do almost nothing for some channels like after/zero because it uses the non-blocking try semantics
    • callers must allocate an Extend capable container themselves
    • mixed recv/recv_many performance may be suboptimal

    This method fills a niche use case: single consumers that can benefit from grabbing and processing multiple items at a time.
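
    For comparison, roughly the same effect is available today with a small helper over try_recv; the dedicated method can beat this by batching inside the channel. A sketch (drain_into is a made-up name):

    use crossbeam_channel::Receiver;

    // Drain up to `limit` already-available messages into `buf`, never blocking.
    fn drain_into<T>(rx: &Receiver<T>, buf: &mut Vec<T>, limit: usize) -> usize {
        let mut received = 0;
        while received < limit {
            match rx.try_recv() {
                Ok(msg) => {
                    buf.push(msg);
                    received += 1;
                }
                Err(_) => break, // channel is empty or disconnected
            }
        }
        received
    }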

    feature feedback wanted crossbeam-channel design 
    opened by tobz 20
  • Implement PartialEq and Eq for channels

    Currently, comparison of the never receiver always returns false, as it is not possible to determine whether two receivers are equal using the current structure.

    I've added a test to compare never receivers, but it is disabled as it currently doesn't pass.

    Closes #319.

    opened by Thomasdezeeuw 18
  • Confusing `compare_exchange` api in `crossbeam_epoch::Atomic`

    The current api for compare_exchange in crossbeam_epoch::Atomic is confusing because it differs from other implementations of compare-and-swap operations. Traditionally, on success, we expect the previously stored value to be returned. This is not the case with the current implementation.

    crossbeam_epoch api (from the docs):

    The return value is a result indicating whether the new pointer was written. On success the pointer that was written is returned. On failure the actual current value and new are returned.

    std api (from the docs):

    The return value is a result indicating whether the new value was written and containing the previous value. On success this value is guaranteed to be equal to current.
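
    For reference, the std convention being described (the previous value is returned in both arms) looks like this:

    use std::sync::atomic::{AtomicUsize, Ordering};

    fn main() {
        let x = AtomicUsize::new(5);

        // Success: Ok holds the previous value, which is guaranteed to equal `current` (5).
        assert_eq!(
            x.compare_exchange(5, 10, Ordering::SeqCst, Ordering::SeqCst),
            Ok(5)
        );

        // Failure: Err holds the actual current value (10), not the caller's `new`.
        assert_eq!(
            x.compare_exchange(6, 12, Ordering::SeqCst, Ordering::SeqCst),
            Err(10)
        );
    }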

    opened by MatthewNielsen27 0
  • Channel that only stores latest message, blocking on Recv, but not Send

    Is it possible to implement a channel that stores only the latest value and returns the owned latest value on .recv()? This would be roughly similar to:

    • The async tokio::sync::watch, but would be owned rather than borrowed and non-async
    • bounded(1), except senders would never block, since sent messages would just replace the latest message, if there is one
    • (Mutex<T>, Condvar), except senders and receivers are separate like they are in channels, so senders cannot read the value, etc
    • single_value_channel, except recv would block

    Current Solution

    // A similar effect can be achieved by draining a bounded/unbounded channel, but this channel would be more efficient
    while let Ok(mut msg) = chan.recv() {
      // Drain all channel messages, and keep only the last
      while let Ok(m) = chan.try_recv() { msg = m; }
    
      // Do work with the latest message `msg`
    }
    

    Example API

    // Some new channel type
    let (tx, mut rx) = unbounded_watch_channel::<usize>();
    
    // Set latest value to 0.
    tx.send(0);
    // Set latest value to 1. This won't block since it's updating the latest value only (would deadlock if using bounded(1))
    tx.send(1);
    
    assert_eq!(rx.recv(), Ok(1));
    
    std::thread::spawn(move || {
      tx.send(2);
    });
    
    // This will block until the thread above sends 2
    assert_eq!(rx.recv(), Ok(2));
    
    // This will sit and wait forever (unlike the tokio::sync::watch api, which does not move values)
    rx.recv();
    

    Example use-case

    If doing a non-interruptible refresh task, and n>1 RefreshRequest messages arrive in the channel during the refresh, the refresh task only runs once more instead of n times more.

    If this is something that could conceivably be added to crossbeam, I could try starting development on it. If not, I can fork the single_value_channel source and make it blocking.
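
    To help judge the scope, a single-value blocking watch can be sketched on top of std alone; this is a simplified illustration (one shared type instead of split sender/receiver halves, all names made up):

    use std::sync::{Arc, Condvar, Mutex};

    // Latest-value-only channel: send overwrites, recv blocks until a value exists.
    struct Watch<T> {
        slot: Mutex<Option<T>>,
        cond: Condvar,
    }

    fn watch_channel<T>() -> (Arc<Watch<T>>, Arc<Watch<T>>) {
        let w = Arc::new(Watch { slot: Mutex::new(None), cond: Condvar::new() });
        (w.clone(), w)
    }

    impl<T> Watch<T> {
        fn send(&self, value: T) {
            // Only holds the lock briefly; never waits for a receiver.
            *self.slot.lock().unwrap() = Some(value);
            self.cond.notify_one();
        }

        fn recv(&self) -> T {
            let mut slot = self.slot.lock().unwrap();
            loop {
                if let Some(value) = slot.take() {
                    return value; // hand out the owned latest value
                }
                slot = self.cond.wait(slot).unwrap();
            }
        }
    }

    fn main() {
        let (tx, rx) = watch_channel();
        tx.send(1);
        tx.send(2); // overwrites 1 without blocking
        assert_eq!(rx.recv(), 2);
    }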

    opened by austenadler 0
  • Slight improvements to channel benchmark

    I updated the channel benchmark slightly in order to help investigate what could be done about https://github.com/crossbeam-rs/crossbeam/issues/821. Unfortunately I couldn't figure out how to improve the spinning without regressing at least half the benchmarks, so I'm going to leave it for now.

    crossbeam-channel 
    opened by arthurprs 0
  • How to implement task dependency?

    I'm interested in using Crossbeam's work-stealing deques in my functional runtime, but I have an issue with task dependency. In my program, tasks aren't independent; they depend on each other, in the sense that a task A should only be executed once tasks B and C are completed, for example. Crossbeam's deque, as implemented, would allow task A to be stolen before its dependencies are completed, which would be incorrect. Is there any obvious way to extend the work stealer with a notion of dependency? One way I can think of would be to have a steal_and_swap and a swap primitive, so that, when a task is stolen, we replace it with a wait task.
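
    One approach that doesn't require changing the deque at all: give each task a dependency counter and don't push it into the scheduler until the counter reaches zero. A rough sketch (the Task layout and scheduler glue are assumptions, not a prescribed design):

    use crossbeam_deque::Injector;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;

    struct Task {
        pending_deps: AtomicUsize, // dependencies that have not completed yet
        run: Box<dyn Fn() + Send + Sync>,
    }

    // Called by each dependency as it completes; the last one to finish
    // schedules the task by pushing it into the scheduler's injector queue.
    fn dependency_done(task: &Arc<Task>, injector: &Injector<Arc<Task>>) {
        if task.pending_deps.fetch_sub(1, Ordering::AcqRel) == 1 {
            injector.push(task.clone());
        }
    }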

    question 
    opened by VictorTaelin 1
  • Understanding the safety of channels in the event of a `fork`

    I have some multi-threaded code which uses channels to send data back and forth. It's also possible that the program this runs in does a fork.

    1. I need to understand under what conditions locks may be held when sending or receiving data on the channels. I poked around and I didn't see any locks going on, but maybe I just didn't find them. Are there any locks I should be aware of?
    2. After the main thread forks, the child process will have a sender and a receiver whose in-memory state says that a corresponding receiver and sender exist, but they effectively don't, since the other threads are not recreated by the fork. I'm not sure if it would be safe to call the destructors of the child's sender and receiver in such situations. I can forget these objects and leak them, but if there's something better to be done, I'd rather do that, you know?
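
    On point 2, the leak-on-purpose option is straightforward; a hedged sketch for the child process (this only avoids running the endpoints' destructors, it does not make cross-process use of the channel valid):

    use crossbeam_channel::{Receiver, Sender};

    // In the child right after the fork: discard the handles without running
    // their destructors, so no channel bookkeeping is touched.
    fn abandon_channel<T>(tx: Sender<T>, rx: Receiver<T>) {
        std::mem::forget(tx);
        std::mem::forget(rx);
    }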
    opened by morrisonlevi 0