A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...

Tokio

A runtime for writing reliable, asynchronous applications with the Rust programming language. It is:

  • Fast: Tokio's zero-cost abstractions give you bare-metal performance.

  • Reliable: Tokio leverages Rust's ownership, type system, and concurrency model to reduce bugs and ensure thread safety.

  • Scalable: Tokio has a minimal footprint, and handles backpressure and cancellation naturally.


Website | Guides | API Docs | Chat

Overview

Tokio is an event-driven, non-blocking I/O platform for writing asynchronous applications with the Rust programming language. At a high level, it provides a few major components:

  • A multithreaded, work-stealing based task scheduler.
  • A reactor backed by the operating system's event queue (epoll, kqueue, IOCP, etc.).
  • Asynchronous TCP and UDP sockets.

Together, these components provide the runtime necessary for building an asynchronous application.

Example

A basic TCP echo server with Tokio:

use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;

    loop {
        let (mut socket, _) = listener.accept().await?;

        tokio::spawn(async move {
            let mut buf = [0; 1024];

            // In a loop, read data from the socket and write the data back.
            loop {
                let n = match socket.read(&mut buf).await {
                    // socket closed
                    Ok(n) if n == 0 => return,
                    Ok(n) => n,
                    Err(e) => {
                        eprintln!("failed to read from socket; err = {:?}", e);
                        return;
                    }
                };

                // Write the data back
                if let Err(e) = socket.write_all(&buf[0..n]).await {
                    eprintln!("failed to write to socket; err = {:?}", e);
                    return;
                }
            }
        });
    }
}

More examples can be found here. For a larger "real world" example, see the mini-redis repository.

To see a list of the available feature flags that can be enabled, check our docs.

Getting Help

First, see if the answer to your question can be found in the Guides or the API documentation. If the answer is not there, there is an active community in the Tokio Discord server. We would be happy to try to answer your question. You can also ask your question on the discussions page.

Contributing

🎈 Thanks for your help improving the project! We are so happy to have you! We have a contributing guide to help you get involved in the Tokio project.

Related Projects

In addition to the crates in this repository, the Tokio project also maintains several other libraries, including:

  • hyper: A fast and correct HTTP/1.1 and HTTP/2 implementation for Rust.

  • tonic: A gRPC over HTTP/2 implementation focused on high performance, interoperability, and flexibility.

  • warp: A super-easy, composable, web server framework for warp speeds.

  • tower: A library of modular and reusable components for building robust networking clients and servers.

  • tracing (formerly tokio-trace): A framework for application-level tracing and async-aware diagnostics.

  • rdbc: A Rust database connectivity library for MySQL, Postgres and SQLite.

  • mio: A low-level, cross-platform abstraction over OS I/O APIs that powers tokio.

  • bytes: Utilities for working with bytes, including efficient byte buffers.

  • loom: A testing tool for concurrent Rust code.

Supported Rust Versions

Tokio is built against the latest stable release. The minimum supported version is 1.45. The current Tokio version is not guaranteed to build on Rust versions earlier than the minimum supported version.

License

This project is licensed under the MIT license.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tokio by you, shall be licensed as MIT, without any additional terms or conditions.

Issues
  • Proposing new `AsyncRead` / `AsyncWrite` traits

    Introduce new AsyncRead / AsyncWrite

    This PR introduces new versions of the AsyncRead / AsyncWrite traits. The proposed changes aim to improve:

    • ergonomics
    • integration of vectored operations
    • working with uninitialized byte slices

    Overview

    The PR changes the AsyncRead and AsyncWrite traits to accept T: Buf and T: BufMut values instead of &[u8] and &mut [u8]. Because &[u8] implements Buf and &mut [u8] implements BufMut, the same calling patterns used today remain possible. Additionally, any type that implements Buf or BufMut may be used, including Cursor<&[u8]>, Bytes, and so on.

    Improvement in ergonomics

    Calls to read and write accept buffers but do not necessarily use up the entire buffer. Both functions return a usize representing the number of bytes read / written. Because of this, it is common to write loops such as:

    let mut rem = &my_data[..];

    while !rem.is_empty() {
        let n = my_socket.write(rem).await?;
        rem = &rem[n..];
    }
    

    The key point to notice is having to use the return value to update the position in the cursor. This is both common and error-prone. The Buf / BufMut traits aim to ease this by building the cursor concept directly into the buffer. By using these traits with AsyncRead / AsyncWrite, the above loop can be simplified to:

    let mut buf = Cursor::new(&my_data[..]);

    while buf.has_remaining() {
        my_socket.write(&mut buf).await?;
    }
    

    A small reduction in code, but it removes an error-prone bit of logic that must often be repeated.
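    The cursor-in-buffer idea can be illustrated without the real bytes crate. The trait below is a minimal, hypothetical subset of bytes::Buf (its method names mirror the real trait), and short_write is an invented writer that consumes at most three bytes per call, like a socket doing short writes:

```rust
/// Minimal sketch of the cursor-in-buffer idea behind `bytes::Buf`.
trait Buf {
    fn remaining(&self) -> usize;
    fn chunk(&self) -> &[u8];
    fn advance(&mut self, n: usize);
    fn has_remaining(&self) -> bool {
        self.remaining() > 0
    }
}

// A byte slice *is* a cursor: advancing just shrinks the slice.
impl Buf for &[u8] {
    fn remaining(&self) -> usize {
        self.len()
    }
    fn chunk(&self) -> &[u8] {
        self
    }
    fn advance(&mut self, n: usize) {
        *self = &self[n..];
    }
}

/// Hypothetical writer that accepts at most 3 bytes per call (a short
/// write). It advances the buffer itself; the caller never touches `n`.
fn short_write(out: &mut Vec<u8>, buf: &mut impl Buf) {
    let n = buf.chunk().len().min(3);
    out.extend_from_slice(&buf.chunk()[..n]);
    buf.advance(n); // the buffer tracks its own position
}

fn main() {
    let mut data: &[u8] = b"hello world";
    let mut out = Vec::new();
    // No manual `rem = &rem[n..]` bookkeeping in the caller's loop.
    while data.has_remaining() {
        short_write(&mut out, &mut data);
    }
    assert_eq!(out, b"hello world".to_vec());
}
```

    The caller's loop no longer carries the cursor update, which is exactly the error-prone bit the proposal removes.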

    Integration of vectored operations

    In the AsyncRead / AsyncWrite traits provided by futures-io, vectored operations are covered by separate functions: poll_read_vectored and poll_write_vectored. These two functions have default implementations that fall back to the non-vectored operations.

    This has a drawback: when implementing AsyncRead / AsyncWrite, usually as a layer on top of a type such as TcpStream, the implementor must not forget to implement these two additional functions. Otherwise, the implementation will not be able to use vectored operations even if the underlying TcpStream supports them. Secondly, it requires duplicating logic: one poll_read implementation and one poll_read_vectored implementation. It is possible to implement one in terms of the other, but this can result in sub-optimal implementations.

    Imagine a situation where a rope data structure is being written to a socket. This structure is comprised of many smaller byte slices (perhaps thousands). To write it to the socket efficiently, copying the data should be avoided, so the byte slices need to be loaded into IoSlice values. Since modern Linux systems support a maximum of 1024 slices, we initialize an array of 1024 slices, iterate the rope to populate the array, and call poll_write_vectored. The problem is that, as the caller, we do not know whether the AsyncWrite type supports vectored operations, so poll_write_vectored is called optimistically. However, if the implementation "forgot" to proxy the function to TcpStream, poll_write is called with only the first entry of the IoSlice array. Then, for each call to poll_write_vectored, we must iterate 1024 nodes of our rope only to have a single chunk written at a time.

    By using T: Buf as the argument, the decision of whether to use vectored operations is left up to the leaf AsyncWrite type. Intermediate layers only implement poll_write with T: Buf and pass it along to the inner stream. The TcpStream implementation knows that it supports vectored operations, knows how many slices it can write at a time, and can do "the right thing".
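    The gathered-write idea is already visible in std's blocking I/O, where Write::write_vectored takes a batch of IoSlice values. A std-only sketch, writing into a Vec<u8> instead of a socket (on a real TcpStream the single call would become one writev(2)):

```rust
use std::io::{IoSlice, Write};

fn main() -> std::io::Result<()> {
    // A "rope": many small chunks we want to write without first
    // copying them into one contiguous buffer.
    let chunks: [&[u8]; 3] = [b"hello", b", ", b"world"];
    let slices: Vec<IoSlice> = chunks.iter().map(|c| IoSlice::new(c)).collect();

    let mut out: Vec<u8> = Vec::new();
    // One gathered write carrying all slices, instead of one call per
    // chunk. Implementations that don't support gathering may consume
    // fewer slices, which is why the return value must be checked.
    let n = out.write_vectored(&slices)?;

    println!("wrote {} bytes: {:?}", n, String::from_utf8_lossy(&out));
    Ok(())
}
```

    Note that write_vectored's default implementation writes only the first non-empty slice, which is the std analogue of the "forgot to proxy" problem described above.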

    Working with uninitialized byte slices

    When passing buffers to AsyncRead, it is desirable to pass in uninitialized memory that the poll_read call will write to. This avoids the expensive step of zeroing out the memory (doing so has a measurable impact at the macro level). The problem is that uninitialized memory is "unsafe", so care must be taken.

    Tokio initially attempted to handle this by adding a prepare_uninitialized_buffer function, and std is investigating adding a similar though improved variant of this API. However, over the years we have learned that the prepare_uninitialized_buffer API is sub-optimal for multiple reasons.

    First, the same problem as with vectored operations applies: if an implementation "forgets" to implement prepare_uninitialized_buffer, then all slices must be zeroed out before being passed to poll_read, even if the implementation does "the right thing" (never reads from uninitialized memory). In practice, most implementors end up forgetting to implement this function, resulting in memory being zeroed out anyway.

    Secondly, implementations of AsyncRead that should not require unsafe to implement must now add unsafe simply to avoid having memory zeroed out.

    Switching the argument to T: BufMut solves this problem via the BufMut trait. First, BufMut provides low-level functions that return &mut [MaybeUninit<u8>]. Second, it provides utility functions that offer safe APIs for writing to the buffer (put_slice, put_u8, ...). Again, only the leaf AsyncRead implementations (TcpStream) must use the unsafe APIs. All other layers may take advantage of uninitialized memory without the associated unsafety.
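    The uninitialized-buffer idea can be sketched with std's MaybeUninit. Here, fill is a hypothetical helper playing the role of a leaf read implementation: it writes into memory that was never zeroed, and only the code that just initialized the prefix needs unsafe to expose it:

```rust
use std::mem::MaybeUninit;

/// Hypothetical leaf reader: copies `src` into an *uninitialized*
/// buffer and reports how many bytes it initialized, the way a
/// BufMut-based poll_read would.
fn fill(buf: &mut [MaybeUninit<u8>], src: &[u8]) -> usize {
    let n = src.len().min(buf.len());
    for (dst, &byte) in buf.iter_mut().zip(src) {
        dst.write(byte); // initializes this slot
    }
    n
}

fn main() {
    // 32 bytes that are never zeroed up front.
    let mut buf = [MaybeUninit::<u8>::uninit(); 32];
    let n = fill(&mut buf, b"no zeroing required");
    // SAFETY: `fill` just initialized exactly the first `n` bytes.
    let init: &[u8] =
        unsafe { std::slice::from_raw_parts(buf.as_ptr() as *const u8, n) };
    assert_eq!(init, &b"no zeroing required"[..]);
}
```

    In the proposal, BufMut's safe helpers (put_slice, put_u8) hide this unsafe prefix-tracking from every layer except the leaf.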

    Drawbacks

    The primary drawback is genericizing the AsyncRead and AsyncWrite traits, which adds complexity. We feel that the benefits discussed above outweigh the drawbacks, but only trying it out will validate that.

    Relation to futures-io, std, and roadmap

    The relationship between Tokio's I/O traits and futures-io has come up a few times in the past. Tokio has historically maintained its own traits. futures-io has just been released with a simplified version of the traits. There is also talk of standardizing AsyncRead and AsyncWrite in std. Because of this, we believe that now is the perfect time to experiment with these traits. This will allow us to gain more experience before committing to traits in std.

    The next version of Tokio will not be 1.0. This allows us to experiment with these traits and remove them for 1.0 if they do not pan out.

    Once AsyncRead / AsyncWrite are added to std, Tokio will provide implementations for its types. Until then, tokio-util will provide a compatibility layer between Tokio types and futures-io.

    opened by seanmonstar 84
  • Structured Concurrency Support

    First of all a disclaimer: This issue is not yet a full proposal. This serves more as a collection of things to explore, and to gather feedback on interest.

    What is structured concurrency?

    Structured concurrency describes a programming paradigm: concurrent tasks are structured into clean task hierarchies, where the lifetime of all sub-tasks/child tasks is constrained within the lifetime of their parent task.

    The term was likely first brought up by Martin Sustrik in this blog post, and was a guiding idea behind the libdill library. @njsmith utilized the term in Notes on structured concurrency, or: Go statement considered harmful, and designed the Python trio library around the paradigm. I highly recommend reading the blog post.

    The paradigm has also been adopted by Kotlin coroutines. @elizarov gave a talk at HydraConf about structured concurrency and the evolution of Kotlin's async task model, which I also highly recommend watching. It provides some hints on things to look out for and what the APIs could look like. Kotlin's documentation around coroutines is also a good resource.

    Go adopted some support for structured concurrency with the errgroup package.

    Benefits of structured concurrency

    I again recommend checking out the linked resources, which also elaborate on this 😀

    In short: applying the structured concurrency paradigm can simplify reasoning about concurrent programs and thereby reduce errors. It helps prevent resource leaks, in the same fashion that RAII avoids leaks at the scope level. It might also allow for optimizations.

    Examples around error reductions and simplifications

    Here is one motivating example of how structured concurrency can simplify things:

    We are building a web application A, which is intended to handle at least 1000 transactions per second. Internally, each transaction requires a few concurrent interactions, which involve reaching out to remote services. When one of those transactions fails, we need to perform certain actions, e.g. call another service B for a cleanup or rollback. Without structured concurrency, we might have the idea to just do spawn(cleanup_task()). While this works, it has a side effect: cleanup tasks might still be running while the main webservice handler has already terminated.

    This sounds harmless at first, but it can have surprising consequences. We obviously want our services to be resilient against overload, so we limit the number of concurrent requests to 2000 via an async Semaphore. This works fine for our main service handler. But what happens if lots of transactions are failing? How many cleanup tasks can run at the same time? The answer is, unfortunately, that their number is effectively unbounded. Our service can therefore be overloaded by queued-up cleanup tasks, even though we protected ourselves against too many concurrent API calls. This can lead to large-scale outages in distributed systems.

    By making sure all cleanup logic is performed inside the lifetime/scope of the main service handler, we can guarantee that the number of cleanup tasks is also bounded by our Semaphore.
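    The bounding argument can be sketched in std-only Rust, with threads standing in for tasks. The Semaphore below is a hypothetical stand-in for an async semaphore, and cleanup / handle_transaction are invented names for illustration; because the cleanup runs inside the permit scope, at most two handlers (and therefore at most two cleanups) are ever in flight:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Minimal counting semaphore: a std-only stand-in for an async Semaphore.
struct Semaphore {
    count: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { count: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut c = self.count.lock().unwrap();
        while *c == 0 {
            c = self.cv.wait(c).unwrap();
        }
        *c -= 1;
    }
    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn cleanup() {
    // Stand-in for calling service B for a rollback.
}

fn handle_transaction(sem: Arc<Semaphore>) {
    sem.acquire();
    let failed = true; // pretend this transaction failed
    if failed {
        // Structured: the cleanup runs *inside* the handler's permit
        // scope, so handlers plus cleanups never exceed the limit.
        cleanup();
    }
    sem.release();
}

fn main() {
    let sem = Arc::new(Semaphore::new(2));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let sem = sem.clone();
            thread::spawn(move || handle_transaction(sem))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```

    An unstructured thread::spawn(cleanup) inside the handler would escape the permit scope and reintroduce exactly the unbounded behavior described above.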

    Another example could be applying configuration changes at runtime: while our service is running, we want to be able to update its configuration. After the configuration change is applied, no transaction should still be using the old configuration. What we need to do is:

    • Disable the acceptor in order to drain requests before we can apply the config change
    • Wait for all ongoing transactions to complete
    • Cancel transactions if they take too long to complete.
    • Update the configuration
    • Restart the acceptor

    Without a structured approach to concurrency, this is a much more complicated problem than it sounds. Any old transaction might have spawned a subtask which might still be executing after we have updated the configuration. There is no easy way for higher-level code to check whether everything has finished.

    Potential for optimizations

    The application of structured concurrency might allow for optimizations. E.g. we might be able to allow subtasks to borrow data inside the parent task's scope without the need for additional heap allocations. Since the exact mechanisms are not yet designed, the exact potential is unknown.
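    The borrowing idea already exists for OS threads in std: thread::scope (stabilized in Rust 1.63, after this issue was written) guarantees that children finish before the scope ends, which is exactly what lets them borrow from the parent's stack. A minimal sketch:

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4];
    let mut left = 0;
    let mut right = 0;
    // Children cannot outlive the scope, so they may borrow `data`,
    // `left`, and `right` straight from the parent's stack: no Arc,
    // no heap allocation for the captured state.
    thread::scope(|s| {
        s.spawn(|| left = data[..2].iter().sum());
        s.spawn(|| right = data[2..].iter().sum());
    }); // the parent blocks here until every child has finished
    assert_eq!(left + right, 10);
}
```

    A task-level equivalent would need the runtime to provide the same "children finish first" guarantee before borrows like these could be sound.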

    Core requirements

    I think the core requirements for structured concurrency are:

    • A parent task will only finish once all child tasks have finished
    • When tasks are spawned, they need to be spawned in the context of a parent task. The parent needs to remember its child tasks
    • Parent tasks need to have a mechanism to cancel child tasks
    • Errors in child tasks should lead the parent task to return an error as soon as possible, and all sibling tasks to get cancelled. This behavior is equivalent to the behavior of the try_join! macro in futures-rs.

    Regarding the last point, I am not sure whether automatic error propagation is a required part of structured concurrency, or whether it can be achieved at the task level, but it definitely makes things easier.

    Do we actually need to have builtin support for this?

    Rust's async/await mechanism already provides structured concurrency inside a particular task: by utilizing tools like select! or join!, we can run multiple child tasks that are constrained to the same lifetime, namely the current scope. This is not possible in Go or Kotlin, which require an explicit child task to be spawned to achieve the same behavior. Therefore the benefits might be lower.

    I built examples around those mechanisms in futures-intrusive.

    However, concurrency inside a single task will not scale very well, due to requiring polling of all child Futures. Therefore real concurrent tasks will be required in most applications.

    On this level we have a set of tools in our toolbox that allow us to structure our current tasks manually:

    • Parent tasks can wait for child tasks to join via the use of Oneshot channels or the new JoinHandles
    • Parent tasks can forcefully cancel child tasks by just dropping them
    • Parent tasks can gracefully cancel child tasks by passing a cancellation signal.

    However, these tools all require explicit code in order to guarantee correctness. Builtin support for structured concurrency could improve usability and allow more developers to use good, correct defaults.

    And as mentioned earlier, I think builtin support could also allow for new usages, e.g. borrowing inside child tasks or potential scheduler improvements when switching between child tasks and parent tasks.

    The following posts are now mainly a braindump around how these requirements could be fulfilled and how they align with existing mechanisms.

    opened by Matthias247 46
  • how to implement stream r/w in parallel?

    As is known, due to ownership, we cannot read and write the stream at the same time.

    The split in tokio seems like a fake split, because it uses a mutex for reads and writes.

    Behind the scenes, split ensures that if we both try to read and write at the same time, only one of them happen at a time.

    Although we do not block on the syscall (which is the goal of async programming), the syscall itself is locked by that mutex: while a read() syscall is in progress, we cannot issue a write() syscall. That seems very wasteful, with an obvious performance impact. At the syscall level, reads and writes could run in parallel, which is meaningful for full-duplex application protocols (e.g. HTTP/1.1 with pipelining, and HTTP/2).

    Could we do a real split? For example, mio::net::TcpStream can be cloned (https://docs.rs/mio/0.6.18/mio/net/struct.TcpStream.html#method.try_clone); could the clones share the registration and other state? That way, we would have a real zero-cost abstraction.
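    For reference, the idea can be sketched at the std level with try_clone: two handles share one socket, so read(2) and write(2) can be issued from different threads concurrently with no mutex in between (std-only sketch; the one-shot echo server is just scaffolding). Tokio later added TcpStream::into_split, which returns owned halves along these lines:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() -> std::io::Result<()> {
    // One-shot echo server on an ephemeral loopback port (scaffolding).
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let server = thread::spawn(move || {
        let (mut s, _) = listener.accept().unwrap();
        let mut buf = [0u8; 4];
        s.read_exact(&mut buf).unwrap();
        s.write_all(&buf).unwrap();
    });

    let stream = TcpStream::connect(addr)?;
    // try_clone shares the same underlying socket: the two handles can
    // issue read(2) and write(2) from different threads concurrently.
    let mut reader = stream.try_clone()?;
    let mut writer = stream;

    let w = thread::spawn(move || writer.write_all(b"ping").unwrap());
    let mut buf = [0u8; 4];
    reader.read_exact(&mut buf)?; // blocks until the echo arrives
    w.join().unwrap();
    server.join().unwrap();
    assert_eq!(&buf, b"ping");
    Ok(())
}
```

    The async version additionally has to share the reactor registration between the halves, which is the part the question is really about.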

    A-tokio C-enhancement M-net 
    opened by kingluo 37
  • Lost wakeups in threadpool

    It has been reported (https://github.com/rust-lang-nursery/futures-rs/issues/1170) that there have been lost wakeups.

    Downgrading to threadpool 0.1.4 reportedly solves the problem. #459 is the only meaningful change.

    opened by carllerche 35
  • Added map associated function to MutexGuard that uses new MappedMutexGuard type

    Part of #2471, extends the work of #2445. Both this PR and #2445 together close #2471.

    This PR introduces the MappedMutexGuard type and adds a map associated function to MutexGuard. The work here is largely based on #2445, so thanks @udoprog for the great PR. This was very easy to implement using your work as a reference.

    MappedMutexGuard works almost exactly the same as parking_lot::MappedMutexGuard, but adapted to the internals of tokio::sync::Mutex. The MappedMutexGuard type stores a reference to the semaphore from the original Mutex as well as a raw pointer *mut T that is the result of calling the function passed to map. The Mutex does not hold a mutable reference to its data (it uses an UnsafeCell), so I do not believe that it is possible to accidentally run into any aliasing issues.

    I added documentation based on the work in #2445 and the documentation in parking_lot/lock_api. There are doctests in both map and try_map that are almost exactly the same as the ones in #2445 for RwLockWriteGuard. I generated the docs with cargo doc and everything looks great. :tada:

    A-tokio C-enhancement M-sync 
    opened by sunjay 31
  • Consider defaulting Tokio features to off instead of on.

    As per #1318, Tokio has been merged into a single crate and components are split by feature flag. Now, with all features enabled, the tokio crate is quite heavy.

    Regardless of the direction, two things will happen:

    • documentation: the required feature flag for all components will be documented similar to how Tonic does this (see transport module here: https://docs.rs/tonic/0.1.0-alpha.6/tonic/).
    • meta feature: A full feature flag will be provided that enables all Tokio feature flags.

    The question is, should default include all features, no features, or some features.

    Default on

    One of the main drawbacks mentioned in #1318 was that, when features are enabled by default, libraries will accidentally depend on more features than necessary. Doing so will force these features to be enabled by consumers of the library. Also, the end user can accidentally use features that were enabled by the dependency. When the dependency changes its feature flags, the application breaks as the required features are no longer available.

    Default off

    An alternative would be to default to no feature flags enabled by default. In this case, depending on tokio will only enable core traits (AsyncRead, AsyncWrite, ToSocketAddrs) and an empty Runtime type that doesn't do much when used. Getting started guides, examples, the README would instruct users to depend on tokio as:

    tokio = { version = "0.2.0", features = ["full"] }
    

    Libraries will be instructed to pick only the features they require. The primary drawback of this strategy is that it adds a bump to the getting started flow.

    Default some

    A middle ground would be to define a subset of features that should be enabled by default. It is unclear how to pick the features to enable by default as different Tokio users use significantly different feature sets. Because the choice is arbitrary, the end user will have no way to intuit if a feature is enabled by default or not.

    C-proposal 
    opened by carllerche 29
  • rfc: collapse Tokio sub crates into single `tokio` crate

    There has been frustration among Tokio users regarding the number of crates pulled in when depending on Tokio. Here is an opportunity to discuss an alternative strategy. By doing this RFC, users who are happy with the current situation may express this.

    Summary

    Do not maintain tokio-* sub crates, instead all Tokio code will exist in a single tokio crate and components are enabled or disabled using feature flags.

    For example, depending on only the timer functionality could be done with:

    tokio = { version = "0.2.0", default-features = false, features = [ "timer" ] }
    

    By default, tokio would have the same components enabled as it does today.

    Motivation

    Maintaining a large number of crates comes with an increased maintainership burden. Maintaining correct dependencies between crates is complex. Users feel that a large number of dependencies == bloat. Additional rationale can be found here.

    Details

    Tokio must maintain semver stability of its core APIs. This includes traits as well as some types, such as TcpStream. Tokio would like to be able to release breaking changes to less fundamental APIs without having to break the entire Tokio ecosystem.

    Currently, Tokio achieves this goal by breaking up all the various components into individual crates. Doing this allows less stable components to release breaking changes without touching stable components. However, this strategy has drawbacks (see Motivation section).

    In this proposal, all Tokio components would be moved into a single crate. Each component would have an associated feature flag, similar to how Tokio does it today.

    Not much would change for application developers, they would still just depend on tokio and enable / disable feature flags as needed. Library developers would no longer depend on sub crates. Instead, they would depend on tokio and only pull in the features that they need.

    Type stability

    Core types can maintain stability between breaking semver releases. For example, if the TcpStream type does not change between Tokio version 0.2 and Tokio version 0.3, then the following steps would be taken to release 0.3:

    • Release tokio 0.3
    • Update tokio 0.2 to depend on tokio 0.3.
    • Replace the implementation of TcpStream in 0.2 by re-exporting the implementation from 0.3.
    • Release a new patch version for 0.2 including the re-exported TcpStream type from 0.3.

    By doing this, TcpStream from 0.2 and 0.3 are the same type.

    Drawbacks

    • The breaking change release process becomes more complicated as all untouched types must be re-exported in the old version.
    • If a user does not update to the 0.2 patch release in the above scenario, they can end up with both 0.2 and 0.3.

    Alternatives

    Continue to release new crates for each component.

    C-proposal 
    opened by carllerche 28
  • Add cooperative task yielding

    Motivation

    A single call to poll on a top-level task may potentially do a lot of work before it returns Poll::Pending. If a task runs for a long period of time without yielding back to the executor, it can starve other tasks waiting on that executor to execute them, or drive underlying resources. See for example https://github.com/rust-lang/futures-rs/issues/2047, https://github.com/rust-lang/futures-rs/issues/1957, and https://github.com/rust-lang/futures-rs/issues/869. Since Rust does not have a runtime, it is difficult to forcibly preempt a long-running task.

    Consider a future like this one:

    use tokio::stream::{Stream, StreamExt};

    async fn drop_all<I: Stream + Unpin>(mut input: I) {
        while let Some(_) = input.next().await {}
    }
    

    It may look harmless, but consider what happens under heavy load if the input stream is always ready. If we spawn drop_all, the task will never yield, and will starve other tasks and resources on the same executor.

    Solution

    The preemption module provides an opt-in mechanism for futures to collaborate with the executor to avoid starvation. With opt-in preemption, the problem above is alleviated:

    use tokio::stream::{Stream, StreamExt};

    async fn drop_all<I: Stream + Unpin>(mut input: I) {
        while let Some(_) = input.next().await {
            tokio::preempt_check!();
        }
    }
    

    The call to preempt_check! will coordinate with the executor to make sure that, every so often, control is yielded back to the executor so it can run other tasks.

    Implementation

    The implementation uses a thread-local counter that simply counts how many "preemption points" we have passed since the task was first polled. Once the "budget" has been spent, any subsequent preemption points will return Poll::Pending, eventually making the top-level task yield. When it finally does yield, the executor resets the budget before running the next task.
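    The described budget mechanism can be sketched with a thread-local counter in plain Rust. This is a std-only illustration, not Tokio's actual code; preempt_check here is a hypothetical function returning false where the real preemption point would return Poll::Pending:

```rust
use std::cell::Cell;

thread_local! {
    // Per-thread operation budget; the real executor resets it each
    // time a task is polled.
    static BUDGET: Cell<u32> = Cell::new(128);
}

// Hypothetical stand-in for a preemption point: returns false (think
// Poll::Pending) once the task has spent its budget for this poll.
fn preempt_check() -> bool {
    BUDGET.with(|b| {
        let left = b.get();
        if left == 0 {
            return false;
        }
        b.set(left - 1);
        true
    })
}

fn main() {
    let mut iterations = 0;
    while preempt_check() {
        iterations += 1;
    }
    // The 129th check reports "yield back to the executor".
    assert_eq!(iterations, 128);
}
```

    Because the counter is thread-local and reset per poll, a task that is always ready still returns to the executor after a bounded amount of work, which prevents the starvation described above.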

    opened by jonhoo 27
  • Performance regression in tokio-threadpool 0.1.8

    Version

    tokio 0.1.11
    tokio-codec 0.1.1
    tokio-core 0.1.17
    tokio-current-thread 0.1.3
    tokio-executor 0.1.5
    tokio-fs 0.1.4
    tokio-io 0.1.10
    tokio-openssl 0.2.1
    tokio-pool 0.1.0
    tokio-process 0.2.2
    tokio-proto 0.1.1
    tokio-reactor 0.1.6
    tokio-service 0.1.0
    tokio-signal 0.2.6
    tokio-tcp 0.1.2
    tokio-threadpool 0.1.8
    tokio-timer 0.1.2
    tokio-timer 0.2.7
    tokio-tls 0.1.4
    tokio-tls-api 0.1.20
    tokio-udp 0.1.2
    tokio-uds 0.1.7
    tokio-uds 0.2.3

    Platform

    Linux, 64-bit.

    Subcrates

    tokio-threadpool

    Description

    One of our tests shows a large performance regression (20x slowdown in p95 request latency; p75 is unchanged) with tokio-threadpool 0.1.8. Rolling just that crate back to 0.1.7 restores the original performance.

    opened by jsgf 27
  • net: Expose `resolve_host`, a function that asynchronously resolves DNS.

    Currently, tokio does not provide a mechanism to resolve DNS asynchronously. This means that libraries like Hyper must themselves re-implement functionality that exists in tokio.

    opened by davidbarsky 26
  • tracing: emit waker op as str instead as Debug

    It was a mistake to emit the op as a dyn Debug; it should be emitted as a string directly.

    opened by seanmonstar 0
  • time: fix time::advance() with sub-ms durations

    Update the advance logic to factor in the timer's ms rounding.

    Fixes #3837

    opened by carllerche 3
  • rfc: Runtime stats

    Rendered

    This RFC proposes the low level stats implementation within tokio to be used by metrics aggregators/collectors to expose within dashboards such as grafana, etc. This low level stats will be the foundation for tokio's future runtime observability goals and do not present a complete story since they will mostly be raw values that are unaggregated.

    A-tokio M-runtime 
    opened by LucioFranco 0
  • Reenable test_socket_pair on FreeBSD

    It was disabled due to an OS bug that has been fixed on all currently supported releases of FreeBSD

    Motivation

    To increase test coverage

    Solution

    To reenable a test that no longer needs to be disabled

    A-ci A-tokio 
    opened by asomers 0
  • Allow consumers to implement POSIX AIO sources.

    Unlike every other kqueue event source, POSIX AIO registers events not via kevent(2) but by a different mechanism that needs the kqueue's file descriptor. This commit adds a new PollAio type, designed to be used by an external crate that implements a POSIX AIO Future for use with Tokio's reactor.

    Fixes: #3197

    Motivation

    With Tokio 0.1 and 0.2, PollEvented was sufficiently powerful to allow external crates to implement a POSIX AIO event source. Beginning with Tokio 0.3, PollEvented became private, and it also lost some needed functionality. Thus, there was no possible way for a consumer to use POSIX AIO with Tokio.

    Solution

    This PR restores that ability. While leaving the actual implementation for an external crate, it adds just enough functionality for the external crate to hook in. It deliberately does not expose all of PollEvented, so as not to encourage consumers to use that type over AsyncFd. Instead, it creates a new, dedicated type.

    A-tokio M-net 
    opened by asomers 0
  • feat: Export `sync::notify::Notified` publicly.

    Motivation

    This is my attempt to get #3372 in, so addressed the comment there and opened this PR. Quoting that PR:

    I would like to avoid boxing the Notified future when storing it in a struct in fede1024/rust-rdkafka#320.

    Solution

    Add sync::futures module which exposes sync::notify::Notified.

    A-tokio M-sync 
    opened by yotamofek 0
  • Add drop guard for cancellation token

    Add drop guard for cancellation token

    Motivation

    Usually, cancellation is implemented as dropping a future. However, this does not work when a future internally spawns background tasks.

    Solution

    A new method drop_guard is added. It returns a special guard which cancels the token unless disarmed. Intended usage:

    async fn my_simple_fut() {
        let token = CancellationToken::new();
        let _g = token.clone().drop_guard();
        tokio::task::spawn(async move {
             some_lib::do_work(..., token);
        });
        // some other work here.
    }
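    The guard pattern above can be sketched with std-only stand-ins (a hypothetical `Token`/`DropGuard` pair, not the actual tokio-util types):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical std-only stand-in for tokio-util's CancellationToken,
// illustrating the cancel-on-drop guard pattern.
#[derive(Clone, Default)]
struct Token(Arc<AtomicBool>);

impl Token {
    fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
    // Consume the token, returning a guard that cancels it on drop.
    fn drop_guard(self) -> DropGuard {
        DropGuard { token: Some(self) }
    }
}

struct DropGuard {
    token: Option<Token>,
}

impl DropGuard {
    // Defuse the guard: dropping it will no longer cancel the token.
    fn disarm(mut self) -> Token {
        self.token.take().expect("guard still armed")
    }
}

impl Drop for DropGuard {
    fn drop(&mut self) {
        if let Some(token) = self.token.take() {
            token.cancel();
        }
    }
}

fn main() {
    let token = Token::default();
    let guard = token.clone().drop_guard();
    assert!(!token.is_cancelled());
    drop(guard); // leaving scope (even via panic) cancels the token
    assert!(token.is_cancelled());
}
```

    Because cancellation rides on `Drop`, the spawned background task is signalled even when the owning future is cancelled rather than polled to completion.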
    
    A-tokio-util M-sync 
    opened by MikailBag 0
  • time::advance advances too far when given a Duration of sub-millisecond granularity

    time::advance advances too far when given a Duration of sub-millisecond granularity

    Version

    tokio-repro v0.1.0 (/Users/hallmaxw/tokio-repro)
    └── tokio v1.6.1
        └── tokio-macros v1.2.0 (proc-macro)
    

    Platform Darwin Kernel Version 19.6.0

    Description

    time::advance advances time too far when it's given a Duration of sub-millisecond granularity. This worked prior to https://github.com/tokio-rs/tokio/commit/2b9b55810847b4c7855e3de82f432ca997600f30, which was included in tokio 1.6.

    I assume this is happening because the above commit updates time::advance to use sleep_until, which operates at millisecond granularity.

    Here's some code to reproduce the issue:

    #[cfg(test)]
    mod tests {
        use std::time::Duration;
    
        use tokio::time::{self, Instant};
    
        #[tokio::test]
        async fn test_time_advance() {
            time::pause();
            let start_time = Instant::now();
            time::advance(Duration::from_micros(3_141_592)).await;
    
            // The duration elapsed is the duration passed to time::advance plus
            // an extra 1 ms. You'd expect the duration elapsed to just be the duration
            // passed to time::advance
            assert_eq!(
                start_time.elapsed(),
                Duration::from_micros(3_141_592 + 1_000)
            )
        }
    }
    

    I expected the duration elapsed to be the exact duration passed to time::advance.

    Instead, the duration elapsed was the duration passed to time::advance plus an additional millisecond.
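    The overshoot is consistent with a timer that only operates at whole-millisecond granularity. A std-only sketch of rounding a sub-millisecond duration up to the next millisecond (an illustration of the granularity loss, not Tokio's actual timer code):

```rust
use std::time::Duration;

// Round a duration up to whole-millisecond granularity, the way a
// millisecond-resolution timer must treat a sub-millisecond deadline.
fn round_up_to_ms(d: Duration) -> Duration {
    let micros = d.as_micros();
    let rounded = (micros + 999) / 1000 * 1000;
    Duration::from_micros(rounded as u64)
}

fn main() {
    let d = Duration::from_micros(3_141_592);
    let r = round_up_to_ms(d);
    // The 592 microsecond remainder forces a round-up to the next millisecond.
    assert_eq!(r, Duration::from_micros(3_142_000));
    assert_eq!(r - d, Duration::from_micros(408));
}
```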

    cc: @LucioFranco

    A-tokio C-bug M-time 
    opened by hallmaxw 6
  • Do something about the sync_mpsc benchmark

    Do something about the sync_mpsc benchmark

    The sync_mpsc benchmark keeps failing due to fluctuations in its running time. We should do something about this.

    A-ci 
    opened by Darksonn 1
  • Check for buggy preadv2

    Check for buggy preadv2

    Closes: #3803

    cc @br0adcast @wmanley

    A-tokio M-fs 
    opened by Darksonn 7
Releases(tokio-1.6.1)
  • tokio-1.6.1(May 28, 2021)

  • tokio-1.6.0(May 14, 2021)

    1.6.0 (May 14, 2021)

    Added

    • fs: try doing a non-blocking read before punting to the threadpool (#3518)
    • io: add write_all_buf to AsyncWriteExt (#3737)
    • io: implement AsyncSeek for BufReader, BufWriter, and BufStream (#3491)
    • net: support non-blocking vectored I/O (#3761)
    • sync: add mpsc::Sender::{reserve_owned, try_reserve_owned} (#3704)
    • sync: add a MutexGuard::map method that returns a MappedMutexGuard (#2472)
    • time: add getter for Interval's period (#3705)

    Fixed

    • io: wake pending writers on DuplexStream close (#3756)
    • process: avoid redundant effort to reap orphan processes (#3743)
    • signal: use std::os::raw::c_int instead of libc::c_int on public API (#3774)
    • sync: preserve permit state in notify_waiters (#3660)
    • task: update JoinHandle panic message (#3727)
    • time: prevent time::advance from going too far (#3712)

    Documented

    • net: hide net::unix::datagram module from docs (#3775)
    • process: updated example (#3748)
    • sync: Barrier doc should use task, not thread (#3780)
    • task: update documentation on block_in_place (#3753)
    Source code(tar.gz)
    Source code(zip)
  • tokio-1.5.0(Apr 12, 2021)

    1.5.0 (April 12, 2021)

    Added

    • io: add AsyncSeekExt::stream_position (#3650)
    • io: add AsyncWriteExt::write_vectored (#3678)
    • io: add a copy_bidirectional utility (#3572)
    • net: implement IntoRawFd for TcpSocket (#3684)
    • sync: add OnceCell (#3591)
    • sync: add OwnedRwLockReadGuard and OwnedRwLockWriteGuard (#3340)
    • sync: add Semaphore::is_closed (#3673)
    • sync: add mpsc::Sender::capacity (#3690)
    • sync: allow configuring RwLock max reads (#3644)
    • task: add sync_scope for LocalKey (#3612)

    Fixed

    • chore: try to avoid noalias attributes on intrusive linked list (#3654)
    • rt: fix panic in JoinHandle::abort() when called from other threads (#3672)
    • sync: don't panic in oneshot::try_recv (#3674)
    • sync: fix notifications getting dropped on receiver drop (#3652)
    • sync: fix Semaphore permit overflow calculation (#3644)

    Documented

    • io: clarify requirements of AsyncFd (#3635)
    • runtime: fix unclear docs for {Handle,Runtime}::block_on (#3628)
    • sync: document that Semaphore is fair (#3693)
    • sync: improve doc on blocking mutex (#3645)
    Source code(tar.gz)
    Source code(zip)
  • tokio-1.4.0(Mar 20, 2021)

    Added

    • macros: introduce biased argument for select! (#3603)
    • runtime: add Handle::block_on (#3569)

    Fixed

    • runtime: avoid unnecessary polling of block_on future (#3582)
    • runtime: fix memory leak/growth when creating many runtimes (#3564)
    • runtime: mark EnterGuard with must_use (#3609)

    Documented

    • chore: mention fix for building docs in contributing guide (#3618)
    • doc: add link to PollSender (#3613)
    • doc: alias sleep to delay (#3604)
    • sync: improve Mutex FIFO explanation (#3615)
    • timer: fix double newline in module docs (#3617)
    Source code(tar.gz)
    Source code(zip)
  • tokio-stream-0.1.5(Mar 20, 2021)

  • tokio-util-0.6.5(Mar 20, 2021)

  • tokio-1.3.0(Mar 9, 2021)

    Added

    • coop: expose an unconstrained() opt-out (#3547)
    • net: add into_std for net types without it (#3509)
    • sync: add same_channel method to mpsc::Sender (#3532)
    • sync: add {try_,}acquire_many_owned to Semaphore (#3535)
    • sync: add back RwLockWriteGuard::map and RwLockWriteGuard::try_map (#3348)

    Fixed

    • sync: allow oneshot::Receiver::close after successful try_recv (#3552)
    • time: do not panic on timeout(Duration::MAX) (#3551)

    Documented

    • doc: doc aliases for pre-1.0 function names (#3523)
    • io: fix typos (#3541)
    • io: note the EOF behaviour of read_until (#3536)
    • io: update AsyncRead::poll_read doc (#3557)
    • net: update UdpSocket splitting doc (#3517)
    • runtime: add link to LocalSet on new_current_thread (#3508)
    • runtime: update documentation of thread limits (#3527)
    • sync: do not recommend join_all for Barrier (#3514)
    • sync: documentation for oneshot (#3592)
    • sync: rename notify to notify_one (#3526)
    • time: fix typo in Sleep doc (#3515)
    • time: sync interval.rs and time/mod.rs docs (#3533)
    Source code(tar.gz)
    Source code(zip)
  • tokio-1.2.0(Feb 6, 2021)

    Added

    • signal: make Signal::poll_recv method public (#3383)

    Fixed

    • time: make test-util paused time fully deterministic (#3492)

    Documented

    • sync: link to new broadcast and watch wrappers (#3504)
    Source code(tar.gz)
    Source code(zip)
  • tokio-1.1.1(Jan 29, 2021)

  • tokio-1.0.3(Jan 28, 2021)

  • tokio-1.1.0(Jan 22, 2021)

    Added

    • net: add try_read_buf and try_recv_buf (#3351)
    • mpsc: Add Sender::try_reserve function (#3418)
    • sync: add RwLock try_read and try_write methods (#3400)
    • io: add ReadBuf::inner_mut (#3443)

    Changed

    • macros: improve select! error message (#3352)
    • io: keep track of initialized bytes in read_to_end (#3426)
    • runtime: consolidate errors for context missing (#3441)

    Fixed

    • task: wake LocalSet on spawn_local (#3369)
    • sync: fix panic in broadcast::Receiver drop (#3434)

    Documented

    • stream: link to new Stream wrappers in tokio-stream (#3343)
    • docs: mention that test-util feature is not enabled with full (#3397)
    • process: add documentation to process::Child fields (#3437)
    • io: clarify AsyncFd docs about changes of the inner fd (#3430)
    • net: update datagram docs on splitting (#3448)
    • time: document that Sleep is not Unpin (#3457)
    • sync: add link to PollSemaphore (#3456)
    • task: add LocalSet example (#3438)
    • sync: improve bounded mpsc documentation (#3458)
    Source code(tar.gz)
    Source code(zip)
  • tokio-1.0.2(Jan 14, 2021)

  • tokio-1.0.1(Dec 25, 2020)

    This release fixes a soundness hole caused by the combination of RwLockWriteGuard::map and RwLockWriteGuard::downgrade by removing the map function. This is a breaking change, but breaking changes are allowed under our semver policy when they are required to fix a soundness hole. (See this RFC for more.)

    Note that we have chosen not to do a deprecation cycle or similar because Tokio 1.0.0 was released two days ago, and therefore the impact should be minimal.

    Due to the soundness hole, we have also yanked Tokio version 1.0.0.

    Removed

    • sync: remove RwLockWriteGuard::map and RwLockWriteGuard::try_map (#3345)

    Fixed

    • docs: remove stream feature from docs (#3335)
    Source code(tar.gz)
    Source code(zip)
  • tokio-1.0.0(Dec 23, 2020)

    Commit to the API and long-term support.

    Announcement and more details.

    Fixed

    • sync: spurious wakeup in watch (#3234).

    Changed

    • io: rename AsyncFd::with_io() to try_io() (#3306)
    • fs: avoid OS specific *Ext traits in favor of conditionally defining the fn (#3264).
    • fs: Sleep is !Unpin (#3278).
    • net: pass SocketAddr by value (#3125).
    • net: TcpStream::poll_peek takes ReadBuf (#3259).
    • rt: rename runtime::Builder::max_threads() to max_blocking_threads() (#3287).
    • time: require current_thread runtime when calling time::pause() (#3289).

    Removed

    • remove tokio::prelude (#3299).
    • io: remove AsyncFd::with_poll() (#3306).
    • net: remove {Tcp,Unix}Stream::shutdown() in favor of AsyncWrite::shutdown() (#3298).
    • stream: move all stream utilities to tokio-stream until Stream is added to std (#3277).
    • sync: mpsc try_recv() due to unexpected behavior (#3263).
    • tracing: make unstable as tracing-core is not 1.0 yet (#3266).

    Added

    • fs: poll_* fns to DirEntry (#3308).
    • io: poll_* fns to io::Lines, io::Split (#3308).
    • io: _mut method variants to AsyncFd (#3304).
    • net: poll_* fns to UnixDatagram (#3223).
    • net: UnixStream readiness and non-blocking ops (#3246).
    • sync: UnboundedReceiver::blocking_recv() (#3262).
    • sync: watch::Sender::borrow() (#3269).
    • sync: Semaphore::close() (#3065).
    • sync: poll_recv fns to mpsc::Receiver, mpsc::UnboundedReceiver (#3308).
    • time: poll_tick fn to time::Interval (#3316).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.3.6(Dec 17, 2020)

    Released December 14, 2020

    Fixed

    • rt: fix deadlock in shutdown (#3228)
    • rt: fix panic in task abort when off rt (#3159)
    • sync: make add_permits panic with usize::MAX >> 3 permits (#3188)
    • time: Fix race condition in timer drop (#3229)
    • watch: fix spurious wakeup (#3244)

    Added

    • example: add back udp-codec example (#3205)
    • net: add TcpStream::into_std (#3189)
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.3.5(Dec 2, 2020)

    Fixed

    • rt: fix shutdown_timeout(0) (#3196).
    • time: fixed race condition with small sleeps (#3069).

    Added

    • io: AsyncFd::with_interest() (#3167).
    • signal: CtrlC stream on windows (#3186).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.3.4(Nov 18, 2020)

    Fixed

    • stream: StreamMap Default impl bound (#3093).
    • io: AsyncFd::into_inner() should deregister the FD (#3104).

    Changed

    • meta: parking_lot feature enabled with full (#3119).

    Added

    • io: AsyncWrite vectored writes (#3149).
    • net: TCP/UDP readiness and non-blocking ops (#3130, #2743, #3138).
    • net: TCP socket option (linger, send/recv buf size) (#3145, #3143).
    • net: PID field in UCred with solaris/illumos (#3085).
    • rt: runtime::Handle allows spawning onto a runtime (#3079).
    • sync: Notify::notify_waiters() (#3098).
    • sync: acquire_many(), try_acquire_many() to Semaphore (#3067).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.23(Nov 12, 2020)

    Maintenance release.

    Fixes

    • time: report correct error for timers that exceed max duration (#2023)
    • time: fix resetting expired timers causing panics (#2587)
    • macros: silence unreachable_code warning in select! (#2678)
    • rt: fix potential leak during runtime shutdown (#2649)
    • sync: fix missing notification during mpsc close (#2854)

    Changes

    • io: always re-export std::io (#2606)
    • dependencies: update parking_lot dependency to 0.11.0 (#2676)
    • io: rewrite read_to_end and read_to_string (#2560)
    • coop: reset coop budget when blocking in block_on (#2711)
    • sync: better Debug for Mutex (#2725)
    • net: make UnixListener::poll_accept public (#2880)
    • dep: raise lazy_static to 1.4.0 (#3132)
    • dep: raise slab to 0.4.2 (#3132)

    Added

    • io: add io::duplex() as bidirectional reader/writer (#2661)
    • net: introduce split and into_split on UnixDatagram (#2557)
    • net: ensure that unix sockets have both split and into_split (#2687)
    • net: add try_recv/from & try_send/to to UnixDatagram (#1677)
    • net: Add UdpSocket::{try_send,try_send_to} methods (#1979)
    • net: implement ToSocketAddrs for (String, u16) (#2724)
    • io: add ReaderStream (#2714)
    • sync: implement map methods (#2771)
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.3.3(Nov 2, 2020)

    Fixes a soundness hole by adding a missing Send bound to Runtime::spawn_blocking().
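    The bound matters because the closure is executed on a different thread; `std::thread::spawn` enforces the same `Send` requirement, as this small sketch shows:

```rust
use std::thread;

fn main() {
    // Anything handed to another thread must be Send; std::thread::spawn
    // enforces this at compile time, and Runtime::spawn_blocking needs the
    // same bound for the same reason.
    let data = vec![1, 2, 3];
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);
}
```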

    Fixed

    • rt: include missing Send, fixing soundness hole (#3089).
    • tracing: avoid huge trace span names (#3074).

    Added

    • net: TcpSocket::reuseport(), TcpSocket::set_reuseport() (#3083).
    • net: TcpSocket::reuseaddr() (#3093).
    • net: TcpSocket::local_addr() (#3093).
    • net: add pid to UCred (#2633).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.3.2(Oct 27, 2020)

    Adds AsyncFd as a replacement for v0.2's PollEvented.

    Fixed

    • io: fix a potential deadlock when shutting down the I/O driver (#2903).
    • sync: RwLockWriteGuard::downgrade() bug (#2957).

    Added

    • io: AsyncFd for receiving readiness events on raw FDs (#2903).
    • net: poll_* function on UdpSocket (#2981).
    • net: UdpSocket::take_error() (#3051).
    • sync: oneshot::Sender::poll_closed() (#3032).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.3.1(Oct 21, 2020)

    This release fixes a use-after-free in the IO driver. Additionally, the read_buf and write_buf methods have been added back to the IO traits, as the bytes crate is now on track to reach version 1.0 together with Tokio.

    Fixed

    • net: fix use-after-free (#3019).
    • fs: ensure buffered data is written on shutdown (#3009).

    Added

    • io: copy_buf() (#2884).
    • io: AsyncReadExt::read_buf(), AsyncReadExt::write_buf() for working with Buf/BufMut (#3003).
    • rt: Runtime::spawn_blocking() (#2980).
    • sync: watch::Sender::is_closed() (#2991).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.3.0(Oct 15, 2020)

    This represents a 1.0 beta release. APIs are polished and future-proofed. APIs not included for 1.0 stabilization have been removed.

    Biggest changes are:

    • I/O driver internal rewrite. The Windows implementation includes significant changes.
    • Runtime API is polished, especially with how it interacts with feature flag combinations.
    • Feature flags are simplified
      • rt-core and rt-util are combined into rt
      • rt-threaded is renamed to rt-multi-thread to match the builder API
      • tcp, udp, uds, dns are combined into net.
      • parking_lot is included with full
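    Under the new names, a 0.3 dependency selecting the multi-threaded runtime and networking might be declared like this (illustrative Cargo.toml fragment, not taken from the release notes):

```toml
[dependencies]
# "rt" replaces rt-core/rt-util; "rt-multi-thread" replaces rt-threaded;
# "net" replaces the former tcp/udp/uds/dns flags.
tokio = { version = "0.3", features = ["rt", "rt-multi-thread", "net"] }
```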

    Changes

    • meta: Minimum supported Rust version is now 1.45.
    • io: AsyncRead trait now takes ReadBuf in order to safely handle reading into uninitialized memory (#2758).
    • io: Internal I/O driver storage is now able to compact (#2757).
    • rt: Runtime::block_on now takes &self (#2782).
    • sync: watch reworked to decouple receiving a change notification from receiving the value (#2814, #2806).
    • sync: Notify::notify is renamed to notify_one (#2822).
    • process: Child::kill is now an async fn that cleans zombies (#2823).
    • sync: use const fn constructors where possible (#2833, #2790)
    • signal: reduce cross-thread notification (#2835).
    • net: tcp,udp,uds types support operations with &self (#2828, #2919, #2934).
    • sync: blocking mpsc channel supports send with &self (#2861).
    • time: rename delay_for and delay_until to sleep and sleep_until (#2826).
    • io: upgrade to mio 0.7 (#2893).
    • io: AsyncSeek trait is tweaked (#2885).
    • fs: File operations take &self (#2930).
    • rt: runtime API, and #[tokio::main] macro polish (#2876)
    • rt: Runtime::enter uses an RAII guard instead of a closure (#2954).
    • net: the from_std function on all sockets no longer sets socket into non-blocking mode (#2893)

    Added

    • sync: map function to lock guards (#2445).
    • sync: blocking_recv and blocking_send fns to mpsc for use outside of Tokio (#2685).
    • rt: Builder::thread_name_fn for configuring thread names (#1921).
    • fs: impl FromRawFd and FromRawHandle for File (#2792).
    • process: Child::wait and Child::try_wait (#2796).
    • rt: support configuring thread keep-alive duration (#2809).
    • rt: task::JoinHandle::abort forcibly cancels a spawned task (#2474).
    • sync: RwLock write guard to read guard downgrading (#2733).
    • net: add poll_* functions that take &self to all net types (#2845)
    • sync: get_mut() for Mutex, RwLock (#2856).
    • sync: mpsc::Sender::closed() waits for Receiver half to close (#2840).
    • sync: mpsc::Sender::is_closed() returns true if Receiver half is closed (#2726).
    • stream: iter and iter_mut to StreamMap (#2890).
    • net: implement AsRawSocket on windows (#2911).
    • net: TcpSocket creates a socket without binding or listening (#2920).

    Removed

    • io: vectored ops are removed from AsyncRead, AsyncWrite traits (#2882).
    • io: mio is removed from the public API. PollEvented and Registration are removed (#2893).
    • io: remove bytes from public API. Buf and BufMut implementations are removed (#2908).
    • time: DelayQueue is moved to tokio-util (#2897).

    Fixed

    • io: stdout and stderr buffering on windows (#2734).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.22(Jul 21, 2020)

    This release introduces initial support for tracing instrumentation within the Tokio runtime, enabled by the "tracing" feature flag. In addition, it contains a number of bug fixes and API additions.

    Fixes

    • docs: misc improvements (#2572, #2658, #2663, #2656, #2647, #2630, #2487, #2621, #2624, #2600, #2623, #2622, #2577, #2569, #2589, #2575, #2540, #2564, #2567, #2520, #2521, #2493)
    • rt: allow calls to block_on inside calls to block_in_place that are themselves inside block_on (#2645)
    • net: fix non-portable behavior when dropping TcpStream OwnedWriteHalf (#2597)
    • io: improve stack usage by allocating large buffers directly on the heap (#2634)
    • io: fix unsound pin projection in AsyncReadExt::read_buf and AsyncWriteExt::write_buf (#2612)
    • io: fix unnecessary zeroing for AsyncRead implementors (#2525)
    • io: Fix BufReader not correctly forwarding poll_write_buf (#2654)
    • io: fix panic in AsyncReadExt::read_line (#2541)

    Changes

    • coop: returning Poll::Pending no longer decrements the task budget (#2549)

    Added

    • io: little-endian variants of AsyncReadExt and AsyncWriteExt methods (#1915)
    • task: add tracing instrumentation to spawned tasks (#2655)
    • sync: allow unsized types in Mutex and RwLock (via default constructors) (#2615)
    • net: add ToSocketAddrs implementation for &[SocketAddr] (#2604)
    • fs: add OpenOptionsExt for OpenOptions (#2515)
    • fs: add DirBuilder (#2524)

    Signed-off-by: Eliza Weisman [email protected]

    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.21(May 13, 2020)

    Bug fixes and API polish.

    Fixes

    • macros: disambiguate built-in #[test] attribute in macro expansion (#2503)
    • rt: LocalSet and task budgeting (#2462).
    • rt: task budgeting with block_in_place (#2502).
    • sync: release broadcast channel memory without sending a value (#2509).
    • time: notify when resetting a Delay to a time in the past (#2290).

    Added

    • io: get_mut, get_ref, and into_inner to Lines (#2450).
    • io: mio::Ready argument to PollEvented (#2419).
    • os: illumos support (#2486).
    • rt: Handle::spawn_blocking (#2501).
    • sync: OwnedMutexGuard for Arc<Mutex<T>> (#2455).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.20(Apr 28, 2020)

    Fixes

    • sync: broadcast closing the channel no longer requires capacity (#2448).
    • rt: regression when configuring runtime with max_threads less than the number of CPUs (#2457).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.19(Apr 24, 2020)

    Fixes

    • docs: misc improvements (#2400, #2405, #2414, #2420, #2423, #2426, #2427, #2434, #2436, #2440).
    • rt: support block_in_place in more contexts (#2409, #2410).
    • stream: no panic in merge() and chain() when using size_hint() (#2430).
    • task: include visibility modifier when defining a task-local (#2416).

    Added

    • rt: runtime::Handle::block_on (#2437).
    • sync: owned Semaphore permit (#2421).
    • tcp: owned split (#2270).
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.18(Apr 12, 2020)

    Fixes a regression with LocalSet that allowed !Send futures to cross threads.

    This change makes LocalSet !Send. The Send implementation was accidentally added in v0.2.14. Removing the Send implementation is not considered a breaking change as it fixes a soundness bug and the implementation was accidental.

    Fixes

    • task: LocalSet was incorrectly marked as Send (#2398)
    • io: correctly report WriteZero failure in write_int (#2334)
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.17(Apr 9, 2020)

    This release fixes a bug in the threaded scheduler that could result in panics under load (see #2382). Additionally, the default number of worker threads now uses the logical CPU count, so it will now respect scheduler affinity and cgroups CPU quotas.
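    The logical CPU count can be sketched with the standard library (Tokio itself used the num_cpus crate at the time; `std::thread::available_parallelism` is shown here purely as an illustrative stand-in):

```rust
use std::thread;

fn main() {
    // Logical CPU count as the standard library reports it; on Linux this
    // takes the process's CPU affinity mask into account, which is what
    // makes the default respect scheduler affinity.
    let n = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    assert!(n >= 1);
    println!("a work-stealing runtime would default to {} workers", n);
}
```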

    Fixes

    • rt: bug in work-stealing queue (#2387)

    Changes

    • rt: threadpool uses logical CPU count instead of physical by default (#2391)
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.16(Apr 3, 2020)

    This release fixes a regression in tokio::sync and a bug in tokio::fs::copy. It also adds new APIs to tokio::time and tokio::io.

    Fixes

    • sync: fix a regression where Mutex, Semaphore, and RwLock futures no longer implement Sync (#2375)
    • fs: fix fs::copy not copying file permissions (#2354)

    Added

    • time: added deadline method to delay_queue::Expired (#2300)
    • io: added StreamReader (#2052)
    Source code(tar.gz)
    Source code(zip)
  • tokio-0.2.15(Apr 2, 2020)

    Fixes a queue regression and adds a new disarm fn to mpsc::Sender.

    Fixes

    • rt: fix queue regression (#2362).

    Added

    • sync: Add disarm to mpsc::Sender (#2358).
    Source code(tar.gz)
    Source code(zip)
Owner
Tokio
Rust's asynchronous runtime.