A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...

Tokio

A runtime for writing reliable, asynchronous, and slim applications with the Rust programming language. It is:

  • Fast: Tokio's zero-cost abstractions give you bare-metal performance.

  • Reliable: Tokio leverages Rust's ownership, type system, and concurrency model to reduce bugs and ensure thread safety.

  • Scalable: Tokio has a minimal footprint, and handles backpressure and cancellation naturally.


Website | Guides | API Docs | Chat

Overview

Tokio is an event-driven, non-blocking I/O platform for writing asynchronous applications with the Rust programming language. At a high level, it provides a few major components:

  • A multithreaded, work-stealing based task scheduler.
  • A reactor backed by the operating system's event queue (epoll, kqueue, IOCP, etc...).
  • Asynchronous TCP and UDP sockets.

Together, these components provide the runtime building blocks necessary for writing an asynchronous application.

Example

A basic TCP echo server with Tokio:

use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;

    loop {
        let (mut socket, _) = listener.accept().await?;

        tokio::spawn(async move {
            let mut buf = [0; 1024];

            // In a loop, read data from the socket and write the data back.
            loop {
                let n = match socket.read(&mut buf).await {
                    // socket closed
                    Ok(n) if n == 0 => return,
                    Ok(n) => n,
                    Err(e) => {
                        eprintln!("failed to read from socket; err = {:?}", e);
                        return;
                    }
                };

                // Write the data back
                if let Err(e) = socket.write_all(&buf[0..n]).await {
                    eprintln!("failed to write to socket; err = {:?}", e);
                    return;
                }
            }
        });
    }
}

More examples can be found here. For a larger "real world" example, see the mini-redis repository.

To see a list of the available feature flags that can be enabled, check our docs.

Getting Help

First, see if the answer to your question can be found in the Guides or the API documentation. If the answer is not there, there is an active community in the Tokio Discord server. We would be happy to try to answer your question. You can also ask your question on the discussions page.

Contributing

🎈 Thanks for your help improving the project! We are so happy to have you! We have a contributing guide to help you get involved in the Tokio project.

Related Projects

In addition to the crates in this repository, the Tokio project also maintains several other libraries, including:

  • hyper: A fast and correct HTTP/1.1 and HTTP/2 implementation for Rust.

  • tonic: A gRPC over HTTP/2 implementation focused on high performance, interoperability, and flexibility.

  • warp: A super-easy, composable, web server framework for warp speeds.

  • tower: A library of modular and reusable components for building robust networking clients and servers.

  • tracing (formerly tokio-trace): A framework for application-level tracing and async-aware diagnostics.

  • rdbc: A Rust database connectivity library for MySQL, Postgres and SQLite.

  • mio: A low-level, cross-platform abstraction over OS I/O APIs that powers tokio.

  • bytes: Utilities for working with bytes, including efficient byte buffers.

  • loom: A testing tool for concurrent Rust code.

Supported Rust Versions

Tokio is built against the latest stable release. The minimum supported version is 1.45. The current Tokio version is not guaranteed to build on Rust versions earlier than the minimum supported version.

License

This project is licensed under the MIT license.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tokio by you, shall be licensed as MIT, without any additional terms or conditions.

Comments
  • Proposing new `AsyncRead` / `AsyncWrite` traits

    Proposing new `AsyncRead` / `AsyncWrite` traits

    This PR introduces new versions of the AsyncRead / AsyncWrite traits. The proposed changes aim to improve:

    • ergonomics
    • integration of vectored operations
    • working with uninitialized byte slices

    Overview

    The PR changes the AsyncRead and AsyncWrite traits to accept T: Buf and T: BufMut values instead of &[u8] and &mut [u8]. Because &[u8] implements Buf and &mut [u8] implements BufMut, the same calling patterns used today are still possible. Additionally, any type that implements Buf and BufMut may be used. This includes Cursor<&[u8]>, Bytes, ...

    Improvement in ergonomics

    Calls to read and write accept buffers, but do not necessarily use up the entirety of the buffer. Both functions return a usize representing the number of bytes read / written. Because of this, it is common to write loops such as:

    let mut rem = &my_data[..];
    while !rem.is_empty() {
        let n = my_socket.write(rem).await?;
        rem = &rem[n..];
    }
    

    The key point to notice is having to use the return value to update the position in the cursor. This is both common and error-prone. The Buf / BufMut traits aim to ease this by building the cursor concept directly into the buffer. By using these traits with AsyncRead / AsyncWrite, the above loop can be simplified as:

    let mut buf = Cursor::new(&my_data[..]);
    while buf.has_remaining() {
        my_socket.write(&mut buf).await?;
    }
    

    A small reduction in code, but it removes an error-prone bit of logic that must often be repeated.

    Integration of vectored operations

    In the AsyncRead / AsyncWrite traits provided by futures-io, vectored operations are covered using separate fns: poll_read_vectored and poll_write_vectored. These two functions have default implementations that call the non-vectored operations.

    This has a drawback: when implementing AsyncRead / AsyncWrite, usually as a layer on top of a type such as TcpStream, the implementor must not forget to implement these two additional functions. Otherwise, the implementation will not be able to use vectored operations even if the underlying TcpStream supports them. Secondly, it requires duplication of logic: one poll_read implementation and one poll_read_vectored implementation. It is possible to implement one in terms of the other, but this can result in sub-optimal implementations.

    Imagine a situation where a rope data structure is being written to a socket. This structure is comprised of many smaller byte slices (perhaps thousands). To write it efficiently to the socket, avoiding copying data is preferable. To do this, the byte slices need to be loaded in an IoSlice. Since modern Linux systems support a max of 1024 slices, we initialize an array of 1024 slices, iterate the rope to populate this array, and call poll_write_vectored. The problem is that, as the caller, we don't know whether the AsyncWrite type supports vectored operations or not, so poll_write_vectored is called optimistically. However, the implementation "forgot" to proxy its function to TcpStream, so poll_write is called with only the first entry in the IoSlice array. As a result, for each call to poll_write_vectored, we must iterate 1024 nodes in our rope only to have one chunk written at a time.

    By using T: Buf as the argument, the decision of whether or not to use vectored operations is left up to the leaf AsyncWrite type. Intermediate layers only implement poll_write with T: Buf and pass it along to the inner stream. The TcpStream implementation will know that it supports vectored operations, know how many slices it can write at a time, and do "the right thing".
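
    For reference, here is roughly what the caller side of a vectored write looks like with today's Tokio API (AsyncWriteExt::write_vectored and AsyncWrite::is_write_vectored); the peer address and the chunks input are made up for this sketch:

    use std::io::IoSlice;
    use tokio::io::{AsyncWrite, AsyncWriteExt};
    use tokio::net::TcpStream;

    // Sketch: write many small chunks with one vectored call when supported.
    async fn write_chunks(chunks: &[Vec<u8>]) -> std::io::Result<()> {
        let mut stream = TcpStream::connect("127.0.0.1:8080").await?; // hypothetical peer

        // The IoSlice array that poll_write_vectored ultimately receives.
        let slices: Vec<IoSlice<'_>> = chunks.iter().map(|c| IoSlice::new(c)).collect();

        if stream.is_write_vectored() {
            // One vectored call; it may still write fewer bytes than requested.
            let n = stream.write_vectored(&slices).await?;
            println!("wrote {} bytes in one vectored call", n);
        } else {
            // The leaf type does not support vectored writes; fall back.
            for chunk in chunks {
                stream.write_all(chunk).await?;
            }
        }
        Ok(())
    }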

    Working with uninitialized byte slices

    When passing buffers to AsyncRead, it is desirable to pass in uninitialized memory which the poll_read call will write to. This avoids the expensive step of zeroing out the memory (doing so has a measurable impact at the macro level). The problem is that uninitialized memory is "unsafe", so care must be taken.

    Tokio initially attempted to handle this by adding a prepare_uninitialized_buffer function. std is investigating adding a similar, though improved, variant of this API. However, over the years we have learned that the prepare_uninitialized_buffer API is sub-optimal for multiple reasons.

    First, the same problem applies as with vectored operations. If an implementation "forgets" to implement prepare_uninitialized_buffer, then all slices must be zeroed out before passing them to poll_read, even if the implementation does "the right thing" (does not read from uninitialized memory). In practice, most implementors end up forgetting to implement this function, resulting in memory being zeroed out.

    Secondly, implementations of AsyncRead that should not require unsafe to implement now must add unsafe simply to avoid having memory zeroed out.

    Switching the argument to T: BufMut solves this problem via the BufMut trait. First, BufMut provides low-level functions that return &mut [MaybeUninit<u8>]. Second, it provides utility functions that provide safe APIs for writing to the buffer (put_slice, put_u8, ...). Again, only the leaf AsyncRead implementations (TcpStream) must use the unsafe APIs. All layers may take advantage of uninitialized memory without the associated unsafety.
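
    For reference, today's Tokio exposes this idea on the read side through AsyncReadExt::read_buf, which accepts any BufMut; a minimal sketch (hypothetical peer address, with the bytes crate as a dependency):

    use bytes::BytesMut;
    use tokio::io::AsyncReadExt;
    use tokio::net::TcpStream;

    // Sketch: read into spare (uninitialized) capacity via the BufMut trait.
    async fn read_some() -> std::io::Result<BytesMut> {
        let mut stream = TcpStream::connect("127.0.0.1:8080").await?; // hypothetical peer
        let mut buf = BytesMut::with_capacity(4096);

        // read_buf appends to `buf` without zeroing the unused capacity first;
        // the unsafe MaybeUninit handling stays inside the leaf implementation.
        let n = stream.read_buf(&mut buf).await?;
        println!("read {} bytes", n);
        Ok(buf)
    }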

    Drawbacks

    The primary drawback is genericizing the AsyncRead and AsyncWrite traits. This adds complexity. We feel that the added benefits discussed above outweigh the drawbacks, but only trying it out will validate it.

    Relation to futures-io, std, and roadmap

    The relationship between Tokio's I/O traits and futures-io has come up a few times in the past. Tokio has historically maintained its own traits. futures-io has just been released with a simplified version of the traits. There is also talk of standardizing AsyncRead and AsyncWrite in std. Because of this, we believe that now is the perfect time to experiment with these traits. This will allow us to gain more experience before committing to traits in std.

    The next version of Tokio will not be 1.0. This allows us to experiment with these traits and remove them for 1.0 if they do not pan out.

    Once AsyncRead / AsyncWrite are added to std, Tokio will provide implementations for its types. Until then, tokio-util will provide a compatibility layer between Tokio types and futures-io.

    opened by seanmonstar 84
  • Structured Concurrency Support

    Structured Concurrency Support

    First of all a disclaimer: This issue is not yet a full proposal. This serves more as a collection of things to explore, and to gather feedback on interest.

    What is structured concurrency?

    Structured concurrency describes a programming paradigm. Concurrent tasks are structured in a fashion where there exist clean task hierarchies, and where the lifetime of all sub-tasks/child-tasks is constrained within the lifetime of their parent task.

    The term was likely brought up first by Martin Sustrik in this blog post, and was a guiding idea behind the libdill library. @njsmith utilized the term in Notes on structured concurrency, or: Go statement considered harmful, and designed the Python trio library around the paradigm. I highly recommend reading the blog post.

    The paradigm has also been adopted by Kotlin coroutines. @elizarov gave a talk at HydraConf about structured concurrency and the evolution of Kotlin's async task model, which I also highly recommend watching. It provides some hints on things to look out for, and on what APIs could look like. Kotlin's documentation around coroutines is also a good resource.

    Go adopted some support for structured concurrency with the errgroup package.

    Benefits of structured concurrency

    I again recommend checking out the linked resources, which also elaborate on this 😀

    In short: applying the structured concurrency paradigm can simplify reasoning about concurrent programs and thereby reduce errors. It helps prevent resource leaks, in the same fashion that RAII avoids leaks at the scope level. It might also allow for optimizations.

    Examples around error reductions and simplifications

    Here is one motivating example of how structured concurrency can simplify things:

    We are building a web application A, which is intended to handle at least 1000 transactions per second. Internally, each transaction requires a few concurrent interactions, which will involve reaching out to remote services. When one of those transactions fails, we need to perform certain actions. E.g. we need to call another service B for a cleanup or rollback. Without structured concurrency, we might have the idea to just do spawn(cleanup_task()) in order to do this. While this works, it has a side effect: cleanup tasks might still be running while the main webservice handler has already terminated. This sounds harmless at first, but can have surprising consequences: we obviously want our services to be resilient against overloads, so we limit the number of concurrent requests to 2000 via an async Semaphore. This works fine for our main service handler. But what happens if lots of transactions are failing? How many cleanup tasks can run at the same point in time? The answer is, unfortunately, that their number is effectively unbounded. Thereby our service can be overloaded through queued-up cleanup tasks - even though we protected ourselves against too many concurrent API calls. This can lead to large-scale outages in distributed systems.

    By making sure all cleanup logic is performed inside the lifetime/scope of the main service handler, we can guarantee that the number of cleanup tasks is also bounded by our Semaphore.
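
    For illustration, a minimal sketch of the difference (do_transaction, cleanup, and the per-handler Semaphore are hypothetical stand-ins):

    use std::sync::Arc;
    use tokio::sync::Semaphore;

    async fn do_transaction() -> Result<(), ()> { Err(()) }
    async fn cleanup() {}

    async fn handle_transaction(limit: Arc<Semaphore>) -> Result<(), ()> {
        // The permit bounds how many handlers run at once.
        let _permit = limit.acquire().await.unwrap();
        match do_transaction().await {
            ok @ Ok(_) => ok,
            Err(e) => {
                // Unstructured: `tokio::spawn(cleanup())` would let the cleanup
                // outlive this handler and escape the Semaphore's bound.
                // Structured: awaiting keeps it inside the handler's scope, so
                // the permit is still held while the cleanup runs.
                cleanup().await;
                Err(e)
            }
        }
    }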

    Another example could be applying configuration changes at runtime: while our service is running, we want to be able to update its configuration. After the configuration change is applied, no transaction should still be utilizing the old configuration. What we need to do now is:

    • Disable the acceptor in order to drain requests before we can apply the config change
    • Wait for all ongoing transactions to complete
    • Cancel transactions if they take too long to complete.
    • Update the configuration
    • Restart the acceptor

    Without a structured approach to concurrency, this is a much more complicated problem than it sounds. Any old transaction might have spawned a subtask which might still be executing after we have updated the configuration. There is no easy way for the higher-level code to check whether everything has finished.

    Potential for optimizations

    The application of structured concurrency might allow for optimizations. E.g. we might be able to allow subtasks to borrow data inside the parent task's scope without the need for additional heap allocations. Since the exact mechanisms are not yet designed, however, the exact potential is unknown.

    Core requirements

    I think the core requirements for structured concurrency are:

    • A parent task will only finish once all child tasks have finished
    • When tasks are spawned, they need to be spawned in the context of a parent task. The parent needs to remember its child tasks
    • Parent tasks need to have a mechanism to cancel child tasks
    • Errors in child tasks should lead the parent task to return an error as soon as possible, and all sibling tasks to get cancelled. This behavior is equivalent to the behavior of the try_join! macro in futures-rs.

    Regarding the last point, I am not sure whether automatic error propagation is a required part of structured concurrency and whether it can be achieved on a task level, but it definitely makes things easier.
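
    For reference, a minimal sketch of that try_join! behaviour (the two child functions are made up for the example):

    use std::time::Duration;

    async fn fails_fast() -> Result<(), &'static str> { Err("boom") }
    async fn runs_long() -> Result<(), &'static str> {
        tokio::time::sleep(Duration::from_secs(60)).await;
        Ok(())
    }

    #[tokio::main]
    async fn main() {
        // When one branch fails, try_join! returns that error right away and
        // the remaining, still-pending branch is dropped (i.e. cancelled).
        let res = tokio::try_join!(fails_fast(), runs_long());
        assert_eq!(res, Err("boom")); // returns well before the 60-second sleep
    }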

    Do we actually need to have builtin support for this?

    Rust's async/await mechanism already provides structured concurrency inside a particular task: by utilizing tools like select! or join! we can run multiple child tasks which are constrained to the same lifetime - which is the current scope. This is not possible in Go or Kotlin - which require an explicit child task to be spawned to achieve this behavior. Therefore the benefits might be lower.
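
    A minimal sketch of that in-task form (the sleeps stand in for real work):

    use std::time::Duration;
    use tokio::time::sleep;

    // Both child futures run concurrently but cannot outlive this scope.
    async fn fetch_both() -> (u32, u32) {
        let a = async { sleep(Duration::from_millis(10)).await; 1 };
        let b = async { sleep(Duration::from_millis(10)).await; 2 };
        tokio::join!(a, b)
    }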

    I built some examples in futures-intrusive around those mechanisms.

    However, the concurrency inside a single task will not scale very well, due to requiring polling of all child Futures. Therefore real concurrent tasks will be required in most applications.

    On this level we have a set of tools in our toolbox that allow us to structure our current tasks manually:

    • Parent tasks can wait for child tasks to join via the use of Oneshot channels or the new JoinHandles
    • Parent tasks can forcefully cancel child tasks by just dropping them
    • Parent tasks can gracefully cancel child tasks by passing a cancellation signal.

    However these tools all require explicit code in order to guarantee correctness. Builtin support for structured concurrency could improve on usability and allow more developers to use good and correct defaults.
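
    For illustration, roughly what that explicit plumbing looks like with today's APIs (the child's work loop is a made-up placeholder):

    use std::time::Duration;
    use tokio::sync::oneshot;

    #[tokio::main]
    async fn main() {
        let (cancel_tx, mut cancel_rx) = oneshot::channel::<()>();

        let child = tokio::spawn(async move {
            loop {
                tokio::select! {
                    _ = &mut cancel_rx => break, // graceful cancellation signal
                    _ = tokio::time::sleep(Duration::from_millis(50)) => {
                        // ... one unit of child work ...
                    }
                }
            }
        });

        let _ = cancel_tx.send(()); // ask the child to stop
        child.await.unwrap();       // the parent only finishes after the child does
    }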

    And as mentioned earlier, I think builtin support could also allow for new usages, e.g. borrowing inside child tasks or potential scheduler improvements when switching between child tasks and parent tasks.

    The following posts are now mainly a braindump around how these requirements could be fulfilled and how they align with existing mechanisms.

    opened by Matthias247 50
  • Add initial named pipes support

    Add initial named pipes support

    This PR adds basic named pipes support.

    Motivation

    blackbeam/mysql_async#132

    Solution

    Implementation directly uses miow::NamedPipeBuilder to avoid making a PR for mio.

    It provides two public types:

    • NamedPipeServer – which is a server implementation with the familiar accept() method.
    • NamedPipe – which represents a client or server connection.

    NamedPipeServer will hold at least one free instance of a pipe to maintain its existence. A Mutex is used within accept() (only for a mem::swap) to keep it shared. NamedPipeServer::new() will create the first instance with the FILE_FLAG_FIRST_PIPE_INSTANCE flag to avoid a named pipe instance creation race condition.

    NamedPipe wraps mio::NamedPipe and provides the connect() method for client-side connections. connect() will wait for a server instance using an approach similar to the one used in .NET, namely it'll call WaitNamedPipe with the default timeout using spawn_blocking (I couldn't find a better solution). Unlike .NET, connect() won't wait if the pipe doesn't exist and will error immediately.

    @udoprog, regarding disconnect() - NamedPipe doesn't provide it, since it won't play well with NamedPipeServer; one should simply drop an instance. Also, I don't think that we should call it in poll_shutdown.

    @fussybeaver, regarding security_qos_flags - the implementation unconditionally adds SECURITY_IDENTIFICATION.

    Other thoughts

    Maybe it's too high level.

    Related issues

    #3118

    A-tokio M-net 
    opened by blackbeam 37
  • how to implement stream r/w in parallel?

    how to implement stream r/w in parallel?

    As is well known, due to ownership we cannot read from and write to the stream at the same time.

    The split in tokio seems like a fake split, because it uses a mutex for reads and writes.

    Behind the scenes, split ensures that if we both try to read and write at the same time, only one of them happens at a time.

    Although we do not block on the syscall, which is precisely the goal of async programming, the syscall itself is serialized by that mutex, i.e. while a read() syscall is in progress, we cannot issue a write() syscall. That seems very wasteful, with an obvious performance impact. At the syscall level, the read and write could run in parallel, which is meaningful for full-duplex application protocols (e.g. HTTP/1.1 with pipelining and HTTP/2).

    Could we do a real split? For example, we could clone mio::net::TcpStream (https://docs.rs/mio/0.6.18/mio/net/struct.TcpStream.html#method.try_clone), but share the registration and other state. That way, we would have a real zero-cost abstraction.
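
    For reference, current Tokio also offers TcpStream::into_split, which yields owned halves that can be driven from separate tasks; whether the underlying syscalls may overlap is exactly the question raised here. A minimal sketch (the peer address is hypothetical):

    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpStream;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let stream = TcpStream::connect("127.0.0.1:8080").await?;
        let (mut rd, mut wr) = stream.into_split();

        // The write half moves into its own task; the read half stays here.
        let writer = tokio::spawn(async move { wr.write_all(b"hello").await });

        let mut buf = vec![0u8; 1024];
        let n = rd.read(&mut buf).await?;
        println!("read {} bytes", n);

        writer.await.unwrap()?;
        Ok(())
    }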

    C-enhancement A-tokio M-net 
    opened by kingluo 37
  • Support Webassembly

    Support Webassembly

    Hey, please provide WASM support for this library. Networking is one of the important features for working with Rust + WASM code. Not sure if there is a tracker for this. Happy to contribute if there is.

    Thanks

    A-tokio C-feature-request T-wasm 
    opened by AchalaSB 36
  • Lost wakeups in threadpool

    Lost wakeups in threadpool

    It has been reported (https://github.com/rust-lang-nursery/futures-rs/issues/1170) that there have been lost wake ups.

    Downgrading to threadpool 0.1.4 reportedly solves the problem. #459 is the only meaningful change.

    opened by carllerche 35
  • Guarantee that `File::write_all` writes all data (or at least tries)

    Guarantee that `File::write_all` writes all data (or at least tries)

    Motivation

    Ref: https://github.com/tokio-rs/tokio/issues/4296 https://github.com/tokio-rs/tokio/issues/4296#issuecomment-986005787

    In some cases, one could time a successfully awaited call to write_all (or write) to coincide with a shutdown of the runtime, and the write would not even be attempted. This can be a bit surprising.

    The purpose of this PR is to find a way (if possible) to fix that. There would be no guarantee that the write actually succeeds (any OS error could be hit at the time the write actually gets executed), but at least it would be attempted.

    Solution

    I have found a sequence of events that leads to spawn_blocking tasks being "ignored". I've written a note about it in a comment. I'm not sure if it's intentional that we won't try draining the queue of blocking tasks before shutting down. Couldn't we tweak the shutdown logic to execute all tasks that were scheduled before the call to shutdown?
    If users are concerned about shutting down the runtime taking a long time because of blocking tasks, they can call https://docs.rs/tokio/latest/tokio/runtime/struct.Runtime.html#method.shutdown_timeout or shutdown_background.
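
    For illustration, a minimal sketch of the shutdown_timeout knob mentioned above (a standalone runtime with made-up blocking work):

    use std::time::Duration;
    use tokio::runtime::Runtime;

    fn main() {
        let rt = Runtime::new().unwrap();
        rt.spawn_blocking(|| {
            // e.g. a queued file write that has not started yet
            std::thread::sleep(Duration::from_secs(10));
        });
        // Wait at most one second for outstanding blocking tasks, then give up.
        rt.shutdown_timeout(Duration::from_secs(1));
    }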

    The documentation in https://docs.rs/tokio/latest/tokio/runtime/struct.Runtime.html#shutdown says:

    The current thread will block until the shut down operation has completed.
    • Drain any scheduled work queues.

    So it should already be expected that shutting down a runtime could block to some extent?

    Do you think it would make sense to change the shutdown logic to execute all pending tasks? If so, I can figure out how to do the code change.

    A-tokio M-fs M-runtime M-task R-loom 
    opened by BraulioVM 31
  • Added map associated function to MutexGuard that uses new MappedMutexGuard type

    Added map associated function to MutexGuard that uses new MappedMutexGuard type

    Part of #2471, extends the work of #2445. Both this PR and #2445 together close #2471.

    This PR introduces the MappedMutexGuard type and adds a map associated function to MutexGuard. The work here is largely based on #2445, so thanks @udoprog for the great PR. This was very easy to implement using your work as a reference.

    MappedMutexGuard works almost exactly the same as parking_lot::MappedMutexGuard, but adapted to the internals of tokio::sync::Mutex. The MappedMutexGuard type stores a reference to the semaphore from the original Mutex as well as a raw pointer *mut T that is the result of calling the function passed to map. The Mutex does not hold a mutable reference to its data (it uses an UnsafeCell), so I do not believe that it is possible to accidentally run into any aliasing issues.

    I added documentation based on the work in #2445 and the documentation in parking_lot/lock_api. There are doctests in both map and try_map that are almost exactly the same as the ones in #2445 for RwLockWriteGuard. I generated the docs with cargo doc and everything looks great. :tada:
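
    For reference, a minimal sketch of the added API (the State type is made up for the example):

    use tokio::sync::{Mutex, MutexGuard};

    struct State { counter: u32 }

    #[tokio::main]
    async fn main() {
        let mutex = Mutex::new(State { counter: 0 });

        let guard: MutexGuard<'_, State> = mutex.lock().await;
        // Map the guard down to one field; the lock stays held.
        let mut counter = MutexGuard::map(guard, |state| &mut state.counter);
        *counter += 1; // MappedMutexGuard dereferences to the mapped field
    }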

    C-enhancement A-tokio M-sync 
    opened by sunjay 31
  • Consider defaulting Tokio features to off instead of on.

    Consider defaulting Tokio features to off instead of on.

    As per #1318, Tokio has been merged into a single crate and components are split by feature flag. Now, with all features enabled, the tokio crate is quite heavy.

    Regardless of the direction, two things will happen:

    • documentation: the required feature flag for all components will be documented similar to how Tonic does this (see transport module here: https://docs.rs/tonic/0.1.0-alpha.6/tonic/).
    • meta feature: A full feature flag will be provided that enables all Tokio feature flags.

    The question is: should the default include all features, no features, or some features?

    Default on

    One of the main drawbacks mentioned in #1318 was that, when features are enabled by default, libraries will accidentally depend on more features than necessary. Doing so will force these features to be enabled by consumers of the library. Also, the end user can accidentally use features that were enabled by the dependency. When the dependency changes its feature flags, the application breaks as the required features are no longer available.

    Default off

    An alternative would be to default to no feature flags enabled by default. In this case, depending on tokio will only enable core traits (AsyncRead, AsyncWrite, ToSocketAddrs) and an empty Runtime type that doesn't do much when used. Getting started guides, examples, the README would instruct users to depend on tokio as:

    tokio = { version = "0.2.0", features = ["full"] }
    

    Libraries will be instructed to pick only the features they require. The primary drawback of this strategy is that it adds a bump to the getting started flow.

    Default some

    A middle ground would be to define a subset of features that should be enabled by default. It is unclear how to pick the features to enable by default as different Tokio users use significantly different feature sets. Because the choice is arbitrary, the end user will have no way to intuit if a feature is enabled by default or not.

    C-proposal 
    opened by carllerche 29
  • rfc: collapse Tokio sub crates into single `tokio` crate

    rfc: collapse Tokio sub crates into single `tokio` crate

    There has been frustration among Tokio users regarding the number of crates pulled in when depending on Tokio. Here is an opportunity to discuss an alternative strategy. By doing this RFC, users who are happy with the current situation may express this.

    Summary

    Do not maintain tokio-* sub crates, instead all Tokio code will exist in a single tokio crate and components are enabled or disabled using feature flags.

    For example, depending on only the timer functionality could be done with:

    tokio = { version = "0.2.0", default-features = false, features = [ "timer" ] }
    

    By default, tokio would have the same components enabled as it does today.

    Motivation

    Maintaining a large number of crates comes with an increased maintainership burden. Maintaining correct dependencies between crates is complex. Users feel that a large number of dependencies == bloat. Additional rationale can be found here.

    Details

    Tokio must maintain semver stability of its core APIs. This includes traits as well as some types, such as TcpStream. Tokio would like to be able to release breaking changes to less fundamental APIs without having to break the entire Tokio ecosystem.

    Currently, Tokio achieves this goal by breaking up all the various components into individual crates. Doing this allows less stable components to release breaking changes without touching stable components. However, this strategy has drawbacks (see Motivation section).

    In this proposal, all Tokio components would be moved into a single crate. Each component would have an associated feature flag, similar to how Tokio does it today.

    Not much would change for application developers, they would still just depend on tokio and enable / disable feature flags as needed. Library developers would no longer depend on sub crates. Instead, they would depend on tokio and only pull in the features that they need.

    Type stability

    Core types can maintain stability between breaking semver releases. For example, if the TcpStream type does not change between Tokio version 0.2 and Tokio version 0.3, then the following steps would be taken to release 0.3:

    • Release tokio 0.3
    • Update tokio 0.2 to depend on tokio 0.3.
    • Replace the implementation of TcpStream in 0.2 by re-exporting the implementation from 0.3.
    • Release a new patch version for 0.2 including the re-exported TcpStream type from 0.3.

    By doing this, TcpStream from 0.2 and 0.3 are the same type.

    Drawbacks

    • The breaking change release process becomes more complicated as all untouched types must be re-exported in the old version.
    • If a user does not update to the 0.2 patch release in the above scenario, they can end up with both 0.2 and 0.3.

    Alternatives

    Continue to release new crates for each component.

    C-proposal 
    opened by carllerche 28
  • net: Provide a raw NamedPipe implementation and builders

    net: Provide a raw NamedPipe implementation and builders

    This introduces a couple of new types:

    • tokio::net::windows::NamedPipe
    • tokio::net::windows::NamedPipeBuilder
    • tokio::net::windows::NamedPipeClientBuilder
    • tokio::net::windows::PipeMode

    It is based on a suggestion I made in #3388, which is to try and push the exported types to the minimum level of abstraction necessary to enable their use. This is also a working proposal for #3511.

    It is also based on the API proposed by @Darksonn here. With some removals and changes.

    In particular I try to:

    • Provide as close to a 1:1 mapping as possible between the exported functions and their corresponding system calls, each documented with a link to MSDN.
    • Where it's supported and makes sense, make those functions async. WaitNamedPipe unfortunately only provides a blocking API, so it uses the asyncify approach that tokio::fs does and blocks in a worker thread (see comment).

    This provides two builders, because named pipe clients and servers are constructed differently: clients through CreateFile, and servers (the one creating the named pipe) through CreateNamedPipe. I decided to call the one creating the named pipe NamedPipeBuilder, although naming is preliminary.

    For how to use, see the included documentation and examples.

    Motivation

    Providing named pipes has been stuck for a while, because the current proposal implements a high-level model. This PR ports part of it into a lower-level NamedPipe type which can either be used directly or to build higher-level APIs like the one suggested in #3388. Once this lands, that work can happen outside the project, where it can be more easily experimented with, e.g. in tokio-util.

    Solution

    This tries to implement the bare minimum async wrapping necessary to use named pipes while providing a minimum level of ergonomics and sanity checking.

    Note that even if we don't end up going with exporting low level APIs like this, hopefully this can be used as a cleaner internal API to build something else.

    Builders taking self

    I modified the builders to take self at one point instead of &mut self, because the following pattern seems more common when building named pipes:

    let server_builder = NamedPipeBuilder::new("\\\\.\\pipe\\mypipe")
        .first_pipe_instance(true);
    
    // somewhere else
    
    loop {
        let pipe = server_builder.create()?;
        pipe.connect().await?;
    }
    

    And if it returned &mut Self, it wouldn't be as convenient to use the functional style. But feel free to provide your input.

    A-tokio M-net 
    opened by udoprog 27
  • Feat: Adding a Wasm / WasmEdge target

    Feat: Adding a Wasm / WasmEdge target

    Is your feature request related to a problem? Please describe.

    Currently, when developers compile their tokio applications to Wasm, the applications are going to fail at runtime due to Wasm's lack of support for standard networking APIs. We would like to implement a feature flag for tokio to support the wasm32-wasi compiler target -- specifically for the WasmEdge Runtime.

    Describe the solution you'd like

    We would like to start an effort to merge our fork upstream to tokio's main repo.

    Describe alternatives you've considered

    Please see the background and our research below.

    Additional context

    One of the emerging use cases of WebAssembly (Wasm) is a lightweight and secure alternative to Linux containers. Leading container management tools, such as Docker, containerd, Podman / crun, and Kubernetes, have all supported Wasm containers, and in particular WasmEdge, for this reason.

    However, to run containerized microservices and serverless functions in Wasm, we need advanced networking sockets in the Wasm runtime. By "advanced", I mean non-blocking and async sockets. Ideally, we would have a Wasm compiler target in tokio so that when tokio applications are compiled for the wasm32-wasi target, they will utilize Wasm socket APIs. Yet, the official Wasm standards have been slow to incorporate sockets.

    To support users and customers who wish to use tokio and Rust networking apps in Wasm today, the WasmEdge community decided to create its own libc-like socket API in Wasm. WasmEdge is the default Wasm runtime distributed with Docker Desktop and Fedora / Red Hat EPEL. Its networking socket API can reach a wide developer audience.

    We forked tokio to support the WasmEdge socket API when compiling to wasm32-wasi. The approach has gained some traction:

    Example 1: This is a popular microservice template for an HTTP service backed by a MySQL server. Both the HTTP server (using forked hyper) and MySQL client (using forked mysql_async) in the WasmEdge app are based on the tokio fork.

    https://github.com/second-state/microservice-rust-mysql

    Example 2: This is a port of the popular Dapr SDK to Wasm. It utilizes a forked reqwest on top of the forked tokio to perform HTTP API calls.

    https://github.com/second-state/dapr-sdk-wasi https://github.com/second-state/dapr-wasm

    Forking tokio is only a temporary solution to demonstrate demands for this type of application. Now we would like to merge the fork back into tokio so that downstream crates that depend on tokio could also benefit.

    Specifically, we would like to propose the following:

    • Add a feature flag wasmedge
    • When the feature is turned on AND the compiler target is wasm32-wasi, tokio will use WasmEdge sockets.

    Please let me know your thoughts and happy new year!

    A-tokio C-feature-request T-wasm 
    opened by juntao 7
  • metrics: Fix steal_count description and add steal_operations metric

    metrics: Fix steal_count description and add steal_operations metric

    Motivation

    The documentation of steal_count does not match what is actually counted.

    Solution

    Fix the documentation of steal_count. Additionally, add steal_operations as a metric to track what steal_count was originally documented to count. Having both metrics allows us to see some effects of different stealing policies on work-stealing.

    Closes #5281

    A-tokio M-metrics R-loom 
    opened by jschwe 0
  • tinyhttp example panics against curl request

    tinyhttp example panics against curl request

    Version

    $ cargo tree | grep tokio
    benches v0.0.0 (/home/ivan/tmp/envoy/tokio/benches)
    └── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio)
        └── tokio-macros v1.8.2 (proc-macro) (/home/ivan/tmp/envoy/tokio/tokio-macros)
            └── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
        ├── tokio-stream v0.1.11 (/home/ivan/tmp/envoy/tokio/tokio-stream)
        │   └── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
        │   ├── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
        │   └── tokio-test v0.4.2 (/home/ivan/tmp/envoy/tokio/tokio-test)
        │       ├── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
        │       └── tokio-stream v0.1.11 (/home/ivan/tmp/envoy/tokio/tokio-stream) (*)
        │       └── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
        └── tokio-test v0.4.2 (/home/ivan/tmp/envoy/tokio/tokio-test) (*)
    ├── tokio-stream v0.1.11 (/home/ivan/tmp/envoy/tokio/tokio-stream) (*)
    └── tokio-util v0.7.4 (/home/ivan/tmp/envoy/tokio/tokio-util)
        ├── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
        ├── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
        ├── tokio-stream v0.1.11 (/home/ivan/tmp/envoy/tokio/tokio-stream) (*)
        └── tokio-test v0.4.2 (/home/ivan/tmp/envoy/tokio/tokio-test) (*)
    examples v0.0.0 (/home/ivan/tmp/envoy/tokio/examples)
    ├── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
    ├── tokio-stream v0.1.11 (/home/ivan/tmp/envoy/tokio/tokio-stream) (*)
    ├── tokio-util v0.7.4 (/home/ivan/tmp/envoy/tokio/tokio-util) (*)
    stress-test v0.1.0 (/home/ivan/tmp/envoy/tokio/stress-test)
    └── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
    tests-build v0.1.0 (/home/ivan/tmp/envoy/tokio/tests-build)
    tests-integration v0.1.0 (/home/ivan/tmp/envoy/tokio/tests-integration)
    └── tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
    tokio v1.23.0 (/home/ivan/tmp/envoy/tokio/tokio) (*)
    tokio-macros v1.8.2 (proc-macro) (/home/ivan/tmp/envoy/tokio/tokio-macros) (*)
    tokio-stream v0.1.11 (/home/ivan/tmp/envoy/tokio/tokio-stream) (*)
    tokio-test v0.4.2 (/home/ivan/tmp/envoy/tokio/tokio-test) (*)
    tokio-util v0.7.4 (/home/ivan/tmp/envoy/tokio/tokio-util) (*)
    

    Platform

    $ uname -a
    Linux ivan-ThinkPad-T470 5.4.0-135-generic #152-Ubuntu SMP Wed Nov 23 20:19:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    Description

    tinyhttp example panics against HTTP requests

    running as :

    $ RUST_BACKTRACE=1 cargo run --example tinyhttp
        Finished dev [unoptimized + debuginfo] target(s) in 0.08s
         Running `target/debug/examples/tinyhttp`
    Listening on: 127.0.0.1:8080
    thread 'tokio-runtime-worker' panicked at 'attempt to subtract with overflow', examples/tinyhttp.rs:172:29
    stack backtrace:
       0: rust_begin_unwind
                 at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:584:5
       1: core::panicking::panic_fmt
                 at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:142:14
       2: core::panicking::panic
                 at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:48:5
       3: <tinyhttp::Http as tokio_util::codec::decoder::Decoder>::decode::{{closure}}
                 at ./examples/tinyhttp.rs:172:29
       4: <tinyhttp::Http as tokio_util::codec::decoder::Decoder>::decode
                 at ./examples/tinyhttp.rs:184:17
       5: <tokio_util::codec::framed_impl::FramedImpl<T,U,R> as futures_core::stream::Stream>::poll_next
                 at ./tokio-util/src/codec/framed_impl.rs:203:38
       6: <tokio_util::codec::framed::Framed<T,U> as futures_core::stream::Stream>::poll_next
                 at ./tokio-util/src/codec/framed.rs:301:9
       7: <&mut S as futures_core::stream::Stream>::poll_next
                 at /home/ivan/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-core-0.3.25/src/stream.rs:104:9
       8: <tokio_stream::stream_ext::next::Next<St> as core::future::future::Future>::poll
                 at ./tokio-stream/src/stream_ext/next.rs:42:9
       9: tinyhttp::process::{{closure}}
                 at ./examples/tinyhttp.rs:49:47
    ...
    

    curl request:

    $ curl -v localhost:8080/plaintext
    *   Trying 127.0.0.1:8080...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 8080 (#0)
    > GET /plaintext HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.68.0
    > Accept: */*
    > 
    * Empty reply from server
    * Connection #0 to host localhost left intact
    curl: (52) Empty reply from server
    
    
    C-bug A-examples 
    opened by izderadicka 2
  • Performance reading from serial ports suffers due to excessive read() calls

    Performance reading from serial ports suffers due to excessive read() calls

    Version

    $ cargo tree | grep tokio | sed -e 's/.*\(tokio.*v[^ ]*\).*/\1/' | sort | uniq
    tokio-io-timeout v1.2.0
    tokio-macros v1.8.2
    tokio-serial v5.4.4
    tokio-stream v0.1.11
    tokio-util v0.7.4
    tokio v1.23.0
    

    Platform

    $ uname -a
    Linux rk-dell 5.15.0-56-generic #62-Ubuntu SMP Tue Nov 22 19:54:14 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    Description

    When reading from a serial port using AsyncFd in O_NONBLOCK mode, an extra read() call is performed:

    read(16, "1", 3013)                     = 1
    read(16, 0x55cc4763f34c, 3012)          = -1 EAGAIN (Resource temporarily unavailable)
    epoll_wait(3, [{events=EPOLLIN|EPOLLOUT, data={u32=2, u64=2}}], 1024, -1) = 1
    

    One can assume that if the buffer didn't fill up, the following read won't succeed, so the code should proceed directly to epoll_wait instead of attempting the read and having it fail. Note that some serial devices will never return more than a small buffer (say 64 bytes), so I would encourage this optimization to only take place when a single character is read.

    This was fixed for non-AsyncFd reads here: tokio-rs PR-4970 tokio-rs PR-4840

    C-bug A-tokio M-io 
    opened by rkuris 6
  • Support fifo pipes on unix

    Support fifo pipes on unix

    Currently, we provide the UnixStream type for reading from a unix domain socket on Linux, but Linux also provides pipes. Domain sockets and pipes are not the same (see here). Furthermore, opening a named pipe with UnixStream::connect will fail with an error. Currently the only way to asynchronously read from a pipe is to use AsyncFd.

    I would like to see a type in tokio::net for use with fifo pipes.
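
    For reference, a rough sketch of that AsyncFd workaround (Unix-only; assumes the libc crate, and a hypothetical FIFO path that already exists, e.g. created with `mkfifo /tmp/my_fifo`):

    use std::fs::OpenOptions;
    use std::io::Read;
    use std::os::unix::fs::OpenOptionsExt;
    use tokio::io::unix::AsyncFd;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let fifo = OpenOptions::new()
            .read(true)
            .custom_flags(libc::O_NONBLOCK) // don't block waiting for a writer
            .open("/tmp/my_fifo")?;
        let fd = AsyncFd::new(fifo)?;

        let mut buf = [0u8; 1024];
        loop {
            let mut guard = fd.readable().await?;
            match guard.try_io(|inner| inner.get_ref().read(&mut buf)) {
                Ok(Ok(0)) => break,            // all writers closed the FIFO
                Ok(Ok(n)) => println!("read {} bytes", n),
                Ok(Err(e)) => return Err(e),
                Err(_would_block) => continue, // readiness was a false positive
            }
        }
        Ok(())
    }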

    E-help-wanted A-tokio M-net C-feature-request 
    opened by Darksonn 4
Releases(tokio-1.23.0)
  • tokio-1.23.0(Dec 5, 2022)

    Fixed

    • net: fix Windows named pipe connect (#5208)
    • io: support vectored writes for ChildStdin (#5216)
    • io: fix async fn ready() false positive for OS-specific events (#5231)

    Changed

    • runtime: yield_now defers task until after driver poll (#5223)
    • runtime: reduce amount of codegen needed per spawned task (#5213)
    • windows: replace winapi dependency with windows-sys (#5204)
  • tokio-1.22.0(Nov 18, 2022)

    Added

    • runtime: add Handle::runtime_flavor (#5138)
    • sync: add Mutex::blocking_lock_owned (#5130)
    • sync: add Semaphore::MAX_PERMITS (#5144)
    • sync: add merge() to semaphore permits (#4948)
    • sync: add mpsc::WeakUnboundedSender (#5189)

    Added (unstable)

    • process: add Command::process_group (#5114)
    • runtime: export metrics about the blocking thread pool (#5161)
    • task: add task::id() and task::try_id() (#5171)

    Fixed

    • macros: don't take ownership of futures in macros (#5087)
    • runtime: fix Stacked Borrows violation in LocalOwnedTasks (#5099)
    • runtime: mitigate ABA with 32-bit queue indices when possible (#5042)
    • task: wake local tasks to the local queue when woken by the same thread (#5095)
    • time: panic in release mode when mark_pending called illegally (#5093)
    • runtime: fix typo in expect message (#5169)
    • runtime: fix unsync_load on atomic types (#5175)
    • task: elaborate safety comments in task deallocation (#5172)
    • runtime: fix LocalSet drop in thread local (#5179)
    • net: remove libc type leakage in a public API (#5191)
    • runtime: update the alignment of CachePadded (#5106)

    Changed

    • io: make tokio::io::copy continue filling the buffer when writer stalls (#5066)
    • runtime: remove coop::budget from LocalSet::run_until (#5155)
    • sync: make Notify panic safe (#5154)

    Documented

    • io: fix doc for write_i8 to use signed integers (#5040)
    • net: fix doc typos for TCP and UDP set_tos methods (#5073)
    • net: fix function name in UdpSocket::recv documentation (#5150)
    • sync: typo in TryLockError for RwLock::try_write (#5160)
    • task: document that spawned tasks execute immediately (#5117)
    • time: document return type of timeout (#5118)
    • time: document that timeout checks only before poll (#5126)
    • sync: specify return type of oneshot::Receiver in docs (#5198)

    Internal changes

    • runtime: use const Mutex::new for globals (#5061)
    • runtime: remove Option around mio::Events in io driver (#5078)
    • runtime: remove a conditional compilation clause (#5104)
    • runtime: remove a reference to internal time handle (#5107)
    • runtime: misc time driver cleanup (#5120)
    • runtime: move signal driver to runtime module (#5121)
    • runtime: signal driver now uses I/O driver directly (#5125)
    • runtime: start decoupling I/O driver and I/O handle (#5127)
    • runtime: switch io::handle refs with scheduler:Handle (#5128)
    • runtime: remove Arc from I/O driver (#5134)
    • runtime: use signal driver handle via scheduler::Handle (#5135)
    • runtime: move internal clock fns out of context (#5139)
    • runtime: remove runtime::context module (#5140)
    • runtime: keep driver cfgs in driver.rs (#5141)
    • runtime: add runtime::context to unify thread-locals (#5143)
    • runtime: rename some confusing internal variables/fns (#5151)
    • runtime: move coop mod into runtime (#5152)
    • runtime: move budget state to context thread-local (#5157)
    • runtime: move park logic into runtime module (#5158)
    • runtime: move Runtime into its own file (#5159)
    • runtime: unify entering a runtime with Handle::enter (#5163)
    • runtime: remove handle reference from each scheduler (#5166)
    • runtime: move enter into context (#5167)
    • runtime: combine context and entered thread-locals (#5168)
    • runtime: fix accidental unsetting of current handle (#5178)
    • runtime: move CoreStage methods to Core (#5182)
    • sync: name mpsc semaphore types (#5146)
  • tokio-1.21.2(Sep 27, 2022)

    1.21.2 (September 27, 2022)

    This release removes the dependency on the once_cell crate to restore the MSRV of 1.21.x, which is the latest minor version at the time of release. (#5048)

  • tokio-1.20.2(Sep 27, 2022)

  • tokio-1.18.3(Sep 27, 2022)

  • tokio-1.21.1(Sep 13, 2022)

  • tokio-1.21.0(Sep 2, 2022)

    1.21.0 (September 2, 2022)

    This release is the first release of Tokio to intentionally support WASM. The sync, macros, io-util, rt, and time features are stabilized on WASM. Additionally, the wasm32-wasi target is given unstable support for the net feature.

    Added

    • net: add device and bind_device methods to TCP/UDP sockets (#4882)
    • net: add tos and set_tos methods to TCP and UDP sockets (#4877)
    • net: add security flags to named pipe ServerOptions (#4845)
    • signal: add more windows signal handlers (#4924)
    • sync: add mpsc::Sender::max_capacity method (#4904)
    • sync: implement Weak version of mpsc::Sender (#4595)
    • task: add LocalSet::enter (#4765)
    • task: stabilize JoinSet and AbortHandle (#4920)
    • tokio: add track_caller to public APIs (#4805, #4848, #4852)
    • wasm: initial support for wasm32-wasi target (#4716)

    Fixed

    • miri: improve miri compatibility by avoiding temporary references in linked_list::Link impls (#4841)
    • signal: don't register write interest on signal pipe (#4898)
    • sync: add #[must_use] to lock guards (#4886)
    • sync: fix hang when calling recv on closed and reopened broadcast channel (#4867)
    • task: propagate attributes on task-locals (#4837)

    Changed

    • fs: change panic to error in File::start_seek (#4897)
    • io: reduce syscalls in poll_read (#4840)
    • process: use blocking threadpool for child stdio I/O (#4824)
    • signal: make SignalKind methods const (#4956)

    Internal changes

    • rt: extract basic_scheduler::Config (#4935)
    • rt: move I/O driver into runtime module (#4942)
    • rt: rename internal scheduler types (#4945)

    Documented

    • chore: fix typos and grammar (#4858, #4894, #4928)
    • io: fix typo in AsyncSeekExt::rewind docs (#4893)
    • net: add documentation to try_read() for zero-length buffers (#4937)
    • runtime: remove incorrect panic section for Builder::worker_threads (#4849)
    • sync: doc of watch::Sender::send improved (#4959)
    • task: add cancel safety docs to JoinHandle (#4901)
    • task: expand on cancellation of spawn_blocking (#4811)
    • time: clarify that the first tick of Interval::tick happens immediately (#4951)

    Unstable

    • rt: add unstable option to disable the LIFO slot (#4936)
    • task: fix incorrect signature in Builder::spawn_on (#4953)
    • task: make task::Builder::spawn* methods fallible (#4823)
  • tokio-1.20.1(Jul 25, 2022)

  • tokio-1.20.0(Jul 13, 2022)

    1.20.0 (July 12, 2022)

    Added

    Changed

    • time: remove src/time/driver/wheel/stack.rs (#4766)
    • rt: clean up arguments passed to basic scheduler (#4767)
    • net: be more specific about winapi features (#4764)
    • tokio: use const initialized thread locals where possible (#4677)
    • task: various small improvements to LocalKey (#4795)

    Fixed

    Documented

    • fs: warn about performance pitfall (#4762)
    • chore: fix spelling (#4769)
    • sync: document spurious failures in oneshot (#4777)
    • sync: add warning for watch in non-Send futures (#4741)
    • chore: fix typo (#4798)

    Unstable

    • joinset: rename join_one to join_next (#4755)
    • rt: unhandled panic config for current thread rt (#4770)
  • tokio-1.19.2(Jun 7, 2022)

  • tokio-1.19.1(Jun 5, 2022)

  • tokio-1.19.0(Jun 3, 2022)

    1.19.0 (June 3, 2022)

    Added

    • runtime: add is_finished method for JoinHandle and AbortHandle (#4709)
    • runtime: make global queue and event polling intervals configurable (#4671)
    • sync: add Notified::enable (#4705)
    • sync: add watch::Sender::send_if_modified (#4591)
    • sync: add resubscribe method to broadcast::Receiver (#4607)
    • net: add take_error to TcpSocket and TcpStream (#4739)

    Changed

    • io: refactor out usage of Weak in the io handle (#4656)

    Fixed

    • macros: avoid starvation in join! and try_join! (#4624)

    Documented

    • runtime: clarify semantics of tasks outliving block_on (#4729)
    • time: fix example for MissedTickBehavior::Burst (#4713)

    Unstable

    • metrics: correctly update atomics in IoDriverMetrics (#4725)
    • metrics: fix compilation with unstable, process, and rt, but without net (#4682)
    • task: add #[track_caller] to JoinSet/JoinMap (#4697)
    • task: add Builder::{spawn_on, spawn_local_on, spawn_blocking_on} (#4683)
    • task: add consume_budget for cooperative scheduling (#4498)
    • task: add join_set::Builder for configuring JoinSet tasks (#4687)
    • task: update return value of JoinSet::join_one (#4726)
  • tokio-1.18.2(May 8, 2022)

  • tokio-1.18.1(May 2, 2022)

  • tokio-1.18.0(Apr 27, 2022)

    1.18.0 (April 27, 2022)

    This release adds a number of new APIs in tokio::net, tokio::signal, and tokio::sync. In addition, it adds new unstable APIs to tokio::task (Ids for uniquely identifying a task, and AbortHandle for remotely cancelling a task), as well as a number of bugfixes.

    Fixed

    • blocking: add missing #[track_caller] for spawn_blocking (#4616)
    • macros: fix select macro to process 64 branches (#4519)
    • net: fix try_io methods not calling Mio's try_io internally (#4582)
    • runtime: recover when OS fails to spawn a new thread (#4485)

    Added

    • net: add UdpSocket::peer_addr (#4611)
    • net: add try_read_buf method for named pipes (#4626)
    • signal: add SignalKind Hash/Eq impls and c_int conversion (#4540)
    • signal: add support for signals up to SIGRTMAX (#4555)
    • sync: add watch::Sender::send_modify method (#4310)
    • sync: add broadcast::Receiver::len method (#4542)
    • sync: add watch::Receiver::same_channel method (#4581)
    • sync: implement Clone for RecvError types (#4560)

    Changed

    • update mio to 0.8.1 (#4582)
    • macros: rename tokio::select!'s internal util module (#4543)
    • runtime: use Vec::with_capacity when building runtime (#4553)

    Documented

    • improve docs for tokio_unstable (#4524)
    • runtime: include more documentation for thread_pool/worker (#4511)
    • runtime: update Handle::current's docs to mention EnterGuard (#4567)
    • time: clarify platform specific timer resolution (#4474)
    • signal: document that Signal::recv is cancel-safe (#4634)
    • sync: UnboundedReceiver close docs (#4548)

    Unstable

    The following changes only apply when building with --cfg tokio_unstable:

    • task: add task::Id type (#4630)
    • task: add AbortHandle type for cancelling tasks in a JoinSet (#4530, #4640)
    • task: fix missing doc(cfg(...)) attributes for JoinSet (#4531)
    • task: fix broken link in AbortHandle RustDoc (#4545)
    • metrics: add initial IO driver metrics (#4507)
  • tokio-1.17.0(Feb 16, 2022)

    1.17.0 (February 15, 2022)

    This release updates the minimum supported Rust version (MSRV) to 1.49, the mio dependency to v0.8, and the (optional) parking_lot dependency to v0.12. Additionally, it contains several bug fixes, as well as internal refactoring and performance improvements.

    Fixed

    • time: prevent panicking in sleep with large durations (#4495)
    • time: eliminate potential panics in Instant arithmetic on platforms where Instant::now is not monotonic (#4461)
    • io: fix DuplexStream not participating in cooperative yielding (#4478)
    • rt: fix potential double panic when dropping a JoinHandle (#4430)

    Changed

    • update minimum supported Rust version to 1.49 (#4457)
    • update parking_lot dependency to v0.12.0 (#4459)
    • update mio dependency to v0.8 (#4449)
    • rt: remove an unnecessary lock in the blocking pool (#4436)
    • rt: remove an unnecessary enum in the basic scheduler (#4462)
    • time: use bit manipulation instead of modulo to improve performance (#4480)
    • net: use std::future::Ready instead of our own Ready future (#4271)
    • replace deprecated atomic::spin_loop_hint with hint::spin_loop (#4491)
    • fix miri failures in intrusive linked lists (#4397)

    Documented

    • io: add an example for tokio::process::ChildStdin (#4479)

    Unstable

    The following changes only apply when building with --cfg tokio_unstable:

    • task: fix missing location information in tracing spans generated by spawn_local (#4483)
    • task: add JoinSet for managing sets of tasks (#4335)
    • metrics: fix compilation error on MIPS (#4475)
    • metrics: fix compilation error on arm32v7 (#4453)
  • tokio-1.14.1(Jan 31, 2022)

    This release backports a bug fix from 1.16.1

    Fixes a soundness bug in io::Take (#4428). The unsoundness is exposed when leaking memory in the given AsyncRead implementation and then overwriting the supplied buffer:

    impl AsyncRead for Buggy {
        fn poll_read(
            self: Pin<&mut Self>,
            cx: &mut Context<'_>,
            buf: &mut ReadBuf<'_>
        ) -> Poll<Result<()>> {
          let new_buf = vec![0; 5].leak();
          *buf = ReadBuf::new(new_buf);
          buf.put_slice(b"hello");
          Poll::Ready(Ok(()))
        }
    }
    

    Fixed

    • io: soundness don't expose uninitialized memory when using io::Take in edge case (#4428)
  • tokio-1.8.5(Jan 30, 2022)

    This release backports a bug fix from 1.16.1

    Fixes a soundness bug in io::Take (#4428). The unsoundness is exposed when leaking memory in the given AsyncRead implementation and then overwriting the supplied buffer:

    use std::io;
    use std::pin::Pin;
    use std::task::{Context, Poll};
    use tokio::io::{AsyncRead, ReadBuf};

    struct Buggy;

    impl AsyncRead for Buggy {
        fn poll_read(
            self: Pin<&mut Self>,
            _cx: &mut Context<'_>,
            buf: &mut ReadBuf<'_>,
        ) -> Poll<io::Result<()>> {
            // Leak a fresh buffer and swap it in for the caller-supplied one.
            let new_buf = vec![0; 5].leak();
            *buf = ReadBuf::new(new_buf);
            buf.put_slice(b"hello");
            Poll::Ready(Ok(()))
        }
    }
    

    Fixed

    • io: (soundness) don't expose uninitialized memory when using io::Take in an edge case (#4428)
  • tokio-1.16.1(Jan 28, 2022)

  • tokio-1.16.0(Jan 27, 2022)

    Fixes a soundness bug in io::Take (#4428). The unsoundness is exposed when the inner AsyncRead implementation leaks memory and then overwrites the supplied buffer:

    use std::io;
    use std::pin::Pin;
    use std::task::{Context, Poll};
    use tokio::io::{AsyncRead, ReadBuf};

    struct Buggy;

    impl AsyncRead for Buggy {
        fn poll_read(
            self: Pin<&mut Self>,
            _cx: &mut Context<'_>,
            buf: &mut ReadBuf<'_>,
        ) -> Poll<io::Result<()>> {
            // Leak a fresh buffer and swap it in for the caller-supplied one.
            let new_buf = vec![0; 5].leak();
            *buf = ReadBuf::new(new_buf);
            buf.put_slice(b"hello");
            Poll::Ready(Ok(()))
        }
    }
    

    Also, this release includes improvements to the multi-threaded scheduler that can increase throughput by up to 20% in some cases (#4383).

    Fixed

    • io: (soundness) don't expose uninitialized memory when using io::Take in an edge case (#4428)
    • fs: ensure File::write results in a write syscall when the runtime shuts down (#4316)
    • process: drop pipe after child exits in wait_with_output (#4315)
    • rt: improve error message when spawning a thread fails (#4398)
    • rt: reduce false-positive thread wakeups in the multi-threaded scheduler (#4383)
    • sync: don't inherit Send from parking_lot::*Guard (#4359)

    Added

    • net: TcpSocket::linger() and set_linger() (#4324) (example below)
    • net: impl UnwindSafe for socket types (#4384)
    • rt: impl UnwindSafe for JoinHandle (#4418)
    • sync: watch::Receiver::has_changed() (#4342)
    • sync: oneshot::Receiver::blocking_recv() (#4334)
    • sync: RwLock blocking operations (#4425)
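
    For the new TcpSocket::linger()/set_linger() accessors above, here is a minimal, hedged sketch of configuring SO_LINGER before a socket is connected (assumes the net feature; the value reported back can be rounded by the OS):

    use std::time::Duration;
    use tokio::net::TcpSocket;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let socket = TcpSocket::new_v4()?;

        // Set SO_LINGER before connecting or listening; None disables it.
        socket.set_linger(Some(Duration::from_secs(5)))?;
        println!("linger: {:?}", socket.linger()?);

        Ok(())
    }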

    Unstable

    The following changes only apply when building with --cfg tokio_unstable:

    • rt: (breaking change) overhaul the runtime metrics API (#4373)
  • tokio-1.15.0(Dec 15, 2021)

    Fixed

    • io: add cooperative yielding support to io::empty() (#4300)
    • time: make timeout robust against budget-depleting tasks (#4314)

    Changed

    • update minimum supported Rust version to 1.46.

    Added

    • time: add Interval::reset() (#4248) (example below)
    • io: add explicit lifetimes to AsyncFdReadyGuard (#4267)
    • process: add Command::as_std() (#4295)
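
    The Interval::reset() addition above lets a timer be pushed back so its next tick fires one full period from "now" rather than from the original schedule. A minimal sketch (assumes the time feature):

    use std::time::Duration;
    use tokio::time;

    #[tokio::main]
    async fn main() {
        let mut ticker = time::interval(Duration::from_millis(100));
        ticker.tick().await; // the first tick completes immediately

        // Delay the next tick so it fires a full period from now.
        ticker.reset();
        ticker.tick().await;
    }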

    Added (unstable)

    • tracing: instrument tokio::sync types (#4302)
  • tokio-1.8.4(Nov 17, 2021)

    1.8.4 (November 15, 2021)

    This release backports a bug fix from v1.13.1 for a data race when sending and receiving on a closed oneshot channel (RUSTSEC-2021-0124).

    Fixed

    • sync: fix a data race between oneshot::Sender::send and awaiting a oneshot::Receiver when the oneshot has been closed (#4226)
  • tokio-1.14.0(Nov 22, 2021)

    1.14.0 (November 15, 2021)

    Fixed

    • macros: fix compiler errors when using mut patterns in select! (#4211)
    • sync: fix a data race between oneshot::Sender::send and awaiting a oneshot::Receiver when the oneshot has been closed (#4226)
    • sync: make AtomicWaker panic safe (#3689)
    • runtime: fix basic scheduler dropping tasks outside a runtime context (#4213)

    Added

    • stats: add RuntimeStats::busy_duration_total (#4179, #4223)

    Changed

    • io: updated copy buffer size to match std::io::copy (#4209)

    Documented

    • io: rename buffer to file in doc-test (#4230)
    • sync: fix Notify example (#4212)
  • tokio-1.13.1(Nov 17, 2021)

    1.13.1 (November 15, 2021)

    This release fixes a data race when sending and receiving on a closed oneshot channel (RUSTSEC-2021-0124).

    Fixed

    • sync: fix a data race between oneshot::Sender::send and awaiting a oneshot::Receiver when the oneshot has been closed (#4226)
  • tokio-1.13.0(Oct 29, 2021)

    1.13.0 (October 29, 2021)

    Fixed

    • sync: fix Notify to clone the waker before locking its waiter list (#4129)
    • tokio: add riscv32 to the non-atomic64 architectures (#4185)

    Added

    • net: add poll_{recv,send}_ready methods to udp and uds_datagram (#4131)
    • net: add try_*, readable, writable, ready, and peer_addr methods to split halves (#4120)
    • sync: add blocking_lock to Mutex (#4130)
    • sync: add watch::Sender::send_replace (#3962, #4195) (example below)
    • sync: expand Debug for Mutex<T> impl to unsized T (#4134)
    • tracing: instrument time::Sleep (#4072)
    • tracing: use structured location fields for spawned tasks (#4128)
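
    As a quick illustration of watch::Sender::send_replace from the list above, a minimal sketch (this particular API needs no runtime):

    use tokio::sync::watch;

    fn main() {
        let (tx, rx) = watch::channel(1);

        // send_replace stores the new value and hands back the previous one;
        // unlike send, it does not fail when all receivers are gone.
        let previous = tx.send_replace(2);
        assert_eq!(previous, 1);
        assert_eq!(*rx.borrow(), 2);
    }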

    Changed

    • io: add assert in copy_bidirectional that poll_write is sensible (#4125)
    • macros: use qualified syntax when polling in select! (#4192)
    • runtime: handle block_on wakeups better (#4157)
    • task: allocate callback on heap immediately in debug mode (#4203)
    • tokio: assert platform-minimum requirements at build time (#3797)

    Documented

    • docs: conversion of doc comments to indicative mood (#4174)
    • docs: add returning on the first error example for try_join! (#4133)
    • docs: fixing broken links in tokio/src/lib.rs (#4132)
    • signal: add example with background listener (#4171)
    • sync: add more oneshot examples (#4153)
    • time: document Interval::tick cancel safety (#4152)
  • tokio-1.12.0(Sep 21, 2021)

    1.12.0 (September 21, 2021)

    Fixed

    • mpsc: ensure try_reserve error is consistent with try_send (#4119)
    • mpsc: use spin_loop_hint instead of yield_now (#4115)
    • sync: make SendError field public (#4097)

    Added

    • io: add POSIX AIO on FreeBSD (#4054)
    • io: add convenience method AsyncSeekExt::rewind (#4107)
    • runtime: add tracing span for block_on futures (#4094)
    • runtime: callback when a worker parks and unparks (#4070)
    • sync: implement try_recv for mpsc channels (#4113)
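
    The try_recv addition above restores a non-blocking receive on mpsc channels. A minimal sketch (neither call needs a running runtime):

    use tokio::sync::mpsc;

    fn main() {
        let (tx, mut rx) = mpsc::channel(4);

        tx.try_send("hello").unwrap();

        // try_recv returns immediately instead of waiting for a message.
        assert_eq!(rx.try_recv().unwrap(), "hello");
        assert!(rx.try_recv().is_err()); // the channel is now empty
    }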

    Changed

    • macros: run runtime inside LocalSet when using macro (#4027)

    Documented

    • docs: clarify CPU-bound tasks on Tokio (#4105)
    • mpsc: document spurious failures on poll_recv (#4117)
    • mpsc: document that PollSender impls Sink (#4110)
    • task: document non-guarantees of yield_now (#4091)
    • time: document paused time details better (#4061, #4103)
  • tokio-1.11.0(Aug 31, 2021)

    1.11.0 (August 31, 2021)

    Fixed

    • time: don't panic when Instant is not monotonic (#4044)
    • io: fix panic in fill_buf by not calling poll_fill_buf twice (#4084)

    Added

    • watch: add watch::Sender::subscribe (#3800) (example below)
    • process: add from_std to ChildStd* (#4045)
    • stats: initial work on runtime stats (#4043)
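
    For watch::Sender::subscribe above: new receivers can now be created directly from the sender, even after the original receiver is gone. A minimal sketch:

    use tokio::sync::watch;

    fn main() {
        let (tx, _rx) = watch::channel("v1");

        // A fresh receiver created straight from the sender sees the
        // current value immediately.
        let rx2 = tx.subscribe();
        assert_eq!(*rx2.borrow(), "v1");
    }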

    Changed

    • tracing: change span naming to new console convention (#4042)
    • io: speed-up waking by using uninitialized array (#4055, #4071, #4075)

    Documented

    • time: make Sleep examples easier to find (#4040)
  • tokio-1.10.1(Aug 24, 2021)

  • tokio-1.10.0(Aug 12, 2021)

    1.10.0 (August 12, 2021)

    Added

    • io: add (read|write)_f(32|64)[_le] methods (#4022) (example below)
    • io: add fill_buf and consume to AsyncBufReadExt (#3991)
    • process: add Child::raw_handle() on windows (#3998)
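
    The new floating-point read/write helpers above round out the AsyncReadExt/AsyncWriteExt integer methods. A minimal, hedged sketch using a byte slice as the AsyncRead source:

    use tokio::io::AsyncReadExt;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let data = 3.5f64.to_le_bytes();
        let mut reader = &data[..];

        // Read a little-endian f64 from any AsyncRead source.
        let value = reader.read_f64_le().await?;
        assert_eq!(value, 3.5);

        Ok(())
    }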

    Fixed

    • doc: fix non-doc builds with --cfg docsrs (#4020)
    • io: flush eagerly in io::copy (#4001)
    • runtime: a debug assert was sometimes triggered during shutdown (#4005)
    • sync: use spin_loop_hint instead of yield_now in mpsc (#4037)
    • tokio: the test-util feature depends on rt, sync, and time (#4036)

    Changed

    • runtime: reorganize parts of the runtime (#3979, #4005)
    • signal: make windows docs for signal module show up on unix builds (#3770)
    • task: quickly send task to heap in debug mode (#4009)

    Documented

    • io: document cancellation safety of AsyncBufReadExt (#3997)
    • sync: document when watch::send fails (#4021)
  • tokio-1.8.3(Jul 26, 2021)
