Metal IO library for Rust

Overview

Mio – Metal IO

Mio is a fast, low-level I/O library for Rust focusing on non-blocking APIs and event notification for building high-performance I/O applications with as little overhead as possible over the OS abstractions.


API documentation

This is a low-level library; if you are looking for something easier to get started with, see Tokio.

Usage

To use mio, first add this to your Cargo.toml:

[dependencies]
mio = { version = "0.7", features = ["os-poll", "net"] }

Next we can start using Mio. The following is a quick introduction using TcpListener and TcpStream. Note that the "os-poll" and "net" features must be enabled for this example.

use std::error::Error;

use mio::net::{TcpListener, TcpStream};
use mio::{Events, Interest, Poll, Token};

// Some tokens to allow us to identify which event is for which socket.
const SERVER: Token = Token(0);
const CLIENT: Token = Token(1);

fn main() -> Result<(), Box<dyn Error>> {
    // Create a poll instance.
    let mut poll = Poll::new()?;
    // Create storage for events.
    let mut events = Events::with_capacity(128);

    // Setup the server socket.
    let addr = "127.0.0.1:13265".parse()?;
    let mut server = TcpListener::bind(addr)?;
    // Start listening for incoming connections.
    poll.registry()
        .register(&mut server, SERVER, Interest::READABLE)?;

    // Setup the client socket.
    let mut client = TcpStream::connect(addr)?;
    // Register the socket.
    poll.registry()
        .register(&mut client, CLIENT, Interest::READABLE | Interest::WRITABLE)?;

    // Start an event loop.
    loop {
        // Poll Mio for events, blocking until we get an event.
        poll.poll(&mut events, None)?;

        // Process each event.
        for event in events.iter() {
            // We can use the token we previously provided to `register` to
            // determine for which socket the event is.
            match event.token() {
                SERVER => {
                    // If this is an event for the server, it means a connection
                    // is ready to be accepted.
                    //
                    // Accept the connection and drop it immediately. This will
                    // close the socket and notify the client of the EOF.
                    let connection = server.accept();
                    drop(connection);
                }
                CLIENT => {
                    if event.is_writable() {
                        // We can (likely) write to the socket without blocking.
                    }

                    if event.is_readable() {
                        // We can (likely) read from the socket without blocking.
                    }

                    // Since the server just shuts down the connection, let's
                    // just exit from our event loop.
                    return Ok(());
                }
                // We don't expect any events with tokens other than those we provided.
                _ => unreachable!(),
            }
        }
    }
}

Features

  • Non-blocking TCP, UDP
  • I/O event queue backed by epoll, kqueue, and IOCP
  • Zero allocations at runtime
  • Platform specific extensions

Non-goals

The following are specifically omitted from Mio and are left to the user or higher-level libraries.

  • File operations
  • Thread pools / multi-threaded event loop
  • Timers

Platforms

Currently supported platforms:

  • Android
  • DragonFly BSD
  • FreeBSD
  • Linux
  • NetBSD
  • OpenBSD
  • Solaris
  • Windows (running under Wine is not supported, see issue #1444)
  • iOS
  • macOS

There are potentially others. If you find that Mio works on another platform, submit a PR to update the list!

Mio can handle interfacing with each of the event systems of the aforementioned platforms. The details of their implementation are further discussed in the Poll type of the API documentation (see above).

The Windows implementation for polling sockets uses the wepoll strategy. This uses the Windows AFD system to access socket readiness events.

Community

A group of Mio users hangs out on Discord; it can be a good place to go for questions.

Contributing

Interested in getting involved? We would love to help you! For simple bug fixes, just submit a PR with the fix and we can discuss the fix directly in the PR. If the fix is more complex, start with an issue.

If you want to propose an API change, create an issue to start a discussion with the community. Also, feel free to talk with us in Discord.

Finally, be kind. We support the Rust Code of Conduct.

Comments
  • mio (and tokio) sockets are completely broken under wine


    Testcase:

    #[tokio::main]
    async fn main() {
        let addr = std::net::SocketAddr::from(([0,0,0,0], 1234));
        tokio::net::TcpListener::bind(addr).await.unwrap();
    }
    

    On Windows, it succeeds:

    > mio-bug.exe
    
    

    On wine (any version), it crashes:

    $ wine mio-bug.exe
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "File not found." }', src\main.rs:3:88
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    

    Note the very unusual and confusing ENOENT error code. To understand how it gets produced, we can use the following command:

    $ strace -e '!write' env WINEDEBUG=relay,ws2_32 wine mio-bug.exe
    
    unedited part of the log
    0140:Call ws2_32.socket(00000002,00000001,00000000) ret=14007a67d
    0140:Call ntdll.RtlAllocateHeap(00010000,00000008,00000048) ret=7f42dbcaae68
    0140:Ret  ntdll.RtlAllocateHeap() retval=00058a00 ret=7f42dbcaae68
    0140:Call ntdll.RtlInitUnicodeString(0021ddd0,7f42dbcb7b40 L"\\Device\\Afd") ret=7f42dbcaa8cc
    0140:Ret  ntdll.RtlInitUnicodeString() retval=00000018 ret=7f42dbcaa8cc
    0140:Call ntdll.NtOpenFile(0021ddb8,c0100000,0021ddf0,0021dde0,00000000,00000000) ret=7f42dbcaa947
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    writev(3, [{iov_base=",\0\0\0\26\0\0\0\0\0\0\0\0\0\20\300\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., iov_len=64}, {iov_base="\\\0D\0e\0v\0i\0c\0e\0\\\0A\0f\0d\0", iov_len=22}], 2) = 86
    read(4, "\0\0\0\0\0\0\0\0\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Ret  ntdll.NtOpenFile() retval=00000000 ret=7f42dbcaa947
    0140:Call ntdll.NtDeviceIoControlFile(00000080,00000000,00000000,00000000,0021dde0,00128320,0021ddc0,00000010,00000000,00000000) ret=7f42dbcaa9ab
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    writev(3, [{iov_base="\201\0\0\0\20\0\0\0\0\0\0\0 \203\22\0\200\0\0\0\0\0\0\0\340\335!\0\0\0\0\0"..., iov_len=64}, {iov_base="\2\0\0\0\1\0\0\0\6\0\0\0\0\0\0\0", iov_len=16}], 2) = 80
    read(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Ret  ntdll.NtDeviceIoControlFile() retval=00000000 ret=7f42dbcaa9ab
    0140:Ret  ws2_32.socket() retval=00000080 ret=14007a67d
    0140:Call ws2_32.ioctlsocket(00000080,8004667e,0021dfd4) ret=14007a778
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    read(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    read(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Ret  ws2_32.ioctlsocket() retval=00000000 ret=14007a778
    0140:Call ucrtbase.memset(0021dc70,00000000,00000004) ret=14009031b
    0140:Ret  ucrtbase.memset() retval=0021dc70 ret=14009031b
    0140:Call ucrtbase.memset(0021dd58,00000000,00000008) ret=14007ab78
    0140:Ret  ucrtbase.memset() retval=0021dd58 ret=14007ab78
    0140:Call ws2_32.bind(00000080,0021e088,00000010) ret=140073798
    0140:Call ntdll.wine_server_handle_to_fd(00000080,00000000,0021de50,00000000) ret=7f42dbcac3b7
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [HUP INT USR1 USR2 ALRM CHLD IO], 8) = 0
    read(4, "\0\0\0\0\0\0\0\0\3\0\0\0\0\0\0\0\237\1\22\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [HUP INT USR1 USR2 ALRM CHLD IO], NULL, 8) = 0
    recvmsg(9, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\200\0\0\0", iov_len=4}], msg_iovlen=1, msg_control=[{cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, cmsg_data=[72]}], msg_controllen=24, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 4
    fcntl(72, F_SETFD, FD_CLOEXEC)          = 0
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Ret  ntdll.wine_server_handle_to_fd() retval=00000000 ret=7f42dbcac3b7
    bind(72, {sa_family=AF_INET, sin_port=htons(1234), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
    0140:Call ntdll.wine_server_release_fd(00000080,00000048) ret=7f42dbcac429
    close(72)                               = 0
    0140:Ret  ntdll.wine_server_release_fd() retval=00000000 ret=7f42dbcac429
    0140:Ret  ws2_32.bind() retval=00000000 ret=140073798
    0140:Call ws2_32.listen(00000080,00000400) ret=1400739a7
    0140:Call ntdll.wine_server_handle_to_fd(00000080,00000001,0021ddf0,00000000) ret=7f42dbca5b57
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [HUP INT USR1 USR2 ALRM CHLD IO], 8) = 0
    read(4, "\0\0\0\0\0\0\0\0\3\0\0\0\0\0\0\0\237\1\22\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [HUP INT USR1 USR2 ALRM CHLD IO], NULL, 8) = 0
    recvmsg(9, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\200\0\0\0", iov_len=4}], msg_iovlen=1, msg_control=[{cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, cmsg_data=[72]}], msg_controllen=24, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 4
    fcntl(72, F_SETFD, FD_CLOEXEC)          = 0
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Ret  ntdll.wine_server_handle_to_fd() retval=00000000 ret=7f42dbca5b57
    getsockname(72, {sa_family=AF_INET, sin_port=htons(1234), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
    listen(72, 1024)                        = 0
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    read(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Call ntdll.wine_server_release_fd(00000080,00000048) ret=7f42dbca5bce
    close(72)                               = 0
    0140:Ret  ntdll.wine_server_release_fd() retval=00000000 ret=7f42dbca5bce
    0140:Ret  ws2_32.listen() retval=00000000 ret=1400739a7
    0140:Call ucrtbase.memcpy(0021db98,00035c30,00000008) ret=14008e0c9
    0140:Ret  ucrtbase.memcpy() retval=0021db98 ret=14008e0c9
    0140:Call ucrtbase.memcpy(00035c30,0021dca8,00000008) ret=14008e0c9
    0140:Ret  ucrtbase.memcpy() retval=00035c30 ret=14008e0c9
    0140:Call ucrtbase.memcpy(0021dc18,00035c30,00000008) ret=14008e0c9
    0140:Ret  ucrtbase.memcpy() retval=0021dc18 ret=14008e0c9
    0140:Call ucrtbase.memcpy(00035c30,0021dd28,00000008) ret=14008e0c9
    0140:Ret  ucrtbase.memcpy() retval=00035c30 ret=14008e0c9
    0140:Call ntdll.RtlAcquireSRWLockExclusive(0003b570) ret=14003e1be
    0140:Ret  ntdll.RtlAcquireSRWLockExclusive() retval=00000000 ret=14003e1be
    0140:Call KERNEL32.GetProcessHeap() ret=1400a0415
    0140:Ret  KERNEL32.GetProcessHeap() retval=00010000 ret=1400a0415
    0140:Call ntdll.RtlAllocateHeap(00010000,00000000,00000c00) ret=14008e83b
    0140:Ret  ntdll.RtlAllocateHeap() retval=00058a60 ret=14008e83b
    0140:Call ucrtbase.memcpy(0021d570,0021d638,00000038) ret=14001eeb5
    0140:Ret  ucrtbase.memcpy() retval=0021d570 ret=14001eeb5
    0140:Call ucrtbase.memcpy(0021d3a0,0021d470,00000038) ret=14002c897
    0140:Ret  ucrtbase.memcpy() retval=0021d3a0 ret=14002c897
    0140:Call ucrtbase.memcpy(0021d438,0021d3a0,00000038) ret=14002c8b8
    0140:Ret  ucrtbase.memcpy() retval=0021d438 ret=14002c8b8
    0140:Call ucrtbase.memcpy(0021d538,0021d438,00000038) ret=140015700
    0140:Ret  ucrtbase.memcpy() retval=0021d538 ret=140015700
    0140:Call ucrtbase.memcpy(0021d5f0,0021d528,00000048) ret=14001eee6
    0140:Ret  ucrtbase.memcpy() retval=0021d5f0 ret=14001eee6
    0140:Call ucrtbase.memcpy(0021d9b0,0021d5f0,00000048) ret=14005a57c
    0140:Ret  ucrtbase.memcpy() retval=0021d9b0 ret=14005a57c
    0140:Call ucrtbase.memcpy(0021d620,0021d950,00000058) ret=14004b545
    0140:Ret  ucrtbase.memcpy() retval=0021d620 ret=14004b545
    0140:Call ucrtbase.memcpy(0021d520,0021d620,00000058) ret=14002c5f7
    0140:Ret  ucrtbase.memcpy() retval=0021d520 ret=14002c5f7
    0140:Call ucrtbase.memcpy(0021d5c8,0021d520,00000058) ret=14002c618
    0140:Ret  ucrtbase.memcpy() retval=0021d5c8 ret=14002c618
    0140:Call ucrtbase.memcpy(0021d8f8,0021d5c8,00000058) ret=14004b576
    0140:Ret  ucrtbase.memcpy() retval=0021d8f8 ret=14004b576
    0140:Call ucrtbase.memcpy(00058a60,0021d5e0,00000060) ret=1400719ae
    0140:Ret  ucrtbase.memcpy() retval=00058a60 ret=1400719ae
    0140:Call ntdll.RtlReleaseSRWLockExclusive(0003b570) ret=14003e19e
    0140:Ret  ntdll.RtlReleaseSRWLockExclusive() retval=00000000 ret=14003e19e
    0140:Call ntdll.RtlAcquireSRWLockExclusive(0003b520) ret=140078bae
    0140:Ret  ntdll.RtlAcquireSRWLockExclusive() retval=00000000 ret=140078bae
    0140:Call ntdll.NtCreateFile(0021cbe8,00100000,1400b9d28,0021cbf0,00000000,00000000,00000003,00000001,00000000,00000000,00000000) ret=14008569f
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    writev(3, [{iov_base=",\0\0\0\36\0\0\0\0\0\0\0\0\0\20\0\0\0\0\0\0\0\0\0\3\0\0\0\0\0\0\0"..., iov_len=64}, {iov_base="\\\0D\0e\0v\0i\0c\0e\0\\\0A\0f\0d\0\\\0M\0i\0o\0", iov_len=30}], 2) = 94
    read(4, "4\0\0\300\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Ret  ntdll.NtCreateFile() retval=c0000034 ret=14008569f
    0140:Call ntdll.RtlNtStatusToDosError(c0000034) ret=1400856b9
    0140:Ret  ntdll.RtlNtStatusToDosError() retval=00000002 ret=1400856b9
    0140:Call ntdll.RtlReleaseSRWLockExclusive(0003b520) ret=140078b8e
    0140:Ret  ntdll.RtlReleaseSRWLockExclusive() retval=00000000 ret=140078b8e
    0140:Call ntdll.RtlAcquireSRWLockExclusive(0003b570) ret=14003e1be
    0140:Ret  ntdll.RtlAcquireSRWLockExclusive() retval=00000000 ret=14003e1be
    0140:Call ntdll.RtlReleaseSRWLockExclusive(0003b570) ret=14003e19e
    0140:Ret  ntdll.RtlReleaseSRWLockExclusive() retval=00000000 ret=14003e19e
    0140:Call ws2_32.closesocket(00000080) ret=14008f78e
    0140:Call ntdll.wine_server_handle_to_fd(00000080,00000001,0021dcec,00000000) ret=7f42dbca3ed4
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [HUP INT USR1 USR2 ALRM CHLD IO], 8) = 0
    read(4, "\0\0\0\0\0\0\0\0\3\0\0\0\1\0\0\0\237\1\22\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [HUP INT USR1 USR2 ALRM CHLD IO], NULL, 8) = 0
    recvmsg(9, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\200\0\0\0", iov_len=4}], msg_iovlen=1, msg_control=[{cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, cmsg_data=[72]}], msg_controllen=24, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 4
    fcntl(72, F_SETFD, FD_CLOEXEC)          = 0
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    dup(72)                                 = 75
    0140:Ret  ntdll.wine_server_handle_to_fd() retval=00000000 ret=7f42dbca3ed4
    0140:Call ntdll.wine_server_release_fd(00000080,0000004b) ret=7f42dbca3ef0
    close(75)                               = 0
    0140:Ret  ntdll.wine_server_release_fd() retval=00000000 ret=7f42dbca3ef0
    0140:Call KERNEL32.CloseHandle(00000080) ret=7f42dbca3f4c
    0140:Call ntdll.NtClose(00000080) ret=7b04ed99
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    read(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    close(72)                               = 0
    0140:Ret  ntdll.NtClose() retval=00000000 ret=7b04ed99
    0140:Ret  KERNEL32.CloseHandle() retval=00000001 ret=7f42dbca3f4c
    0140:Ret  ws2_32.closesocket() retval=00000000 ret=14008f78e
    

    The error code is produced by this part in the middle:

    0140:Call ntdll.NtCreateFile(0021cbe8,00100000,1400b9d28,0021cbf0,00000000,00000000,00000003,00000001,00000000,00000000,00000000) ret=14008569f
    rt_sigprocmask(SIG_BLOCK, [HUP INT USR1 USR2 ALRM CHLD IO], [], 8) = 0
    writev(3, [{iov_base=",\0\0\0\36\0\0\0\0\0\0\0\0\0\20\0\0\0\0\0\0\0\0\0\3\0\0\0\0\0\0\0"..., iov_len=64}, {iov_base="\\\0D\0e\0v\0i\0c\0e\0\\\0A\0f\0d\0\\\0M\0i\0o\0", iov_len=30}], 2) = 94
    read(4, "4\0\0\300\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 64) = 64
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    0140:Ret  ntdll.NtCreateFile() retval=c0000034 ret=14008569f
    0140:Call ntdll.RtlNtStatusToDosError(c0000034) ret=1400856b9
    0140:Ret  ntdll.RtlNtStatusToDosError() retval=00000002 ret=1400856b9
    

    This corresponds to the following lines in mio: https://github.com/tokio-rs/mio/blob/04050dbd87d7b79c33168c6a972a8bb35cf510f8/src/sys/windows/afd.rs#L177-L194

    The reason this call fails is because, while wine does have \Device\Afd, mio uses \Device\Afd\Mio for reasons it never bothers to explain, and wine implements \Device\Afd as an empty directory.


    I would normally consider something like this a defect in wine, but not in this case. Not only are you using a completely undocumented internal Winsock API in ways that are neither clear nor explained (here or in the wepoll codebase), but you also never mention that its error codes (the complete list of which you cannot know, because the API is undocumented) are passed to downstream consumers without any filtering, sanity checking, or indication (the human-readable string in io::Error would be a great place to include the word "AFD"). And if that wasn't bad enough, this is the only backend mio/tokio provide for networking on Windows. As far as I can see, there is no workaround that can be added e.g. to an application using hyper, even if I can patch mio during the build (what am I supposed to do, rewrite the entire backend myself to use overlapped operations?)

    Could you please stop for a moment and imagine how much time I have spent tracking down that ENOENT in the async code of a third-party Rust application I had never seen before? Not a single thing in Linux, WinAPI, wine, tokio, hyper, or std can possibly return an ENOENT error for a networking operation, so of course networking was the last thing I would blame. (Mio does return NotFound in a few places, but those are clearly distinct, since they are not OS errors.) That was made harder by the fact that Wine makes debugging more complex, and also because I could at first only reproduce this as a deeply nested part of a larger modular application that included, among other things, two copies of the Chromium Embedded Framework running concurrently with the crashing code.

    This is the poster example of why you should not use undocumented APIs: it makes your code more fragile, harder to understand, harder to contribute to, and borderline impossible to debug for downstream users. I will never get these twenty hours of my life back. Don't do this! And if you for some reason have to do it, wrap it in hazard tape and provide a slow but obviously correct fallback.

    bug 
    opened by whitequark 61
  • Confusing behavior when multiple events for a single socket are fired in a single event loop iteration


    Issue representing the confusion experienced in #172 and #179.

    My current understanding of the confusion is: given epoll & a socket that received a HUP and is writable, mio will first call Handler::readable w/ ReadHint set to Hup, followed by the Handler::writable event. The confusion arises when the socket is closed during the Handler::readable call, or perhaps when the token associated with that socket is reused. The next call to Handler::writable for the "finalized" socket is unexpected.

    Part of the original API design consideration is how to unify epoll w/ kqueue. With kqueue, the readable & writable notifications are provided separately. As far as I am aware, there is no guarantee as to which order (readable or writable) the events will be seen.

    I don't believe that either of the solutions described by @rrichardson in #179 would be possible to implement (efficiently) for kqueue. The reason why hup / error are provided as a ReadHint is that it is not 100% portable for mio to detect them. However, the hup / error state can be discovered by mio by reading from the socket, which is why they are considered readable events. In other words, to correctly use mio, hup / error states must be discovered by listening for Handler::readable, reading from the socket, and seeing an error. As such, doing anything with the ReadHint argument passed to Handler::readable is simply an optimization. I also don't see a way to implement passing ReadHint (renamed to EventHint) to the Handler::writable callback in a portable fashion.
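
    The rule above — hup / error must ultimately be observed by reading — can be seen with plain std networking, no mio involved. A minimal sketch (blocking sockets on an ephemeral localhost port, purely for illustration):

    ```rust
    use std::io::Read;
    use std::net::{TcpListener, TcpStream};

    fn main() -> std::io::Result<()> {
        // Bind to an ephemeral port so the example is self-contained.
        let listener = TcpListener::bind("127.0.0.1:0")?;
        let addr = listener.local_addr()?;

        let mut client = TcpStream::connect(addr)?;

        // Accept the connection and drop it immediately, closing the
        // server side of the socket.
        let (server_side, _) = listener.accept()?;
        drop(server_side);

        // A read returning Ok(0) is the portable "peer closed" signal;
        // this is how a hup is ultimately discovered, regardless of any hint.
        let mut buf = [0u8; 32];
        let n = client.read(&mut buf)?;
        assert_eq!(n, 0, "EOF: the peer closed the connection");
        Ok(())
    }
    ```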

    opened by carllerche 49
  • Proposal: Unify Sockets, Timers, and Channels


    Unify Sockets, Timers, and Channels

    Currently, there are two runtime APIs: Poll and EventLoop. Poll is the abstraction handling readiness for sockets and waiting for events on these sockets. EventLoop is a wrapper around Poll providing a timeout API and a cross thread notification API as well as the loop / reactive (via Handler) abstraction.

    I am proposing to extract the timer and the notification channel features into standalone types that implement Evented and thus can be used directly with Poll. For example:

    let timer = mio::Timer::new();
    let poll = mio::Poll::new();
    
    poll.register(&timer, Token(0), EventSet::readable(), PollOpt::edge());
    
    timer.timeout("hello", Duration::from_millis(1_000));
    
    poll.poll();
    

    The insight is that a lot of the code that is currently Windows-specific is useful to all platforms. Stabilizing the API and providing it on all platforms allows implementing Evented for arbitrary types.

    Advantages

    • Mio would have a unified API.
    • No change in overhead for any platform.
    • Move (and improve) some of the currently windows-only code to all platforms.
    • The notification channel backend could be configurable:
    let (tx, rx) = mio::Channel::new(mpsc::channel());
    poll.register(&rx, Token(0), EventSet::readable(), PollOpt::edge());
    

    Disadvantages

    • Unsafe code
    • More code (lock-free algorithms)

    The primary disadvantage that I can think of is that the code paths around timers & the notification channel become slightly more complicated. I don't believe that the change would have a meaningful performance impact.

    There is also additional code complexity for all platforms. However, this code complexity already exists for Windows.

    Behavior

    An Evented would mirror the behavior of a socket registered with epoll. Specifically, in a single threaded environment:

    • A value registered will trigger at most one notification per call to poll.
    • A value registered with readable interest & edge triggers a notification once when it becomes readable.
    • A value registered with readable interest & level triggers a notification every call to poll as long as the value is still readable.
    • A value registered (or reregistered) with readable interest triggers a notification immediately if it is currently readable.
    • If a value is registered with readable interest only and already has a pending writable notification, the event is discarded.
    • If a value has any pending notifications and is deregistered, the pending notifications are cleared.
    • When a value is dropped, it will no longer trigger any further notifications.
    • Poll is permitted to fire off spurious readiness events, except if the value has been dropped.

    In the presence of concurrency, specifically readiness being modified on a different thread than Poll, a best effort is made to preserve these semantics.

    Implementation

    This section will describe how to implement a custom Evented type as well as Mio's internals to handle it. For simplicity and performance, custom Evented types will only be able to be registered with a single Poll.

    It is important to note that the implementation is not intended to replace FD polling on epoll & kqueue. It is meant to work in conjunction with the OS's event queue to support types that cannot be implemented using a socket or other system type that is compatible with the system's event queue.

    Readiness Queue

    Poll will maintain an internal readiness queue, represented as a linked list. The linked list head pointer is an AtomicPtr. All of the nodes in the linked list are owned by the Poll instance.

    The type declarations are for illustration only. The actual implementations will have some additional memory safety requirements.

    struct Poll {
        readiness_queue: Arc<PollReadinessQueue>,
    }
    
    struct PollReadinessQueue {
        // All readiness nodes owned by the `Poll` instance. When the `Poll`
        // instance is freed, the list is walked and each Arc ref count is
        // decremented.
        head_all_nodes: Box<ReadinessNode>,
    
        // linked list of nodes that are pending some processing
        head_readiness: AtomicPtr<ReadinessNode>,
    
        // Hashed wheel timer for delayed readiness notifications
        readiness_wheel: Vec<AtomicPtr<ReadinessNode>>,
    }
    
    struct ReadinessNode {
        // Next node in ownership tracking queue
        next_all_nodes: Box<ReadinessNode>,
        // Used when the node is queued in the readiness linked list OR the
        // linked list for a hashed wheel slot.
        next_readiness: *mut ReadinessNode,
        // The Token used to register the `Evented` with `Poll`. This can change,
        // but only by calling `Poll` functions, so there will be no concurrency.
        token: Token,
        // The set of events to include in the notification on next poll
        events: AtomicUsize,
        // Tracks if the node is queued for readiness using the MSB, the
        // rest of the usize is the readiness delay.
        queued: AtomicUsize,
        // Both interest and opts can be mutated
        interest: Cell<EventSet>,
        // Poll opts
        opts: Cell<PollOpt>,
    }
    
    // Implements `Sync`, aka all functions are safe to call concurrently
    struct Registration {
        node: *mut ReadinessNode,
        queue: Arc<PollReadinessQueue>,
    }
    
    struct MyEvented {
        registration: Option<Registration>,
    }
    

    Registration

    When a MyEvented value is registered with the event loop, a new Registration value is obtained:

    my_evented.registration = Some(Registration::new(poll, token, interest));
    

    Registration will add the internal EventSet::dropped() event to the interest.

    Re-registration

    A Registration's interest & PollOpt can be changed by calling Registration::update:

    // poll: &Poll
    my_evented.registration.as_ref().unwrap()
        .update(poll, interest, opts);
    

    The Poll reference will not be used but will ensure that update is only called from a single thread (the thread that owns the Poll reference). This allows safe mutation of interest and opts without synchronization primitives.

    Registration will add the internal EventSet::dropped() event to the interest.

    Triggering readiness notifications

    Readiness can be updated using Registration::set_readiness and Registration::unset_readiness. These can be called concurrently. set_readiness adds the given events to the existing Registration readiness; unset_readiness subtracts the given events from it.

    my_evented.registration.as_ref().unwrap().set_readiness(EventSet::readable());
    my_evented.registration.as_ref().unwrap().unset_readiness(EventSet::readable());
    

    Registration::set_readiness ensures that the registration node is queued for processing.
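
    The add/subtract semantics map naturally onto atomic bit operations. A minimal sketch, using a plain AtomicUsize and illustrative event constants rather than mio's actual internal types:

    ```rust
    use std::sync::atomic::{AtomicUsize, Ordering};

    // Illustrative event bits (not mio's real representation).
    const READABLE: usize = 0b01;
    const WRITABLE: usize = 0b10;

    struct Readiness {
        events: AtomicUsize,
    }

    impl Readiness {
        fn new() -> Self {
            Readiness { events: AtomicUsize::new(0) }
        }

        // set_readiness: OR the new events into the existing set.
        fn set(&self, events: usize) {
            self.events.fetch_or(events, Ordering::AcqRel);
        }

        // unset_readiness: AND with the complement to subtract events.
        fn unset(&self, events: usize) {
            self.events.fetch_and(!events, Ordering::AcqRel);
        }

        fn get(&self) -> usize {
            self.events.load(Ordering::Acquire)
        }
    }

    fn main() {
        let r = Readiness::new();
        r.set(READABLE | WRITABLE); // concurrent callers may OR in more bits
        r.unset(READABLE);
        assert_eq!(r.get(), WRITABLE);
    }
    ```

    Both operations are single atomic read-modify-writes, which is why they are safe to call concurrently without a lock.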

    Delaying readiness

    In order to support timeouts, Registration has the ability to schedule readiness notifications using Registration::delay_readiness(events, timeout).

    There is a big caveat: there is no precise timing guarantee. A delayed readiness event could be triggered much earlier than requested. Also, the readiness timer is coarse grained, so by default delays will be rounded to 100ms or so. The one guarantee is that the event will be triggered no later than the requested timeout + the duration of a timer tick (100ms by default).
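
    The "no later than timeout + one tick" bound falls out of rounding delays up to whole ticks. A small illustrative helper (hypothetical, not part of mio's API):

    ```rust
    use std::time::Duration;

    // Round a requested delay up to a whole number of timer ticks. With
    // coarse ticks (100ms by default, per the text above), an event
    // scheduled for `delay` fires no later than `delay + tick`.
    fn ticks_for(delay: Duration, tick: Duration) -> u64 {
        let d = delay.as_millis() as u64;
        let t = tick.as_millis() as u64;
        (d + t - 1) / t // ceiling division
    }

    fn main() {
        let tick = Duration::from_millis(100);
        // 250ms rounds up to 3 ticks (fires somewhere in (200ms, 300ms]).
        assert_eq!(ticks_for(Duration::from_millis(250), tick), 3);
        // An exact multiple needs no extra tick.
        assert_eq!(ticks_for(Duration::from_millis(300), tick), 3);
    }
    ```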

    Queuing Registration for processing

    First, atomically update Registration.queued. Attempt to set the MSB. Check the current delay value. If the requested delay is less than the current, update the delayed portion of queued.

    If the MSB was successfully set, then the current thread is responsible for queuing the registration node (pseudocode):

    loop {
        let ptr = PollReadinessQueue.readiness_head.get();
        ReadinessNode.next_readiness = ptr;
    
        if PollReadinessQueue.readiness_head.compare_and_swap(ptr, &ReadinessNode) {
            return;
        }
    }
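
    The push loop above can be written as runnable Rust with AtomicPtr::compare_exchange; this sketch uses plain Box-allocated nodes rather than the proposal's ownership scheme, and also shows the single swap that Poll later uses to take ownership of the whole list (all names are illustrative):

    ```rust
    use std::ptr;
    use std::sync::atomic::{AtomicPtr, Ordering};

    struct Node {
        token: usize,
        next: *mut Node,
    }

    struct ReadinessQueue {
        head: AtomicPtr<Node>,
    }

    impl ReadinessQueue {
        fn new() -> Self {
            ReadinessQueue { head: AtomicPtr::new(ptr::null_mut()) }
        }

        // The pseudocode's push loop: read the head, link the new node
        // in front of it, and CAS the head pointer; retry on contention.
        fn push(&self, token: usize) {
            let node = Box::into_raw(Box::new(Node { token, next: ptr::null_mut() }));
            loop {
                let head = self.head.load(Ordering::Acquire);
                unsafe { (*node).next = head };
                if self
                    .head
                    .compare_exchange(head, node, Ordering::AcqRel, Ordering::Acquire)
                    .is_ok()
                {
                    return;
                }
            }
        }

        // Poll's "atomically take ownership": one swap detaches the whole
        // list, which can then be walked without further synchronization.
        fn take_all(&self) -> Vec<usize> {
            let mut head = self.head.swap(ptr::null_mut(), Ordering::AcqRel);
            let mut tokens = Vec::new();
            while !head.is_null() {
                let node = unsafe { Box::from_raw(head) };
                tokens.push(node.token);
                head = node.next;
            }
            tokens
        }
    }

    fn main() {
        let q = ReadinessQueue::new();
        q.push(1);
        q.push(2);
        // LIFO order: the most recently pushed node sits at the head.
        assert_eq!(q.take_all(), vec![2, 1]);
        assert!(q.take_all().is_empty());
    }
    ```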
    

    Dropping Registration

    Processing a drop is handled by setting readiness to an internal Dropped event:

    fn drop(&mut self) {
        self.registration.as_ref().unwrap()
            .set_readiness(EventSet::dropped());
    }
    

    The Registration value itself does not own any data, so there is nothing else to do.

    Polling

    On Poll::poll() the following happens:

    Reset the events on self

    self.events.clear();
    

    Atomically take ownership of the readiness queue:

    let ready_nodes = PollReadinessQueue.readiness_head.swap(ptr::null());
    

    The dequeued nodes are processed.

    for node in ready_nodes {
        // Mask the readiness info by the node's interest. This is needed to
        // support concurrent setting of readiness. Another thread may not
        // be aware of the latest interest value.
        let mut events = node.events.get() & node.interest;
    
        // Used to read the delay component of `Registration::queued`.
        let delay;
    
        if opts.is_edge() || events.is_empty() {
            // If the registration is edge, the node is always dequeued. If
            // it is level, we only dequeue the event when there are no
            // events (aka, no readiness). By not dequeuing the event it will
            // be processed again on the next call to `poll`.
            delay = unset_msb_and_read_delay_component(&node.queued);
    
            // Reload the events to ensure that we don't "lose" any
            // readiness notifications. Remember, it's ok to have
            // spurious notifications.
            events = node.events.get() & node.interest;
        } else if !events.is_drop() {
            // Push the node back into the queue. This is done via a compare
            // and swap on `readiness_head`, pushing the node back to the
            // front.
            prepend(&ready_nodes, node);
    
            delay = read_delay_component(&node.queued);
        }
    
        if delay > 0 {
            node.update_delay_in_hashed_wheel(delay);
        } else {
            // The event will be fired immediately, if the node is currently
            // being delayed, remove it from the hashed wheel.
            if node.is_currently_in_hashed_wheel() {
                node.remove_from_hashed_wheel();
            }
    
            if events.is_drop() {
                // The registration is dropped. Unlink the node, freeing the
                // memory.
                node.unlink();
                continue;
            }
    
            if !events.is_empty() {
                // Track the event
                self.events.push_event(node.token, events);
            }
        }
    
    }
    

    The next step is to process all delayed readiness nodes that have reached their timeout. The code for this is similar to the current timer code.

    Integrating with Selector

    The readiness queue described above does not replace socket notifications from epoll / kqueue / etc.; it is used in conjunction with them.

    To handle this, PollReadinessQueue will be able to wake up the selector. This will be implemented in a similar fashion to the current channel implementation: a pipe will be used to force the selector to wake up.

    The full logic of poll will look something like:

    let has_pending = !readiness_queue.is_empty();
    
    if has_pending {
        // Original timeout value is passed to the function...
        timeout = 0;
    }
    
    // Poll selector
    selector.poll(&mut self.events, timeout);
    
    // Process custom evented readiness queue as specified above.
    

    Implementing mio::Channel

    Channel is an mpsc queue such that when messages are pushed onto the channel, Poll is woken up and returns a readable readiness event for the Channel. The specific queue will be supplied on creation of the Channel, allowing the user to choose the behavior around allocation and capacity.

    Channel will look something like:

    struct Channel<Q> {
        queue: Q,
    
        // Poll registration
        registration: Option<Registration>,
    
        // Tracks the number of pending messages.
        pending: AtomicUsize,
    }
    

    When a new message is sent over the channel:

    self.queue.push(msg);
    
    let prev = self.pending.fetch_add(1);
    
    if prev == 0 {
        // set readiness
        self.registration.as_ref().unwrap()
            .set_readiness(EventSet::readable());
    }
    

    When readiness is set, Poll will wake up with a readiness notification. The user can now "poll" off of the channel. The implementation of poll is something like:

    self.queue.poll().map(|msg| {
        let first = self.pending.get();
    
        if first == 1 {
            self.registration.as_ref().unwrap()
                .unset_readiness(EventSet::readable());
        }
    
        let second = self.pending.fetch_sub(1);
    
        if first == 1 && second > 0 {
            // There still are pending messages, reset readiness
            self.registration.as_ref().unwrap()
                .set_readiness(EventSet::readable());
        }
    
        msg
    })
    

    Implementing Timer

    Timer is a delay queue. Messages are pushed onto it with a delay after which the message can be "popped" from the queue. It is implemented using a hashed wheel timer strategy, which is ideal in situations where a large number of timeouts are required and the timer can use coarse precision (by default, 100ms ticks).

    The implementation is fairly straightforward. When a timeout is requested, the message is stored in the Timer implementation and Registration::delay_readiness is called with the timeout. There are some potential optimizations, but those are out of scope for this proposal.

    Windows

    The readiness queue described in this proposal would replace the current Windows-specific implementation. The proposed implementation would be more efficient as it avoids locking and uses lighter-weight data structures (mostly, linked lists vs. vecs).

    Outstanding questions

    The biggest outstanding question is what to do about EventLoop. If this proposal lands, EventLoop becomes a very light shim around Poll that dispatches events to the appropriate handler function.

    The entire implementation would look something like:

    pub fn run(&mut self, handler: &mut H) -> io::Result<()> {
        self.run = true;
    
        while self.run {
            self.poll.poll();
    
            for event in self.poll.events() {
                handler.ready(self, event.token(), event.kind());
            }
    
            handler.tick(self);
        }
    }
    

    It will also not be possible to maintain API compatibility. Handler::notify and Handler::timeout will no longer exist, as EventLoop cannot distinguish those two types from other Evented types whose notifications come through ready.

    The options are:

    • Update EventLoop to follow the new API and keep the minimal implementation.
    • Get rid of EventLoop and make Poll the primary API.
    • Provide a higher-level API via EventLoop that accepts allocations (though this would be post 1.0).

    Alternatives

    It is possible to implement Timer and Channel as standalone types without implementing the readiness queue. For Timer, it would require using timerfd on Linux and a timer thread on other platforms. The disadvantage is minor on Linux, as syscalls can be reduced significantly by only using timerfd to track the next timeout in the Timer vs. every timeout in the Timer.

    However, on platforms that don't have timerfd available, a polyfill will be needed. This can be done by creating a pipe and spawning a thread. When a timeout is needed, a request is sent to the thread, which writes a byte to the pipe after the timeout has expired. This has overhead, but again it can be amortized by only using the thread/pipe combo for the next timeout in the Timer vs. every timeout. Though, there may be some complication with this amortization when registering the Timer using level-triggered notifications.

    For Channel, on the other hand, a syscall would be needed for each message enqueued and dequeued. The implementation would have a pipe associated with the Channel: each time a message is enqueued, write a byte to the pipe; whenever a message is dequeued, read a byte.

    windows behavior api 
    opened by carllerche 38
  • Port to Solaris

    Port to Solaris

    Currently mio uses the epoll interface for Solaris, but Solaris doesn't have this interface; only Illumos has it.

    There are two things to be done:

    • To distinguish between Solaris and Illumos.
    • Implement a selector on Solaris (probably via the event ports API)
    opened by psumbera 34
  • Does deregister clear events already pending delivery? Should it?

    Does deregister clear events already pending delivery? Should it?

    Let's say I have two event sources registered, both of them fired at the same time and landed in pending events list for current iteration.

    While handling the first event, I do deregister on the other one. Will I still get it? It seems to me that I will, which is very inconvenient, as it makes it hard (impossible?) to safely deregister events other than the one that just fired.

    A spin-off of the above case is: what if I deregister and right after use register_opt on a different event source, using the same token?

    Another spin-off is: what if I reregister using a different interest?

    If I am not missing anything, it seems to me that either the end of each event delivery iteration should have a notification, during which calls like deregister and reregister can be safely used, or deregister and reregister should make sure to remove any events that are not to be delivered anymore.

    opened by dpc 27
  • Support Windows

    Support Windows

    Overview

    Currently, Mio only supports Linux and Darwin platforms (though *BSD support could happen relatively easily). It uses epoll and kqueue respectively to provide a readiness API to consumers. Windows offers a completion-based API (completion ports) which is significantly different from epoll & kqueue. The goal would be to tweak Mio to support Windows while still maintaining the low overhead that Mio strives for across all platforms.

    History

    I have wavered a bunch on the topic of how to best support Windows. At first, I had planned to do whatever was needed to support Windows, even if the implementation was less than ideal. Then I moved toward not supporting Windows in Mio and instead providing a standalone IO library that supported Windows only. I started investigating the IOCP APIs in more depth, and in thinking about what a Windows library would look like, it was very similar to what Mio already is.

    Completion Ports

    There are a number of details related to using completion ports, but what matters is that instead of being notified when an operation (read, write, accept, ...) is ready to be performed and then performing the operation, an operation is submitted and then completion is signaled by reading from a queue.

    For example, when reading, a byte buffer is provided to the operating system. The operating system then takes ownership of the buffer until the operation completes. When the operation completes, the application is notified by reading off of the completion status queue.

    Strategy

    The strategy would be, on Windows, to internally manage a pool of buffers. When a socket is registered with the event loop with readable interest, a system read would be initiated, supplying an available buffer. When the read completes, the internal buffer is full. The event loop would then notify readiness, and the user would be able to read from the socket. The read would copy data from the internal buffer to the user's buffer, completing the read.

    On write, the user's data would be copied to an internal buffer immediately and then the internal buffer submitted to the OS for the system write call.

    Mio API changes

    In order to implement the above strategy, Mio would no longer be able to rely on IO types from std::net. As such, I propose to bring back TcpStream and TcpListener, implemented in mio::net. Since Mio will then own all IO types, there will be no more need for the NonBlock wrapper. Also, it seems that `NonBlock` can be confusing (see #154). So, all IO types in mio will always be non-blocking.

    I believe that this will be the only required API change.

    windows 
    opened by carllerche 27
  • tcp_stream test fails with TcpStream::set_ttl crash

    tcp_stream test fails with TcpStream::set_ttl crash

    Problem: When running the unit tests on Windows 8.1 Pro, the tcp_stream ttl test crashes in TcpStream::set_ttl:

    RUST_BACKTRACE=1 cargo test --test tcp_stream
        Finished dev [unoptimized + debuginfo] target(s) in 0.10s
         Running target\debug\deps\tcp_stream-7340f8b6dbab4dde.exe
    
    running 12 tests
    test is_send_and_sync ... ok
    test shutdown_write ... ignored
    test tcp_stream_ipv4 ... ignored
    test tcp_stream_ipv6 ... ignored
    test ttl ... FAILED
    test registering ... ok
    test deregistering ... ok
    test nodelay ... ok
    test reregistering ... ok
    test shutdown_both ... ok
    test shutdown_read ... ok
    test try_clone ... ok
    
    failures:
    
    ---- ttl stdout ----
    thread 'ttl' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 10022, kind: InvalidInput, message: "An invalid argument was supplied." }', src\libcore\result.rs:1084:5
    stack backtrace:
       0: backtrace::backtrace::trace_unsynchronized
                 at C:\Users\VssAdministrator\.cargo\registry\src\github.com-1ecc6299db9ec823\backtrace-0.3.34\src\backtrace\mod.rs:66
       1: std::sys_common::backtrace::_print
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\sys_common\backtrace.rs:47
       2: std::sys_common::backtrace::print
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\sys_common\backtrace.rs:36
       3: std::panicking::default_hook::{{closure}}
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:200
       4: std::panicking::default_hook
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:211
       5: std::panicking::rust_panic_with_hook
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:477
       6: std::panicking::continue_panic_fmt
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:384
       7: std::panicking::rust_begin_panic
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:311
       8: core::panicking::panic_fmt
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libcore\panicking.rs:85
       9: core::result::unwrap_failed
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libcore\result.rs:1084
      10: core::result::Result<(), std::io::error::Error>::unwrap<(),std::io::error::Error>
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libcore\result.rs:852
      11: tcp_stream::ttl
                 at .\tests\tcp_stream.rs:177
      12: tcp_stream::ttl::{{closure}}
                 at .\tests\tcp_stream.rs:168
      13: core::ops::function::FnOnce::call_once<closure-0,()>
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libcore\ops\function.rs:235
      14: alloc::boxed::{{impl}}::call_once<(),FnOnce<()>>
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\liballoc\boxed.rs:787
      15: panic_unwind::__rust_maybe_catch_panic
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libpanic_unwind\lib.rs:80
      16: std::panicking::try
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\panicking.rs:275
      17: std::panic::catch_unwind
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\panic.rs:394
      18: test::run_test::run_test_inner::{{closure}}
                 at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libtest\lib.rs:1408
    note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
    
    
    failures:
        ttl
    
    test result: FAILED. 8 passed; 1 failed; 3 ignored; 0 measured; 0 filtered out
    

    Cause: According to @PerfectLaugh:

    I figured out that the Windows tcpstream::connect is not allowed to use set_ttl when you aren't connected yet, since we use non-blocking connect
    

    Solution: ?

    windows 
    opened by dtacalau 26
  • Remove net2 dependency

    Remove net2 dependency

    This initial commit only removes net2 from TcpStream::connect; on Unix platforms (except for iOS, macOS and Solaris) this reduces the number of system calls from three to two.

    @carllerche this does increase the complexity a bit, do we still want to continue down this route?

    Closes #841. Closes #1045.

    opened by Thomasdezeeuw 26
  • Redox OS support

    Redox OS support

    This was a part of my RSoC project of porting tokio and I think it's complete now :)

    Tested:

    • Two small custom examples
    • 11/13 tokio examples work (udp servers might not be working but I'm getting 0.0.0.0:0 straight from rust which indicates this is a problem with either my setup or rust itself)
    • 6/7 hyper examples work (web_api is trying to connect to itself, which I think is generally broken - you can't even communicate between two netcat instances running on redox)
    opened by jD91mZM2 25
  • Cannot register a custom Fd with the event loop

    Cannot register a custom Fd with the event loop

    It seems either impossible or undesired to register a custom Fd with the event loop, because Evented and Io names are hidden/private.

    In my case, I got an eventfd from nix and now I want to wait for it to become readable. Am I missing something?

    opened by diwic 25
  • Add TCP networking for target_os =

    Add TCP networking for target_os = "wasi"

    With

    • https://github.com/bytecodealliance/wasmtime/pull/3711
    • https://github.com/rust-lang/rust/pull/93158

    merged, mio can have limited support for TCP networking for the wasm32-wasi target.

    I also modified the tcp_server example.

    $ cargo +nightly build --target wasm32-wasi --release --example tcp_server --features="os-poll net"
       Compiling cfg-if v1.0.0
       Compiling wasi v0.10.2+wasi-snapshot-preview1
       Compiling log v0.4.14
       Compiling libc v0.2.112
       Compiling ppv-lite86 v0.2.15
       Compiling wasi v0.11.0+wasi-snapshot-preview1
       Compiling getrandom v0.2.3
       Compiling rand_core v0.6.3
       Compiling env_logger v0.8.4
       Compiling rand_chacha v0.3.1
       Compiling mio v0.8.0 (/home/harald/git/mio)
       Compiling rand v0.8.4
        Finished release [optimized] target(s) in 2.92s
    
    $ wasmtime run --tcplisten 127.0.0.1:9000 --env 'LISTEN_FDS=1' target/wasm32-wasi/release/examples/tcp_server.wasm
    Using preopened socket FD 3
    You can connect to the server using `nc`:
     $ nc <IP> <PORT>
    You'll see our welcome message and anything you type will be printed here.
    
    opened by haraldh 24
  • Support AIX operating system

    Support AIX operating system

    Besides the necessary target cfg, AIX doesn't have an epoll or kqueue API. Instead, it includes a mechanism called 'pollset' to improve the performance of poll. This PR also includes changes to support pollset (see the nginx implementation and the pollset documentation).

    Since AIX hasn't been an official target of Rust, and pollset implementation needs more work, this is in draft status for review.

    opened by ecnelises 0
  • Reconsider supported draining behaviour

    Reconsider supported draining behaviour

    For Unix (kqueue and epoll), to drain readiness all you have to do is read/write until you get a "short" read/write, i.e. fewer bytes are processed than your buffer size. However, Mio doesn't guarantee that another event is returned until the I/O operation hits a WouldBlock error. This means that to strictly follow Mio's docs you'll have to issue another pointless system call (read/write/etc.).

    This is because on Windows, Mio could only guarantee the readiness to be drained once a WouldBlock error was returned. However, nowadays we control that ourselves in IoSourceState::do_io. So, we could change it to reregister when e.g. the bytes read < buffer size.

    Also find out if all of this is true for poll (see https://github.com/tokio-rs/mio/pull/1602).

    opened by Thomasdezeeuw 3
  • Support UnixStream on Windows

    Support UnixStream on Windows

    Windows 10 added support for unix sockets in a 2018 general Windows 10 release. Building this functionality into Mio would allow support in Tokio and other downstream libs.

    This seems to be of interest in several downstream libs, example issues here:

    Mio's current AFD approach on Windows actually seems to be compatible with AF_UNIX sockets (noted e.g. in this wepoll issue).

    opened by sullivan-sean 1
  • Add NamedPipe::set_buffer_size

    Add NamedPipe::set_buffer_size

    Allows the user to control the size of the buffers used.

    @vadixidav, @mlsvrts could you try this branch? Specifically, can you try a write of 1 or more bytes followed by a small read, because after reading the code a bit more I'm not sure this will solve your problem(s).

    opened by Thomasdezeeuw 3
  • Implement new I/O-safe traits on types

    Implement new I/O-safe traits on types

    This PR adds a new feature, io_safety, which implements AsFd/AsSocket/From<OwnedFd>/From<OwnedSocket>/Into<OwnedFd>/Into<OwnedSocket> on the types within mio. Enabling this feature requires Rust 1.63 or higher.

    See also: #1588

    opened by notgull 6
Releases(v0.8.0)
  • v0.8.0(Nov 13, 2021)

    This will be Mio's last unstable version; the next version will be v1 and introduce a stable API.

    The main change in this release is the removal of the TcpSocket type; it was replaced by the socket2 crate.

    See the change log for more changes.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Mar 2, 2020)

    Version 0.7 of Mio contains various major changes compared to version 0.6. Overall a large number of API changes have been made to reduce the complexity of the implementation and remove overhead where possible.

    See the change log for details.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.21(Nov 27, 2019)

Owner: Tokio (Rust's asynchronous runtime)