rust stackful coroutine library

Overview

May

May is a high-performance library for programming stackful coroutines, with which you can easily develop and maintain massive concurrent programs. It can be thought of as a Rust version of the popular Goroutine.



Features

  • The stackful coroutine implementation is based on the generator crate;
  • Supports scheduling on a configurable number of threads for multi-core systems;
  • Supports a coroutine version of thread-local storage (CLS);
  • Supports efficient asynchronous network I/O;
  • Supports efficient timer management;
  • Supports standard synchronization primitives: semaphores, MPMC channels, etc.;
  • Supports cancellation of coroutines;
  • Supports graceful panic handling that does not affect other coroutines;
  • Supports scoped coroutine creation;
  • Supports general selection over all the coroutine APIs;
  • All the coroutine APIs are compatible with the standard library semantics;
  • All the coroutine APIs can be safely called in a multi-threaded context;
  • The stable, beta, and nightly channels are all supported;
  • x86_64 GNU/Linux, x86_64 Windows, and x86_64 macOS are all supported.

Usage

A naive echo server implemented with May:

#[macro_use]
extern crate may;

use may::net::TcpListener;
use std::io::{Read, Write};

fn main() {
    let listener = TcpListener::bind("127.0.0.1:8000").unwrap();
    while let Ok((mut stream, _)) = listener.accept() {
        go!(move || {
            let mut buf = vec![0; 1024 * 16]; // alloc in heap!
            while let Ok(n) = stream.read(&mut buf) {
                if n == 0 {
                    break;
                }
                stream.write_all(&buf[0..n]).unwrap();
            }
        });
    }
}

More examples

The CPU-heavy load examples

The I/O-bound examples


Performance

You can refer to https://tfb-status.techempower.com/ for the latest comparisons of may_minihttp with other popular frameworks.


Caveat

There is a detailed document that describes May's main restrictions. In general, there are four rules you should follow when writing programs that use coroutines:

  • Don't call thread-blocking APIs (they hurt performance);
  • Use Thread Local Storage (TLS) carefully (accessing TLS in a coroutine might trigger undefined behavior).

The following pattern is considered unsafe:

set_tls();
// Or any other coroutine API that would trigger scheduling:
coroutine::yield_now();
use_tls();

However, it is safe if your code does not depend on the previous state of TLS, or if no coroutine scheduling happens between setting and using the TLS.

  • Don't run CPU-bound tasks for a long time (though this is OK if you don't care about fairness);
  • Don't exceed the coroutine stack. Each coroutine stack has a guard page; a stack overflow will trigger a segmentation fault.

Note:

The first three rules are common to cooperative asynchronous libraries in Rust; even futures-based systems have the same limitations. So what you should really focus on is the coroutine stack size: make sure it's big enough for your application.


How to tune a stack size

If you want to tune your coroutine's stack size, please check out this document.
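As an untested sketch (the setter name follows may's config API; the value here is purely illustrative), the default stack size is set globally before any coroutine is spawned:

```rust
// Illustrative only: configure the default coroutine stack size
// once, at startup, before spawning any coroutines.
may::config().set_stack_size(0x2000);
```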


License

May is licensed under either of the following, at your option:

Comments
  • Undefined behavior invoked by moving stacks between threads

A coroutine's stack can be moved from one thread to another. However, a stack may contain values that are not Send, which breaks Rust's thread-safety guarantees.

Here is one example:

    extern crate may;
    
    use std::cell::RefCell;
    use std::rc::Rc;
    use std::thread::ThreadId;
    
    use may::coroutine;
    use coroutine::yield_now;
    
    thread_local!(static ID: RefCell<Option<Rc<ThreadId>>> = RefCell::new(None));
    
    fn main() {
        may::config().set_io_workers(60);
        may::config().set_workers(60);
        let h = coroutine::spawn(move || {
            let v = (0..10000)
                .map(|i| {
                    coroutine::spawn(|| {
                        let handle = Rc::new(std::thread::current().id());
                        ID.with(|id| {
                            *id.borrow_mut() = Some(Rc::clone(&handle));
                        });
                        for _ in 0..10000 {
                            if *handle != std::thread::current().id() {
                                println!("Access to Rc content without a mutex from a different thread, {:?} vs {:?}",
                                         *handle, std::thread::current().id());
                            }
                            yield_now();
                        }
                    })
                })
                .collect::<Vec<_>>();
            for i in v {
                i.join().unwrap();
            }
        });
        h.join().unwrap();
    }
    

If something else were accessing the Rc (like making copies of it) from the original thread (which it could, because it is accessible in the thread local storage), it would be undefined behaviour: both the coroutine that moved and the code in that thread (possibly another coroutine) could be accessing the Rc's reference counters at the same time, or the data inside, which could be, for example, a RefCell.

    Now, suggesting not to use thread local storage doesn't solve anything, because:

    • I might not use thread local storage myself, but I certainly can't be expected to audit all the libraries I use for thread local storage. Even the standard library uses thread local storage internally.
    • There are things that are not Send for other reasons. One example is Zero-MQ sockets, which blow up the whole application if ever touched from a different thread than the one they were created in.

    I believe this problem is fundamental to any attempt to move stacks between threads in Rust. Such a thing simply breaks the Rust contract.

    So my only suggestion is to create the coroutine in one thread (it is possible to check that nothing non-Send crosses the closure boundary, so the closure can be safely sent to another thread) and then pin it to that thread.

    question 
    opened by vorner 18
  • Generic Wrapper Type that can be used in coroutine context

    Any type that can be converted to a raw fd/handle could be wrapped in a generic wrapper type that automatically gets non-blocking-style Read/Write in a coroutine context.

    request 
    opened by Xudong-Huang 8
  • `may_http` takes much longer than expected

    Making 10 HTTP requests takes 10+ seconds rather than about 1 second. This is my toml:

    [dependencies]
    docopt = "1"
    serde = "1"
    serde_derive = "1"
    http = "0.2"
    may = { git = "https://github.com/Xudong-Huang/may.git" }
    may_http = { git = "https://github.com/rust-may/may_http" }
    

    Here is the code that makes the requests:

    #[macro_use]
    extern crate may;
    extern crate may_http;
    
    use std::io::{self, Read};
    use std::time::{SystemTime, UNIX_EPOCH};
    
    use http::Uri;
    use may::coroutine::*;
    use may_http::client::*;
    
    fn main() -> io::Result<()> {
        // see https://github.com/dragon-zhang/kotlin-study/blob/master/provider/src/main/kotlin/com/example/demo/TestController.kt
        let uri: Uri = "http://127.0.0.1:8081/rust".parse().unwrap();
        let mut client = {
            let host = uri.host().unwrap_or("127.0.0.1");
            let port = uri.port_u16().unwrap_or(80);
            HttpClient::connect((host, port))?
        };
    
        let all_start = current_time_millis();
        let mut s = String::new();
        for i in 0..10 {
            let uri = uri.clone();
            let start = current_time_millis();
            let mut rsp = client.get(uri)?;
            let end1 = current_time_millis();
            rsp.read_to_string(&mut s)?;
            let end2 = current_time_millis();
            println!("{} get response={}, request cost{}ms read cost{}ms",
                     i, s, end1 - start, end2 - end1);
            s.clear();
        }
        println!("all cost{}ms", current_time_millis() - all_start);
        Ok(())
    }
    
    pub fn current_time_millis() -> i64 {
        let since_the_epoch = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("Time went backwards");
        let ms = since_the_epoch.as_secs() as i64 * 1000i64 + (since_the_epoch.subsec_nanos() as f64 / 1_000_000.0) as i64;
        return ms;
    }
    

    Below is the console output:

    0 get response={"param":"rust"}, request cost1014ms read cost0ms
    1 get response={"param":"rust"}, request cost1010ms read cost0ms
    2 get response={"param":"rust"}, request cost1007ms read cost0ms
    3 get response={"param":"rust"}, request cost1004ms read cost0ms
    4 get response={"param":"rust"}, request cost1003ms read cost0ms
    5 get response={"param":"rust"}, request cost1004ms read cost0ms
    6 get response={"param":"rust"}, request cost1008ms read cost0ms
    7 get response={"param":"rust"}, request cost1010ms read cost0ms
    8 get response={"param":"rust"}, request cost1010ms read cost0ms
    9 get response={"param":"rust"}, request cost1009ms read cost0ms
    all cost10080ms
    

    As you can see, 10 requests via may_http take 10+ s in total, while the same logic written in Go takes under 1.1 s. So is the problem in may, or in may_http? @Xudong-Huang

    opened by dragon-zhang 5
  • Not all coroutines get executed

    I wanted to use the go! coroutines to calculate primes with multithreading, after finding out that std::thread is slower than the go! coroutines. I replaced the thread::spawn call with go! and added the line may::config().set_workers(num_cpus::get());. The coroutines are started in a for loop that counts up to the number of CPUs, so that it starts as many workers as there are cores. Now my problem is that the number of coroutines actually running varies; basically every time I start the program it's between 3 and 6 (I have 8 cores). My question is whether I'm missing something. (link to the repo).

    opened by Trivernis 5
  • Move scheduler initialization out of line.

     name           control ns/iter  new ns/iter  diff ns/iter  diff %  speedup
    -smoke_bench    95,092,190       96,296,706      1,204,516   1.27%   x 0.99
    +smoke_bench_1  10,265,118       10,150,296       -114,822  -1.12%   x 1.01
    +smoke_bench_2  14,199,750       13,289,244       -910,506  -6.41%   x 1.07
    +smoke_bench_3  12,718,608       11,759,877       -958,731  -7.54%   x 1.08
    +spawn_bench    1,091,394        1,051,427         -39,967  -3.66%   x 1.04
    +spawn_bench_1  856,703          832,541           -24,162  -2.82%   x 1.03
    +yield_bench    6,207,276        5,747,623        -459,653  -7.41%   x 1.08
    
    opened by alkis 5
  • Question: how to get the coroutine ID? And how should I do heavy I/O, like an HTTP request or file reads/writes?

    • How do I get the coroutine's ID? (Sometimes a database transaction runs bound to a coroutine; with threads we use a thread local to save it, but I can't find such an API in may.)
    • What should I do for other heavy I/O, like sending an HTTP request or reading/writing a file? The examples don't cover this.
    • Finally, can may work together with Tokio?
    opened by zhuxiujia 4
  • Why doesn't this POST request inside go! work?

    #[macro_use]
    extern crate may;
    use may::coroutine;
    use may::coroutine::yield_now;
    use reqwest;
    use reqwest::header::*;
    use std::{
        collections::HashMap,
        error::Error,
        fs, process, str,
        sync::{Arc, Mutex},
        thread,
        time::Duration,
    };
    
    fn construct_headers() -> HeaderMap {
        let mut headers = HeaderMap::new();
        headers.insert(USER_AGENT, HeaderValue::from_static("reqwest"));
        headers.insert(CONTENT_TYPE, HeaderValue::from_static("hello"));
        headers
    }
    
    fn gen_client() -> my_client {
        let client = Arc::new(reqwest::Client::new());
        client
    }
    
    type my_client = Arc<reqwest::Client>;
    
    fn main() {
        let client = gen_client();
    
        let client_one = client.clone();
        let h = go!(move || {
            println!("hi, I'm parent");
            let client_outer = client_one.clone();
            let v = (0..3)
                .map(|i| {
                    coroutine::sleep(Duration::from_millis(1));
                    println!("---{:?}", 12);
                    let client_inner = client_outer.clone();
    
                    go!(move || {
                        println!("---{:?}", 15);
                        match i {
                            0 => {
                                println!("hi, I'm child{:?}", 0);
                                let res = req(client_inner);
                                println!("{:?}", res);
                                yield_now();
                                println!("bye from child{:?}", 0);
                            }
                            1 => {
                                println!("hi, I'm child{:?}", 1);
                                let res = req(client_inner);
                                println!("{:?}", res);
                                yield_now();
                                println!("bye from child{:?}", 1);
                            }
                            2 => {
                                println!("hi, I'm child{:?}", 2);
                                let res = req(client_inner);
                                println!("{:?}", res);
                                yield_now();
                                println!("bye from child{:?}", 2);
                            }
                            _ => {}
                        }
                    })
                })
                .collect::<Vec<_>>();
            yield_now();
            // wait child finish
            for i in v {
                i.join().unwrap();
            }
            println!("bye from parent");
        });
        h.join().unwrap();
    }
    
    fn req(client: my_client) -> String {
        let res = client
            .post("http://httpbin.org/post")
            .headers(construct_headers())
            .body("hello")
            .send()
            .unwrap()
            .text()
            .unwrap();
        res
    }
    
    
    opened by coolit 4
  • How to cancel a coroutine?

    It seems that this crate provides the ability to cancel coroutines, but I can't resolve how to do it. What is the general syntax for coroutine cancellation?
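For reference, an untested sketch of what cancellation typically looks like with this crate (the JoinHandle/coroutine()/cancel() names are my assumption from may's docs and should be checked against the current API):

```rust
// Assumed API, untested: spawn returns a JoinHandle; the underlying
// coroutine exposes an unsafe cancel() that takes effect at the next
// scheduling point (e.g. yield_now or a blocking coroutine API call).
let h = may::coroutine::spawn(|| loop {
    may::coroutine::yield_now(); // a cancellation point
});
unsafe { h.coroutine().cancel() };
let _ = h.join(); // join reports an error for a cancelled coroutine
```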

    opened by Hirrolot 4
  • Resolve cast_ref_to_mut Clippy warning

    I noticed you have disabled the cast_ref_to_mut Clippy error. I checked the code, and I found nonsensical code like the following:

    // prevent release version wrong optimization!! use a volatile read
    let push_index = unsafe { std::ptr::read_volatile(&self.push_index) };
    

    This isn't a wrong optimization; this is undefined behaviour caused by the cast_ref_to_mut violation.

    Volatile is 100% wrong when dealing with thread safety. Please consider migrating to atomic types specifically (atomic types like AtomicPtr are interior-mutability types, by the way). If you are concerned about performance, Ordering::Relaxed is pretty much free (other than disabling optimizations that would break this code anyway), so if you don't need stronger guarantees at a given point, you can simply use relaxed ordering.

    opened by xfix 4
  • Failure on Rust Nightly.

    On latest nightly (rustc 1.29.0-nightly (9fd3d7899 2018-07-07)) the compilation fails with

    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
     --> /root/.cargo/git/checkouts/may-adabe427d9527748/295494d/may_queue/src/block_node.rs:4:5
      |
    4 | use self::alloc::raw_vec::RawVec;
      |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      |
      = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/git/checkouts/may-adabe427d9527748/295494d/may_queue/src/block_node.rs:44:11
       |
    44 |     data: RawVec<T>,
       |           ^^^^^^^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/git/checkouts/may-adabe427d9527748/295494d/may_queue/src/block_node.rs:55:19
       |
    55 |             data: RawVec::with_capacity(BLOCK_SIZE),
       |                   ^^^^^^^^^^^^^^^^^^^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
     --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:8:5
      |
    8 | use alloc::raw_vec::RawVec;
      |     ^^^^^^^^^^^^^^^^^^^^^^
      |
      = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:20:10
       |
    20 |     buf: RawVec<usize>,
       |          ^^^^^^^^^^^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:26:18
       |
    26 |             buf: RawVec::with_capacity(0),
       |                  ^^^^^^^^^^^^^^^^^^^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:41:18
       |
    41 |             buf: RawVec::with_capacity(size),
       |                  ^^^^^^^^^^^^^^^^^^^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
       Compiling num-iter v0.1.37
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/git/checkouts/may-adabe427d9527748/295494d/may_queue/src/block_node.rs:63:34
       |
    63 |             let data = self.data.ptr().offset((index & BLOCK_MASK) as isize);
       |                                  ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/git/checkouts/may-adabe427d9527748/295494d/may_queue/src/block_node.rs:73:34
       |
    73 |             let data = self.data.ptr().offset((index & BLOCK_MASK) as isize);
       |                                  ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/git/checkouts/may-adabe427d9527748/295494d/may_queue/src/block_node.rs:84:36
       |
    84 |         let mut p_data = self.data.ptr().offset((start & BLOCK_MASK) as isize);
       |                                    ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:51:31
       |
    51 |             let buf = stk.buf.ptr();
       |                               ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:63:36
       |
    63 |             let mut ptr = self.buf.ptr();
       |                                    ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:69:28
       |
    69 |         let cap = self.buf.cap();
       |                            ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:79:18
       |
    79 |         self.buf.cap()
       |                  ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:84:27
       |
    84 |         unsafe { self.buf.ptr().offset(self.buf.cap() as isize) as *mut usize }
       |                           ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:84:49
       |
    84 |         unsafe { self.buf.ptr().offset(self.buf.cap() as isize) as *mut usize }
       |                                                 ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error[E0658]: use of unstable library feature 'raw_vec_internals': implemention detail
      --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/generator-0.6.9/src/stack.rs:90:18
       |
    90 |         self.buf.ptr()
       |                  ^^^
       |
       = help: add #![feature(raw_vec_internals)] to the crate attributes to enable
    error: aborting due to 11 previous errors
    
    opened by ArtemGr 4
  • linux: file descriptor leak in epoll code

    Just something I noticed while browsing the source code:

    https://github.com/Xudong-Huang/may/blob/ba7292065254c2b843884e61095090363b0e6681/src/io/sys/unix/epoll.rs#L39-L40

    The epoll fd (it's a RawFd) is leaked when the eventfd system call fails.

    Something else I noticed is that file descriptors are created without the close-on-exec flag, meaning they leak into child processes. I don't know if fork safety is a design consideration for may but I figured I'd point it out.

    opened by bnoordhuis 3
  • How does it actually work

    Hi, this looks really cool, but I'm struggling to understand what exactly may does under the hood to make things work. I couldn't find any documentation about this, except for the fact that it apparently uses some generator framework of yours, which somehow supports coroutines.

    How is the runtime engine different from Rust's async, how does it compare to Go's Goroutines and Java's Fibers?

    opened by piegamesde 0
  • Question about benchmark result.

    This is what I got when running the benchmark on my MacBook Pro (15-inch, 2018, i7, 16GB DDR4):

    The result is 2.8x slower than 4190944 even though it is running in release mode. What possible tunings have I missed?

    opened by abbychau 6
  • Updating the comparison with Tokio's new (2019-10-13) scheduler

    Tokio is switching to a new scheduler implementation. Read "Making the Tokio Scheduler 10x faster" for the details.

    As such, it would be nice if a new comparison/benchmark between May and Tokio could take place, since the 'Performance' section of the current README will otherwise become outdated.

    opened by Qqwy 6
  • linux AIO support

    Just read https://www.usenix.org/system/files/fast19-kourtis.pdf and it makes a really compelling case for the combination of stackful coroutines + proper linux AIO (and eventually SPDK support). This is a combination I could actually imagine myself using in sled, where I'm now trying to scale toward a many-core architecture, but don't want to pay the various ergonomic costs associated with the popular async stuff in the rust ecosystem right now.

    https://github.com/hmwill/tokio-linux-aio may be a nice reference for building linux AIO support for May.

    Would you be interested in having AIO support in May directly, or do you see this as something better implemented in a separate library? Very curious about this :)

    opened by spacejam 4