Coroutine I/O for Rust

Coroutine scheduling with work-stealing algorithm.

WARN: May crash because of TLS inlining; check https://github.com/zonyitoo/coio-rs/issues/56 for more detail!

Features

  • Non-blocking I/O
  • Work-stealing coroutine scheduling
  • Asynchronous computing APIs

Usage

Note: You must use nightly Rust to build this project.

[dependencies.coio]
git = "https://github.com/zonyitoo/coio-rs.git"

Basic Coroutines

extern crate coio;

use coio::Scheduler;

fn main() {
    Scheduler::new()
        .run(|| {
            for _ in 0..10 {
                println!("Heil Hydra");
                Scheduler::sched(); // Yields the current coroutine
            }
        })
        .unwrap();
}
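
The same scheduler can multiplex many coroutines over a pool of worker threads. The following is a sketch, not an official example; it only combines APIs already shown in this README (Scheduler::with_workers, Scheduler::spawn and Scheduler::sched).

extern crate coio;

use coio::Scheduler;

fn main() {
    // Four worker threads; idle workers may steal runnable coroutines.
    Scheduler::new()
        .with_workers(4)
        .run(|| {
            for id in 0..8 {
                Scheduler::spawn(move || {
                    for step in 0..3 {
                        println!("coroutine {} at step {}", id, step);
                        Scheduler::sched(); // Yield so other coroutines can run
                    }
                });
            }
        })
        .unwrap();
}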

TCP Echo Server

extern crate coio;

use std::io::{Read, Write};

use coio::net::TcpListener;
use coio::{spawn, Scheduler};

fn main() {
    // Spawn a coroutine for accepting new connections
    Scheduler::new().with_workers(4).run(move|| {
        let acceptor = TcpListener::bind("127.0.0.1:8080").unwrap();
        println!("Waiting for connection ...");

        for stream in acceptor.incoming() {
            let (mut stream, addr) = stream.unwrap();

            println!("Got connection from {:?}", addr);

            // Spawn a new coroutine to handle the connection
            spawn(move|| {
                let mut buf = [0; 1024];

                loop {
                    match stream.read(&mut buf) {
                        Ok(0) => {
                            println!("EOF");
                            break;
                        },
                        Ok(len) => {
                            println!("Read {} bytes, echo back", len);
                            stream.write_all(&buf[0..len]).unwrap();
                        },
                        Err(err) => {
                            println!("Error occurs: {:?}", err);
                            break;
                        }
                    }
                }

                println!("Client closed");
            });
        }
    }).unwrap();
}
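
A matching client can be written in the same style. This sketch assumes coio::net::TcpStream exists and mirrors std::net::TcpStream's connect/read/write API; the type is not shown in this README, so treat that as an assumption.

extern crate coio;

use std::io::{Read, Write};

use coio::net::TcpStream; // assumed to mirror std::net::TcpStream
use coio::Scheduler;

fn main() {
    Scheduler::new().run(|| {
        let mut stream = TcpStream::connect("127.0.0.1:8080").unwrap();

        // Send one message and read the echo back.
        stream.write_all(b"ping").unwrap();

        let mut buf = [0u8; 1024];
        let len = stream.read(&mut buf).unwrap();
        println!("Echoed {} bytes", len);
    }).unwrap();
}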

Exit the main function

Exiting the main function will cause all pending coroutines to be killed.

extern crate coio;

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::Duration;

use coio::Scheduler;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let cloned_counter = counter.clone();

    let result = Scheduler::new().run(move|| {
        // Spawn a new coroutine
        Scheduler::spawn(move|| {
            struct Guard(Arc<AtomicUsize>);

            impl Drop for Guard {
                fn drop(&mut self) {
                    self.0.store(1, Ordering::SeqCst);
                }
            }

            // If the _guard is dropped, it will store 1 to the counter
            let _guard = Guard(cloned_counter);

            coio::sleep(Duration::from_secs(10));
            println!("Not going to run this line");
        });

        // Exit right now, which will cause the coroutine to be destroyed.
        panic!("Exit right now!!");
    });

    // The coroutine's stack is unwound properly
    assert!(result.is_err() && counter.load(Ordering::SeqCst) == 1);
}
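
If coroutines must finish before the scheduler shuts down, the main coroutine can instead wait for them explicitly. A minimal sketch using coio::sync::mpsc (the same channel API that appears in the issues below), assuming it behaves like std's mpsc:

extern crate coio;

use coio::sync::mpsc;
use coio::Scheduler;

fn main() {
    Scheduler::new().run(|| {
        let (tx, rx) = mpsc::channel();

        Scheduler::spawn(move || {
            // Do some work, then report completion over the channel.
            tx.send(42usize).unwrap();
        });

        // Block the main coroutine until the worker has finished,
        // so the worker is not killed by an early exit.
        assert_eq!(rx.recv().unwrap(), 42);
    }).unwrap();
}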

Basic Benchmarks

See benchmarks for more details.

Comments
  • Segfault due to stack size limit when recursing?

    extern crate coio;

    use coio::Scheduler;

    fn crazy_recurse(d: usize) -> usize {
        if d == 0 { d }
        else { 1 + crazy_recurse(d - 1) }
    }

    fn main() {
        println!("Heil Hydra {}", crazy_recurse(16000));
        Scheduler::new().with_workers(1)
            .run(|| {
                for _ in 0..4 {
                    Scheduler::spawn(move || {
                        println!("Heil Hydra {}", crazy_recurse(1964)); // 1963 works fine!
                    });
                }
            }).unwrap();
    }
    

    The issue occurs only in debug builds, probably because the recursion gets optimized out in release builds. If this can't be resolved quickly, I think it is worth a mention in the readme.
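
    Until coroutine stack sizes are configurable, one workaround is to avoid deep recursion inside coroutines. For illustration only (this sketch is not part of coio or the issue), the recursion above can be rewritten iteratively so stack usage stays constant:

    // Iterative equivalent of crazy_recurse: constant stack usage,
    // safe to call from a coroutine with a small stack.
    fn crazy_iterate(d: usize) -> usize {
        let mut acc = 0;
        for _ in 0..d {
            acc += 1;
        }
        acc
    }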

    opened by critiqjo 22
  • Performance regression since 78687e7

    Up until e63031c (inclusive) the current benchmark for coio-tcp-echo-server showed (subjectively) good results:

    Speed: 75640 request/sec, 75640 response/sec
    Requests: 2269208
    Responses: 2269208
    

    Since then performance has dropped significantly:

    Speed: 1526 request/sec, 1526 response/sec
    Requests: 45801
    Responses: 45801
    

    Between 78687e7..f747ab8 all builds fail the benchmark for major or minor reasons, though: a socket already registered or No such file or directory error in earlier builds (before the change to the more recent mio version), and later because the slab is overfilled or because using too many semaphores leads to a Too many open files panic. All of this made it a bit hard for me to narrow down the cause, but I suspect it's related to the switch to a newer mio version, the shared event loop, etc.

    Any ideas?

    enhancement 
    opened by lhecker 21
  • Bad design of wait list

    In sync/mpsc.rs

    impl<T> Receiver<T> {
        pub fn recv(&self) -> Result<T, RecvError> {
            loop {
                // 1. Try receive
                match self.try_recv() {
                    Ok(v) => return Ok(v),
                    Err(TryRecvError::Empty) => {}
                    Err(TryRecvError::Disconnected) => return Err(RecvError),
                }
    
                {
                    // 2. Lock the wait list
                    let mut wait_list = self.wait_list.lock().unwrap();
    
                    // 3. Try to receive again, to ensure no one sent items into the queue while
                    //    we are locking the wait list
                    match self.try_recv() {
                        Ok(v) => return Ok(v),
                        Err(TryRecvError::Empty) => {}
                        Err(TryRecvError::Disconnected) => return Err(RecvError),
                    }
    
                    // 4. Push ourselves into the wait list
                    wait_list.push_back(unsafe {
                        Processor::current().running().expect("A running coroutine is required!")
                    });
    
                    // 5. Release the wait list
                }
    
                // NOTE: WHAT IF the thread is switched right here?
                // Then the other thread may get the coroutine in the `wait_list`, and insert it by `Processor::ready`
                // and wake it up while it is still not blocked.
                // CRASH!
    
                // 6. Yield
                Scheduler::block();
            }
        }
    }
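
    The usual cure for this kind of lost-wakeup race is to release the wait-list lock atomically with suspending the waiter, which is what std's Condvar::wait does for OS threads. For illustration only, here is a small, self-contained sketch of that principle using std primitives (plain threads, not coio coroutines):

    use std::sync::{Arc, Condvar, Mutex};
    use std::thread;

    fn main() {
        let pair = Arc::new((Mutex::new(false), Condvar::new()));
        let pair2 = pair.clone();

        let waiter = thread::spawn(move || {
            let &(ref lock, ref cvar) = &*pair2;
            let mut ready = lock.lock().unwrap();
            // `wait` releases the lock atomically with parking the thread,
            // so a waker can never observe "registered but not yet blocked".
            while !*ready {
                ready = cvar.wait(ready).unwrap();
            }
        });

        let &(ref lock, ref cvar) = &*pair;
        *lock.lock().unwrap() = true;
        cvar.notify_one();
        waiter.join().unwrap();
    }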
    
    bug help wanted 
    opened by zonyitoo 19
  • Valgrind throwing lots of errors

    When the number of coroutines spawned is greater than a certain value, a lot of "use of uninitialized value" errors are thrown by valgrind.

    Errors are thrown only when 15 or more coroutines are spawned:

    extern crate coio;
    
    use coio::{Scheduler, sleep_ms};
    
    fn main() {
        Scheduler::new()
            .with_workers(1)
            .run(|| {
                for _ in 0..15 {
                    Scheduler::spawn(move || {
                        println!("Heil Hydra");
                        sleep_ms(100);
                    });
                }
            })
            .unwrap();
    }
    

    If there are more workers, I can spawn more coroutines without errors, but I do reach a limit (e.g. 19 for 2 workers). Despite these errors, the program gives the expected output and exits without any segfaults.

    Here are a few lines (out of 11901) from the log:

    ...
    ==13148== Warning: client switching stacks?  SP change: 0x5f134e0 --> 0x407bff8
    ==13148==          to suppress, use: --max-stackframe=32077032 or greater
    ==13148==          further instances of this message will not be shown.
    ==13148== Thread 2 Worker 0:
    ...
    ==13148== Invalid read of size 8
    ==13148==    at 0x1451AF: runtime::processor::_$LT$impl$GT$::running::hbdb031c50523a379rtc (processor.rs:190)
    ==13148==  Address 0x5f13f58 is in a rw- anonymous segment
    ==13148== 
    ==13148== Invalid read of size 8
    ==13148==    at 0x1451B9: runtime::processor::_$LT$impl$GT$::running::hbdb031c50523a379rtc (processor.rs:190)
    ==13148==  Address 0x5f13f60 is in a rw- anonymous segment
    ==13148== 
    ==13148== Invalid read of size 8
    ==13148==    at 0x1451EA: _$LT$impl$GT$::insert::insert::h14411986088127779969 (lib.rs:133)
    ==13148==  Address 0x5f14028 is in a rw- anonymous segment
    ==13148== 
    ...
    ==13148== Conditional jump or move depends on uninitialised value(s)
    ==13148==    at 0x56E26EF: pthread_join (in /usr/lib/libpthread-2.22.so)
    ==13148==    by 0x140862: thread::_$LT$impl$GT$::join::join::h17510934181584830106 (mod.rs:574)
    ==13148==    by 0x1407DE: thread::_$LT$impl$GT$::join::join::h10846901181898427765 (mod.rs:604)
    ==13148==    by 0x113AA5: scheduler::_$LT$impl$GT$::run::run::h12081195173303708932 (scheduler.rs:210)
    ==13148==    by 0x11219A: main::h28feaf366a048100iaa (main.rs:6)
    ==13148==    by 0x196324: sys_common::unwind::try::try_fn::h10038815114197766760 (in /home/john/dev/coio-test/target/debug/coio-test)
    ==13148==    by 0x193B78: __rust_try (in /home/john/dev/coio-test/target/debug/coio-test)
    ==13148==    by 0x195FC6: rt::lang_start::hbdf4b213a64b8c7394x (in /home/john/dev/coio-test/target/debug/coio-test)
    ==13148==    by 0x146DE9: main (in /home/john/dev/coio-test/target/debug/coio-test)
    ==13148== 
    ...
    
    help wanted question in progress 
    opened by critiqjo 19
  • Error in compilation - simd v0.1.0

    I tried to use this library and I got a compilation error:

    Compiling simd v0.1.0 (https://github.com/huonw/simd#be424212)
     Running `rustc /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/lib.rs --crate-name simd --crate-type lib -g -C metadata=8818301eefd37519 -C extra-filename=-8818301eefd37519 --out-dir /home/grzegorz/Pulpit/rust_server/coio-test/target/debug/deps --emit=dep-info,link -L dependency=/home/grzegorz/Pulpit/rust_server/coio-test/target/debug/deps -L dependency=/home/grzegorz/Pulpit/rust_server/coio-test/target/debug/deps -Awarnings`
    

    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/lib.rs:169:8: 169:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/lib.rs:169 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse2.rs:9:8: 9:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse2.rs:9 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse2.rs:18:8: 18:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse2.rs:18 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse3.rs:4:8: 4:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse3.rs:4 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/ssse3.rs:10:8: 10:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/ssse3.rs:10 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse4_1.rs:5:8: 5:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse4_1.rs:5 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse4_2.rs:4:8: 4:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/sse4_2.rs:4 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/avx.rs:12:8: 12:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/avx.rs:12 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/avx2.rs:4:8: 4:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/x86/avx2.rs:4 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/arm/neon.rs:37:8: 37:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/arm/neon.rs:37 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/aarch64/neon.rs:38:8: 38:28 error: invalid ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found platform-intrinsic
    /home/grzegorz/.cargo/git/checkouts/simd-2a4f7a7525648cf1/master/src/aarch64/neon.rs:38 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~
    error: aborting due to 11 previous errors

    please help :)

    opened by szagi3891 17
  • Support for plain TLS (not HTTPS)

    I wish I could send a pull request instead of a request, but my attempt at coding it during a Rust meetup today miserably failed.

    Would it be possible to wrap rust-openssl's SslStream so that coio could be used for TLS in addition to TCP?

    That would be awesome.

    enhancement 
    opened by jedisct1 14
  • Update dependencies (mio 0.6, slab 0.3)

    This PR updates dependencies of this crate. I've done it to prepare the crate for some experiments with #56

    All changes keep backward compatibility, but some implementations changed. I've changed the accept methods because mio's API has changed and most of the structs used by this crate had been deprecated. Let's consider this PR as a keep-the-crate-alive change and as a first step towards the new mio API.

    opened by DenisKolodin 13
  • Added sync::mpsc::sync_channel

    This commit adapts the bounded MPSC channel "sync_channel" from the stdlib for coio, similar to the already existing unbounded "channel".

    P.S.: I only began learning Rust 3 days ago - please go easy on me. :smiley:

    opened by lhecker 12
  • Destruction of blocked coroutines is flawed

    Currently, if a Processor receives the Shutdown message it will destroy all coroutines by using panic!(). But not all coroutines will be destroyed... Some of them are in the Blocked state, have been taken out of the Processor, and are waiting in the mio EventLoop. Currently this still works okay-ish and destroys all the coroutines almost always. But what happens if a coroutine is still stuck inside an IoHandlerMessage which in turn is still inside mio's queue? Those coroutines won't be destroyed...

    I only see 2 options here:

    • Simplify the mio <> coio interaction and make it possible to destroy coroutines at all times. This would mean that we need to make sure that, for instance, mio's queue is completely empty.
    • Use Arc instead of Box, and instead of completely removing Coroutines from a Processor we would just give out an Arc reference. Thus a Processor would always know which Coroutines it owns.

    I would personally prefer the first approach, but I fear that it might lead to ugly code if we always have to make sure that coroutines are not stuck in some queue etc.

    opened by lhecker 9
  • Connections might be silently not served

    On current master (7d12511cb8440b189), the simple echo server might "forget" to serve a response to certain clients; no errors are reported on the server side, and the TCP connection is not dropped either.

    This is very racy and you might need to play with the params to trigger it more or less reliably on your machine. Params I am using on my Core i3 M370 (2 cores + 2 HT):

    server:

    cd benchmark; cargo run --bin coio-tcp-echo-server --release -- --bind 127.0.0.1:3000 -t 2
    

    client:

    cd benchmark; GOMAXPROCS=1 go run tcp_bench.go -c 3 -t 10 -a "127.0.0.1:3000" -l 4096 -d 10s
    

    New -d param specifies read/write timeout and was added in https://github.com/zonyitoo/coio-rs/pull/58

    Output:

    2016/07/10 14:01:52 read tcp 127.0.0.1:40988->127.0.0.1:3000: i/o timeout
    2016/07/10 14:01:52 read tcp 127.0.0.1:40990->127.0.0.1:3000: i/o timeout
    Benchmarking: 127.0.0.1:3000
    3 clients, running 4096 bytes, 30 sec.
    
    Speed: 18879 request/sec, 18879 response/sec
    Requests: 566395
    Responses: 566393
    

    It is not happening on every run, but I found that it happens more often when the client opens fewer connections.

    UPD1: it is not a performance issue; set the timeout to "60s" and you'll see that the test completes and no more active communication happens, yet a few connections remain waiting to read data back indefinitely.

    bug 
    opened by redbaron 8
  • Coroutines won't be destroyed if they are in channel's wait list

    This will cause memory leaks, because by the time the Sender, Receiver and SyncSender are destroyed, the Processor may already be dead.

    Reproduce code:

    extern crate coio;
    
    use coio::Scheduler;
    use coio::sync::mpsc;
    
    fn main() {
    
        Scheduler::new().with_workers(1).run(|| {
            let (tx, mut rx) = mpsc::channel();
    
            for i in 0..100 {
                let (ltx, lrx) = mpsc::channel();
                let name = format!("Coroutine #{}", i);
                Scheduler::spawn(move|| {
                    loop {
                        let value = match rx.recv() {
                            Ok(v) => v,
                            Err(..) => break,
                        };
                        println!("{} passing {:?}", name, value);
                        ltx.send(value).unwrap();
                    }
                });
    
                rx = lrx;
            }
    
            for i in 0..10 {
                println!("Master gives out {}", i);
                tx.send(i).unwrap();
                let value = rx.recv().unwrap();
                println!("Master gets {}", value);
                assert_eq!(i, value);
            }
        }).unwrap();
    
    }
    

    And add a println! in Coroutine::drop like this:

    impl Drop for Coroutine {
        fn drop(&mut self) {
            println!("Dropping!!");
            // ...
        }
    }
    

    You may only see 2 "Dropping" lines printed on the screen, because when the main function finishes, the main coroutine is destroyed. But who prints the other "Dropping"? I don't know... All I know is that most of the coroutines haven't been destroyed properly.

    opened by zonyitoo 7
  • [FATAL] Yield coroutines while stack unwinding causes panic while panicking

    Reproducible example:

    extern crate coio;
    extern crate env_logger;
    
    use coio::Scheduler;
    
    fn main() {
        env_logger::init();
    
        Scheduler::new()
            .run(|| {
                struct A;
    
                impl Drop for A {
                    fn drop(&mut self) {
                        Scheduler::sched();
                    }
                }
    
                let _a = A;
    
                panic!("PANICKED in coroutine");
            }).unwrap();
    }
    

    While the coroutine is unwinding, Drop of A will be called, and the coroutine yields inside the drop() function. The coroutine is now suspended, which means that the current thread is still in panicking status, but execution has been switched to another coroutine. If the other coroutine panics too, then it will definitely cause a panic while panicking!

    bug help wanted 
    opened by zonyitoo 2
  • Migrating a coroutine to another thread may move `!Send` object between threads

    If I understand it correctly, the coroutines here can move from thread to thread.

    Now imagine I have something that is not Send for whatever reason. And I don't mean something like Rc (which could cause havoc if I put it into TLS, but that's another issue), but something like a Zero-MQ socket, which is promised to crash the whole application if it is ever touched from a different thread. But it is on the stack, so it passes all compile-time checks.

    Moving the coroutine to another thread would now cause UB even though the user used only safe Rust.
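
    For illustration only (this sketch is not from the issue): a !Send type on the coroutine stack happily survives a yield, even though the coroutine may be resumed on a different worker thread afterwards. The compiler only checks the closure's captures, not what lives on the coroutine stack.

    extern crate coio;

    use coio::Scheduler;

    // Stand-in for a thread-affine resource (e.g. a ZeroMQ socket);
    // the raw pointer makes the type !Send.
    struct ThreadAffine(*mut ());

    fn main() {
        Scheduler::new().with_workers(2).run(|| {
            let resource = ThreadAffine(std::ptr::null_mut());

            // After this yield the coroutine may have migrated to another
            // worker thread, but `resource` is still used as if it never
            // left the original one.
            Scheduler::sched();

            let _use_it_again = &resource;
        }).unwrap();
    }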

    opened by vorner 1
  • Bug: Incompatibility of Rust's stdlib

    This bug was originally found in #53. The reason: stdlib uses a PANIC_COUNT in TLS to ensure there is no panic-while-panicking at runtime. But coroutines in coio can be migrated between threads, which turns out to cause a data race (the compiler still thinks that we are running in the same thread, so we may access the other thread's TLS without any synchronization).

    We wanted to solve the PANIC_COUNT part in https://github.com/rust-lang/rust/pull/33408, but because stdlib relies heavily on TLS (such as in println!), it will also cause random SIGSEGVs:

    println!("Before");
    Scheduler::sched(); // Switch out
    // Well, now, this Coroutine may already been stolen by the other thread
    // And then resumed by the other thread
    println!("After");
    

    Rust's compiler doesn't know that we have switched to another thread, so it may inline those TLS calls.

    We are looking for a solution for this bug and are discussing it here; if you have any ideas, please help.

    bug help wanted 
    opened by zonyitoo 13
  • Issue 51 Unwind error if the Coroutine::yield_with is inlined

    coroutine_unwind was doing undefined things.

    I've replaced it with an AtomicBool flag triggering the same unwind code, moved to yield_with. This isn't optimal yet (I'll try to get it passed through the data field as zonyitoo suggests), but it does pass all tests even with #[inline(always)].

    Still needs:

    • [x] data field
    • [ ] decide on proper inline attribute
    • [x] out-line the cold branch (probably)
    • [x] delete commented-out code
    opened by jClaireCodesStuff 7
  • Unwind error if the Coroutine::yield_with is inlined

    Now we can only add #[inline(never)] on the Coroutine::yield_with method.

    You can reproduce it anytime by replacing it with #[inline(always)] and then running cargo test --release; you will see the coroutine_unwinds_on_drop test fail.

    $ cargo test --release
    ...
    test coroutine::test::coroutine_unwinds_on_drop ... FAILED
    ...
    
    failures:
    
    ---- coroutine::test::coroutine_unwinds_on_drop stdout ----
        thread 'coroutine::test::coroutine_unwinds_on_drop' panicked at 'assertion failed: `(left == right)` (left: `0`, right: `1`)', src/coroutine.rs:593
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    
    failures:
        coroutine::test::coroutine_unwinds_on_drop
    
    test result: FAILED. 26 passed; 1 failed; 0 ignored; 0 measured
    
    error: test failed
    
    bug 
    opened by zonyitoo 5
  • Coroutine local variables

    Right now in coio, a Coroutine is the minimum execution unit. It would be nice if we could have a coroutine_local! macro implementation, just like thread_local! in std.

    Implementation detail will be discussed later.
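
    For reference, std's thread_local! is the model being asked for: one instance per OS thread, accessed through with. The requested coroutine_local! would presumably provide one instance per coroutine instead, surviving migration between worker threads. A runnable example of the std macro:

    use std::cell::RefCell;

    thread_local! {
        // One COUNTER per OS thread; a coroutine_local! would give
        // one per coroutine instead.
        static COUNTER: RefCell<u32> = RefCell::new(0);
    }

    fn main() {
        COUNTER.with(|c| *c.borrow_mut() += 1);
        COUNTER.with(|c| println!("counter = {}", c.borrow()));
    }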

    enhancement low 
    opened by zonyitoo 1